For the past year, thousands of social media influencers and regular users of Instagram have been falsely banned. Instagram's new Meta AI has flagged accounts for "violating community standards" even though most of those pages were following the platform's key guidelines. The system has been misidentifying the content users publish, and it has become a genuine problem affecting millions of people. Many are enraged because Instagram is their main source of income, and these false, unexplained bans cut off their primary revenue stream.
For years, Instagram and many other platforms have been searching for an automated way to detect violations of their community guidelines. But at what cost? This new use of Meta AI does not stop at Instagram; it is also deployed on other platforms owned by Meta, including Facebook. The AI is intended to automatically flag threats, abusive behavior, hate speech, and child sexual exploitation, and to limit integrity violations. The intended job of the new AI sounds great; the results we have seen are utterly terrible. While the AI makes moderation easier, it also misidentifies a great deal of content, leading to numerous false bans, and because support itself is AI-driven, most people never get the help they need to recover their falsely banned accounts.
Thousands of users have come forward with stories of being falsely banned by this new Meta AI, reporting lost jobs and even damaged reputations as a result. As these accusations pile up, people have to wonder where the Meta support team is. Meta has yet to publicly address the absurd number of false bans happening on its platforms, which shows a lack of concern. The bans have continued over the span of a year, and not once has the founder or anyone in a position of authority spoken out to calm users' nerves. When people do reach support, they tend to receive the same unhelpful responses that feel "AI generated."
The new AI system is also prone to bias: the automated bans target many family channels and lifestyle influencers who feature their underage children, because their content is deemed "CSE" (Child Sexual Exploitation). This is especially damaging because most family influencers rely on social media platforms for revenue, and these false bans halt their growth. Add to that the terrible support and the length of time it takes just to file an appeal, which does not even guarantee the account will be reinstated.
Many believe this is simply a scheme to push users toward Meta's new paid subscription, Meta Verified. The suspicion arises because being Meta Verified is the only way to speak with a live human support agent, yet even those support workers have been known to give what feel like AI-generated responses. All of this leads users to believe that the false bans are just a ploy to get them to pay for Meta Verified.