Meta has exempted some of its biggest advertisers from its standard content moderation processes, fearing that its automated moderation systems could unduly penalize leading brands and threaten its multi-billion-dollar advertising business, the Financial Times (FT) reports.
According to internal Meta documents seen by the FT, Facebook and Instagram have introduced “safeguards” for customers who spend significant amounts on the platforms. These safeguards override the platforms’ automated moderation tools: instead, top advertisers’ posts are reviewed only by humans. One internal term, “P95 buyers,” refers to customers who spend more than $1,500 a day; such accounts are exempt from automated moderation, with their posts routed to manual human review. The revelations came shortly before Meta CEO Mark Zuckerberg announced this week that the company would overhaul its fact-checking program and relax the rigor of its automated moderation tools.
According to internal documents from 2023, Meta found that its automated moderation systems were disproportionately likely to wrongly flag high-spending accounts as violating platform rules. Meta confirmed the elevated false-positive rate to the FT but did not say whether any of the “safety mechanisms” were temporary or permanent. A Meta spokesperson called the FT’s findings “simply inaccurate” and “based on a selective reading of documents that makes clear what we have said publicly: preventing errors in the application of the rules.” Advertising accounts for the bulk of Meta’s revenue, which totaled nearly $135 billion in 2023.
Safeguards for High-Spending Advertisers
To keep fraudulent or malicious content off its platforms, the company screens ads using AI-powered moderation tools and human reviewers. An internal Meta document describes seven mechanisms in place to protect business accounts that generate more than $1,200 in revenue over 56 days and individual accounts that spend more than $960 on advertising over the same period. The mechanisms help the company decide how to act on signals from its AI moderation systems; they are designed to “suppress” those signals based on criteria such as ad spend, adds NIXsolutions.
Meta acknowledged that some accounts with high ad spend are removed from automated moderation and handed over to specialized teams, but emphasized that no advertiser is exempt from the rules in force on its platforms. The safeguards for high-spending advertisers are graded “low,” “medium,” or “high” according to how “justified” they are: spend-based mechanisms are rated “low,” while mechanisms based on an advertiser’s business reliability are rated “high.” Given the thresholds specified in the documents, several thousand advertisers could have been exempted from standard moderation tools. According to Sensor Tower analysts, the top 10 advertisers on Facebook and Instagram include Amazon, Procter & Gamble, Temu, Shein, Walmart, NBCUniversal, and Google.
Proposed Changes and Ongoing Efforts
In one document, Meta employees proposed offering protection from AI moderation more proactively to clients classified as “platinum and gold buyers” of advertising, arguing that the excessive strictness of automated systems “costs Meta income and harms our reputation.” They suggested removing these high-value advertisers from a number of automated systems except in “very rare cases.” The “platinum and gold” category was not to be exempted from moderation entirely, however: in 73% of cases, the actions of the automated systems were found to be justified.
Meta’s approach is evolving, and we’ll keep you updated on any further developments. So far, the company has not clarified which of its “safety mechanisms” are temporary and which are permanent. It continues to emphasize, however, that no advertiser is completely exempt from its rules.