On January 7, 2025, Mark Zuckerberg, CEO of Meta, the company that owns Facebook, Instagram, and Threads, announced changes to content moderation policies across all of its platforms. The most notable update is the switch from a third-party fact-checking program to a system similar to Community Notes, the feature used by the platform X (formerly known as Twitter). Zuckerberg said the decision was driven by a desire to reduce censorship and return to genuine free speech. He also plans to lift the restrictions on political content introduced in 2021, once again surfacing political and activist posts for users who actively engage with such content.
On X, users can add notes under posts to highlight inaccuracies or provide additional context. A note becomes visible to the wider audience only after enough other users rate it as helpful. Meta has not yet released the exact details of how this system will operate on its platforms, but based on the available information, it is expected to follow similar logic.
Example of a Community Note on the social network X
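Meta has not published the mechanics, but X has open-sourced its Community Notes ranking, which uses a matrix-factorization "bridging" model so that a note goes live only when raters who usually disagree both find it helpful. The Python sketch below is a deliberately simplified illustration of that bridging idea, not X's actual algorithm or Meta's planned one; the thresholds, cluster labels, and function names are all hypothetical.

```python
from collections import defaultdict

# Illustrative sketch only. X's real ranking is a matrix-factorization
# model; here the "bridging" idea is reduced to a simple rule: a note is
# shown only when raters from more than one viewpoint cluster rate it
# helpful. MIN_RATINGS and HELPFUL_THRESHOLD are invented values.

MIN_RATINGS = 5          # hypothetical minimum number of ratings
HELPFUL_THRESHOLD = 0.7  # hypothetical per-cluster helpfulness bar

def note_is_visible(ratings):
    """ratings: list of (rater_cluster, is_helpful) tuples.

    rater_cluster labels the rater's estimated viewpoint group;
    is_helpful is True if the rater marked the note 'helpful'.
    """
    if len(ratings) < MIN_RATINGS:
        return False  # too few ratings: the note stays pending

    by_cluster = defaultdict(list)
    for cluster, is_helpful in ratings:
        by_cluster[cluster].append(is_helpful)

    if len(by_cluster) < 2:
        return False  # no cross-perspective signal yet

    # Require a high helpful rate inside every represented cluster,
    # so a single like-minded group cannot push a note live on its own.
    return all(sum(votes) / len(votes) >= HELPFUL_THRESHOLD
               for votes in by_cluster.values())

# Helpful across two clusters -> visible
print(note_is_visible([("A", True), ("A", True), ("B", True),
                       ("B", True), ("B", True)]))              # True
# Only one cluster rates it helpful -> stays hidden
print(note_is_visible([("A", True)] * 5 + [("B", False)] * 3))  # False
```

The detail worth noticing is the per-cluster bar: under bridging-style rules, raw vote counts alone never decide visibility.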
Mixed Reactions
The company’s new policy has sparked heated debate among both global cybersecurity experts and ordinary users, as large-scale changes to information management algorithms can have significant consequences for the security of the platform’s users.
Some observers praised the changes. Cenk Uygur, founder of the digital news channel The Young Turks, believes the move could help reduce the influence of so-called "old media" outlets that, in his words, "used fact-checking to suppress information instead of supporting objectivity." Randy Weber, a Republican representative from Texas, called Meta's policy change a long-awaited opportunity for Americans to "make their own decisions."
Elon Musk also commented on the news, calling the shift in Meta's moderation policies "cool" in a post on his X account.
However, not everyone supported the change. Neil Brown, president of the Poynter Institute for Media Studies, criticized Zuckerberg for “demonizing” fact-checking and exaggerating its role in censoring content on Meta platforms. He stated, “Facts are not censorship. Fact-checkers have never censored anything. It’s time to stop using provocative and misleading terminology to describe the role of journalists and fact-checking.”
Maxym Savanevsky, CEO of the communications agency PlusOne, expressed concerns that replacing fact-checking could lead to chaos and sharp societal polarization, especially in the United States.
The biggest concern raised by experts is the effectiveness of the community notes system and its capacity to handle the volume of posts Meta's users generate. While community notes have the potential to create a more objective and "free" environment, studies of the approach on X have revealed significant shortcomings. For instance, the approval process for notes is often slow, so considerable time can pass between the spread of potentially false content and its correction by the community. Moreover, valuable notes may never be published because the users evaluating them fail to reach consensus. Valerie Wirtschafter of the Brookings Institution argued that launching such a system without prior testing is irresponsible on Meta's part.
Possible Motives Behind Zuckerberg’s Decision
Mark Zuckerberg's motivations for the policy change have also been a subject of debate. While the entrepreneur and his team cite a desire to promote free speech, some experts believe the decision is rooted in pragmatism. Ihor Rozkladay, deputy director of the Center for Democracy and Rule of Law, suggested that Meta made the change because the algorithms it uses for content moderation were overloaded: in his view, the company relied on them too heavily, and as the volume of automatically filtered content grew, it began to hurt the business.
Others link the changes to Donald Trump's victory in the presidential election. Observers note a sudden improvement in relations between Trump and Zuckerberg, following Trump's earlier threats to imprison Meta's CEO if he meddled in U.S. elections.
Nu Wexler, a communications policy consultant who has worked with Facebook, Twitter, and Google, offered another perspective: "Fact-checking on social media works in theory, and platforms trying to implement it deserve more recognition. But such a policy is almost impossible in the current (political) climate, and it has become a sore point for companies on Capitol Hill."
Meta founder Mark Zuckerberg at Donald Trump's inauguration, January 20, 2025
Risks and Challenges
Whatever Zuckerberg's motivations, the decision to abandon the third-party fact-checking program is highly controversial. On one hand, reducing algorithmic moderation could indeed cut the amount of content that is unfairly "censored" and restore reach to accounts unjustly shadow-banned. This may be particularly beneficial for Ukrainian and pro-Ukrainian organizations and bloggers, who often lose visibility when algorithms flag their content about the Russia-Ukraine war as "politicized."
However, the number of harmful and abusive posts previously filtered out by strict controls could significantly increase. Casey Newton, technology journalist and founder of Platformer, shared a quote from a former Meta employee: “I can’t overstate how much damage non-illegal but harmful content can do. It’s humiliating, awful content that leads to violence and is created to harm people deliberately.”
Another challenge is combating AI-generated content. Under the new system, effective filtering of such material will depend heavily on users' ability to recognize AI-generated posts. A 2022 online study of 3,000 participants from the U.S., Germany, and China found that participants identified AI-generated material with no better than 50% accuracy. This finding highlights the risk of AI being used to manipulate public sentiment and spread disinformation on social media.
The intensification of hybrid aggression on social media is another looming threat. Russian propaganda has been known to use bot farms to manipulate public sentiment and engage in political interference. While community notes may reduce unjust censorship caused by algorithmic errors or subjective evaluations of content “bias,” their ability to filter disinformation and propaganda remains questionable.
Alex Mahadevan, director of MediaWise, a digital literacy initiative at the Poynter Institute, expressed concern about the Community Notes system's reliance on user consensus: such a system may not scale effectively to large platforms, and groups of users could deliberately manipulate notes to promote particular narratives regardless of their accuracy or objectivity, as the toy example below illustrates.
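A toy comparison makes the manipulation risk concrete. Assuming a naive majority rule (hypothetical here; it is not how X actually ranks notes), a coordinated bloc of accounts can push a note live, while a cross-cluster rule like the bridging sketch above blocks the same ballot-stuffing. All numbers and function names are invented for illustration.

```python
# Toy illustration of the manipulation concern; every value is made up.

def majority_rule(votes):
    """votes: list of (cluster, is_helpful). Show note on a raw majority."""
    helpful = sum(1 for _, is_helpful in votes if is_helpful)
    return helpful / len(votes) > 0.5

def cross_cluster_rule(votes, threshold=0.7):
    """Show note only if every rater cluster clears the helpfulness bar."""
    clusters = {}
    for cluster, is_helpful in votes:
        clusters.setdefault(cluster, []).append(is_helpful)
    return len(clusters) >= 2 and all(
        sum(v) / len(v) >= threshold for v in clusters.values())

# 60 coordinated accounts up-vote a misleading note; 40 organic raters reject it.
votes = [("bloc", True)] * 60 + [("organic", False)] * 40
print(majority_rule(votes))       # True: the bloc wins a raw majority
print(cross_cluster_rule(votes))  # False: organic raters veto the note
```

Even this crude cross-cluster check resists the attack, but only as long as the clustering itself cannot be gamed, which is precisely where skeptics expect coordinated campaigns to aim next.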
Overall, while the community notes system has certain advantages, it poses significant risks to the modern digital landscape. The lack of clear crisis-management protocols and of tools to counter hybrid aggression and AI-generated content raises serious concerns, opening new "digital doors" for authoritarian regimes and criminal organizations.
By Viktoriia Odusanvo