
Meta has significantly altered its content moderation policies, eliminating its third-party fact-checking program and moving toward community-driven moderation, a shift that has sparked alarm among human rights advocates, researchers, and policymakers. Experts warn that the decision could facilitate the spread of hate speech, misinformation, and incitement to violence, threatening efforts to promote sustainable development, social justice, and human rights worldwide.
A dangerous shift in policy
On 7 January 2025, Meta CEO Mark Zuckerberg announced a sweeping overhaul of the company’s content moderation policies. Under the new framework, the tech giant will no longer work with third-party fact-checkers to verify the accuracy of content on its platforms, including Facebook and Instagram. Instead, the company will rely more heavily on automated systems and community-based moderation, modeled on the “Community Notes” approach used by X, a strategy widely criticized for its susceptibility to manipulation and bias.
The move comes amid a growing global debate over the responsibility of tech companies in curbing online harms. In recent years, Meta has faced intense scrutiny over its role in amplifying misinformation, fueling political polarization, and failing to curb hate speech. Critics argue that the latest changes will further erode the company’s already fragile safeguards, making it easier for bad actors to exploit the platform.
“This is a dangerous step backward,” said David Kaye, a former United Nations Special Rapporteur on Freedom of Expression. “Meta’s decision to abandon fact-checking will allow disinformation to flourish unchecked. The consequences could be devastating, particularly in regions with weak institutions and ongoing conflicts.”
A history of failing to prevent harm
Meta’s track record on content moderation has been marred by multiple controversies, some of which have had deadly consequences. Perhaps the most glaring example is the platform’s role in the 2017 Rohingya crisis in Myanmar. During that period, Facebook became a conduit for virulent anti-Rohingya hate speech, which human rights organizations argue helped incite mass violence against the persecuted minority group. A 2018 United Nations report concluded that Facebook had played a “determining role” in the atrocities, failing to act despite clear warnings from human rights observers.
“The lessons from Myanmar should have been a wake-up call,” said Deborah Brown, a researcher with Human Rights Watch. “Instead, Meta seems to be abandoning even the minimal safeguards that were put in place in the aftermath.”
Despite public apologies and promises to do better, Meta has continued to struggle with content moderation, particularly in non-English-speaking regions where it has limited staff and resources dedicated to identifying harmful content. Reports from watchdog organizations indicate that hate speech, incitement to violence, and extremist propaganda remain widespread on the platform in countries such as Ethiopia, India, and Brazil.
Global implications and threats to sustainable development
The ramifications of Meta’s policy shift extend far beyond the digital sphere, with potential consequences for global governance, social cohesion, and human rights. By dismantling its fact-checking mechanisms, Meta is not only enabling the spread of disinformation but also undermining key global efforts to combat climate change, economic inequality, and gender-based violence.
Among the United Nations’ Sustainable Development Goals (SDGs), those promoting peace, justice, and strong institutions (SDG 16) and affordable and clean energy (SDG 7) are particularly at risk. Misinformation about climate change, for instance, has already been a significant obstacle to coordinated global action. With Meta loosening its moderation policies, climate denialism and corporate greenwashing campaigns could see a resurgence, further delaying urgent policy measures.
Similarly, human rights activists fear that the changes could exacerbate gender-based violence and discrimination. Online harassment, doxxing, and hate speech targeting women and LGBTQ+ individuals have been persistent problems on Meta’s platforms. Without strong moderation, these threats are likely to escalate, discouraging marginalized voices from participating in public discourse.
“Meta is essentially giving hate groups a green light,” said Julie Owono, executive director of the digital rights group Internet Without Borders. “We will see more targeted harassment, more disinformation campaigns, and more real-world harm.”
Regulatory responses and calls for action
Governments and regulatory bodies have already begun responding to Meta’s policy shift, with European and U.S. lawmakers warning that the company could face legal and financial consequences if it fails to mitigate harm.
The European Union’s Digital Services Act (DSA), which was introduced to hold tech companies accountable for harmful content, could become a key battleground. Under the DSA, platforms like Meta are required to take proactive steps to curb illegal and harmful content. If Meta is found to be in violation, it could face hefty fines or even restrictions on operating in the European market.
In the United States, members of Congress have renewed calls for greater oversight of social media platforms, including potential legislation that would impose stricter liability rules for content amplification. Advocacy groups are also pushing advertisers to reconsider their partnerships with Meta, arguing that companies should not support a platform that enables hate and disinformation.
“We are at a crossroads,” said Senator Elizabeth Warren, a vocal critic of Big Tech. “Either Meta is held accountable for the content it amplifies, or we risk a future where disinformation and hate speech dictate global narratives.”
A critical moment for online governance
Meta’s decision to overhaul its content moderation policies represents a major inflection point in the ongoing debate over digital governance and corporate responsibility. The company’s move away from fact-checking and human oversight raises fundamental questions about the role of social media in shaping public discourse and the broader implications for democracy and human rights.
As regulatory bodies, civil society organizations, and the international community grapple with these challenges, one thing remains clear: the stakes could not be higher. If Meta continues down this path without meaningful safeguards, it risks becoming a vehicle for division, violence, and the erosion of trust in democratic institutions. The responsibility now falls on policymakers, human rights advocates, and the public to demand accountability and ensure that technology serves as a force for progress, rather than a tool for harm.