Free speech or free fall? What the end of Meta's fact-checking partnership portends

Meta CEO Mark Zuckerberg announced sweeping changes to the company’s moderation policies on Tuesday, saying they were needed because of changing political and social conditions and a desire to refocus on free speech.

Meta is handing over the keys to the kingdom to its users. Will this aid free speech or push us all into a free fall? A raging debate has begun.

The idea now unleashed is of a digital landscape where every post or video - even those propagating rumours, half-truths and falsehoods - is gatekept only by other users. How safe will that turn out to be? CEO Mark Zuckerberg's recent announcement that his social media giant will end its fact-checking partnerships with trusted organisations has thrown up this billion-dollar (or is it potentially trillion-dollar?) question.

Zuckerberg’s move, which he says aims to embrace free speech, shifts responsibility from professional moderators to the community itself - a model similar to X's Community Notes.

While it promises more freedom, the risk of widespread misinformation looms large - pushing us to confront the uncomfortable question: what happens when fact-checking becomes entirely user-driven?

'More speech, fewer mistakes'

Zuckerberg unveiled the changes on Tuesday, casting them as a response to a shifting political and social climate.

He said Meta, which owns Facebook and Instagram - two of the biggest social media platforms - would stop using its fact-checking program with trusted partners and instead introduce a community-driven system similar to X’s Community Notes.

“We will end the current third-party fact-checking program in the United States and instead begin moving to a Community Notes program. We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see,” Meta said in a statement.

Meta will also update its rules on political content and bring back more political posts to user feeds after reducing them in the past.

"We're going to get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms," Zuckerberg said in a video.

These changes will affect Facebook, Instagram, and Threads, which are used by billions of people worldwide. Meta has said it has “no immediate plans” to remove fact-checkers outside the US; the rest of the planned changes, however, will be implemented worldwide.

“We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing - and one that’s less prone to bias,” the Meta statement further read.

In the video, Zuckerberg went on to say that the US election was a major influence on the company's decision and accused "governments and legacy media" of pushing "to censor more and more."

So, what is this Community Notes model?

What began as Birdwatch

On X, community notes are added by users to provide extra context to posts that might be misleading, lack important details, or need further explanation. These notes appear under the post with a label saying, "Readers added context."

They usually include a brief explanation and often link to sources for support. Other users can vote on whether the added context is helpful or should be removed.

Community Notes began in 2021 under the name "Birdwatch," before Elon Musk’s leadership. Since then, it has expanded to users in 44 countries.

X users need to sign up for the Community Notes feature on the platform. Before they can write notes, they must first rate notes written by others, helping decide which ones are useful and which should be removed.

Once approved to write notes, users risk losing the privilege if their contributions are frequently rated "unhelpful."

X explains that decisions aren’t based purely on majority votes. The algorithm values agreement among users who typically disagree, aiming to prevent what it calls "manipulation."
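X has published its Community Notes ranking code as open source, and at its core is a matrix-factorisation model: each rating is explained by a global intercept, a user bias, a note bias, and a product of latent "viewpoint" factors, with a note surfaced as helpful only when its own intercept - the portion of its support that does not track any single viewpoint - is high enough. The Python sketch below is a much-simplified illustration of that bridging idea, not X's production algorithm; the toy ratings, hyperparameters and plain SGD loop are all assumptions chosen for demonstration.

```python
# A minimal sketch of the "bridging" idea behind Community Notes ranking.
# Model: rating ~ mu + b_u[user] + b_n[note] + f_u[user] . f_n[note].
# A note's intercept b_n captures support that cuts across viewpoints,
# because one-sided agreement is absorbed by the factor term.
# All data and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings: rows = users, cols = notes; 1 = "helpful", 0 = "not helpful",
# NaN = no rating. Users 0-2 and 3-5 form two opposing camps.
R = np.array([
    [1, 1, 0, np.nan],   # camp A likes notes 0 and 1
    [1, 1, np.nan, 0],
    [1, np.nan, 0, 0],
    [1, 0, 1, np.nan],   # camp B likes notes 0 and 2
    [1, 0, np.nan, 1],
    [np.nan, 0, 1, 1],
])

n_users, n_notes, k = R.shape[0], R.shape[1], 1
mu = 0.0                                 # global intercept
b_u = np.zeros(n_users)                  # user biases
b_n = np.zeros(n_notes)                  # note intercepts (the bridging score)
f_u = rng.normal(0, 0.1, (n_users, k))   # user viewpoint factors
f_n = rng.normal(0, 0.1, (n_notes, k))   # note viewpoint factors

observed = [(u, n) for u in range(n_users) for n in range(n_notes)
            if not np.isnan(R[u, n])]

lr, reg = 0.05, 0.03
for _ in range(2000):                    # plain SGD on squared error
    for u, n in observed:
        err = R[u, n] - (mu + b_u[u] + b_n[n] + f_u[u] @ f_n[n])
        mu += lr * err
        b_u[u] += lr * (err - reg * b_u[u])
        b_n[n] += lr * (err - reg * b_n[n])
        f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - reg * f_u[u]),
                          f_n[n] + lr * (err * f_u[u] - reg * f_n[n]))

# Note 0 is rated helpful by both camps, so its intercept should come out
# highest; notes 1-3 please only one camp, so the factor term soaks up
# their support and their intercepts stay low.
for n in range(n_notes):
    print(f"note {n}: intercept {b_n[n]:+.2f}")
```

The design choice this toy model illustrates is the one X describes: a note that pleases only one camp has its support "explained away" by the viewpoint factors, while agreement that bridges camps has nowhere to go but the note's own intercept, so simple majority pile-ons do not win.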

However, some posts on X may end up flooded with community notes, not always for factual reasons but due to differing opinions.

An AFP report highlights the lack of solid scientific analysis on the effectiveness of Community Notes. A study published in April 2024 in the Journal of the American Medical Association reviewed Community Notes addressing misinformation about COVID-19 vaccines.

The study found that the notes were accurate, cited credible sources, and were attached to widely viewed posts. However, it did not evaluate how these notes influenced users.

Separately, Alexios Mantzarlis, a digital harm researcher at Cornell University, surveyed Community Notes on US election day, November 5, 2024. He found that only 29 per cent of posts deemed “fact-checkable” had notes rated as helpful.

Writing for the Poynter Institute, Mantzarlis observed, “If Community Notes had any impact on election information quality on X, it was minimal at best.”

Why the sudden change?

Meta's abrupt change in stance raises an obvious question: why now?

The timing is no coincidence.

In a video shared on Instagram, Zuckerberg stated, “Recent elections also feel like a cultural tipping point towards once again prioritising speech.” He also criticised fact-checkers, claiming they have been “too politically biased.”

In a post on Threads, Meta’s alternative to X, Zuckerberg pledged to reduce “censorship mistakes”, echoing US conservatives’ claims - made with little supporting evidence - that Facebook and Instagram unfairly target their views.

He criticised "legacy media" for promoting increased censorship of Trump and admitted that Meta’s earlier moderation policies "went too far", resulting in "too much censorship."

The announcement reflects what seems to be a broader shift to the right within Meta's leadership and comes as Mark Zuckerberg works to strengthen ties with Donald Trump ahead of the president-elect's inauguration later this month.

Zuckerberg also said that Meta would “get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse” and “work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more.”

Earlier, Meta revealed that Trump ally and UFC CEO Dana White would join its board alongside two other new directors. The company also announced a USD 1 million donation to Trump’s inaugural fund and said Zuckerberg intends to play an “active role” in tech policy discussions.

Joel Kaplan, a prominent Republican recently promoted to Meta’s top policy role, acknowledged that the announcement was influenced by the incoming administration.

“There’s no question that things have changed over the past four years,” Kaplan said. “We’ve faced significant societal and political pressure pushing towards more content moderation and censorship. Now, with a new administration that values free expression, we have a real opportunity.”

According to CNN, Meta informed Trump’s team about the moderation policy changes in advance.

“I watched their news conference, and I thought it was a very good news conference. I think they’ve, honestly, I think they’ve come a long way. Meta. Facebook. I think they’ve come a long way. I watched it, the man was very impressive,” Trump said in response to a question at a press conference.

Asked whether he thought Meta's decision was a direct response to threats he had made against Zuckerberg in the past, Trump said, “Probably. Yeah, probably.”

Reacting to the change, X CEO Elon Musk posted on X, saying, "This is cool."

The change marks a significant departure from the fact-checking system Meta launched in 2016, which relied on approximately 80 independent organisations globally to verify content on Facebook and Instagram.

‘World that’s right for a dictator’

“What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it’s gone too far,” Zuckerberg said.

However, he admitted there would be a “tradeoff” with the new policy, acknowledging that the changes in content moderation would lead to more harmful content appearing on the platform.

Kaplan, Meta’s newly appointed chief of global affairs, told Fox that the company’s collaborations with third-party fact-checkers were “well-intentioned at first” but had been undermined by “too much political bias in what they decide to fact-check and how they do it.”

Reacting to the announcement, Nobel Peace Prize winner Maria Ressa warned that Meta's decision to end third-party fact-checking and lift restrictions on certain topics could usher in “extremely dangerous times” for journalism, democracy, and social media users.

The Filipino-American journalist criticised Zuckerberg's move to ease content moderation on Facebook and Instagram, which she said would lead to “a world that’s right for a dictator.”

“Mark Zuckerberg says it’s a free speech issue – that’s completely wrong,” Ressa told AFP. “Only if you’re profit-driven can you claim that; only if you want power and money can you claim that. This is about safety.”

Ressa vowed to do everything in her power to “ensure information integrity.” She emphasised, “This is a pivotal year for journalism survival. We’ll do all we can to make sure that happens.”

Concerns about Meta’s policies are not new. In October, Amnesty International reported that authorities in the Philippines were using Facebook to “red-tag” young activists – a practice where individuals are labelled as “communist rebels” or “terrorists.”

In 2021, Meta whistleblower Frances Haugen raised concerns about the lack of safety controls in non-English-speaking regions such as Africa and the Middle East. She alleged that Facebook was being exploited by human traffickers and armed groups in Ethiopia.

“I did what I thought was necessary to save the lives of people, especially in the global south, who I think are being endangered by Facebook’s prioritisation of profits over people,” Haugen told The Observer.

At the time, Meta, then operating as Facebook, denied prioritising profits over safety, stating the claim was “false.” The company highlighted its $13 billion (£11 billion) investment in user protection measures.

In 2018, following the massacre of Rohingya Muslims by Myanmar's military, Facebook acknowledged that its platform had been used to “foment division and incite offline violence.”

However, three years later, Global Witness accused Facebook of promoting content that incited violence against political protesters in Myanmar. In response, Facebook claimed it had proactively detected and removed 99 per cent of hate speech content from the platform in the country.

Uncharted territory

As Meta moves forward with these changes, the potential consequences for journalism, democracy, and social media are becoming increasingly apparent. The shift towards community-driven content moderation could foster more freedom, but it also raises significant concerns about the spread of misinformation and the absence of reliable fact-checking.

As Ressa warned, this shift could pave the way for a "world without facts," where unchecked content dominates. While Meta maintains its focus on free speech, the trade-offs in terms of truth and accountability remain uncertain.

As the changes roll out globally, the world will be watching closely to see how this new approach affects the integrity of online information and its impact on users worldwide.
