Elections 2024: 'Meta approved political ads that promoted hate speech, contained known slurs towards Muslims'

Between May 8 and May 13, Meta approved 14 highly inflammatory ads that called for violent uprisings targeting Muslim minorities, and incited violence through Hindu supremacist narratives.
Prime Minister of India Narendra Modi seen with Facebook CEO Mark Zuckerberg. (File Photo | AP)

Meta, the owner of Facebook and Instagram, has failed to detect and block ads containing AI-generated images promoting hate speech, election disinformation, and incitement to violence, according to recent research carried out by the corporate accountability group Ekō in collaboration with India Civil Watch International.

The report, shared exclusively with The Guardian, said that Facebook approved adverts containing known slurs towards Muslims in India, such as “let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned”, as well as Hindu supremacist language and disinformation about political leaders.

These alarming findings emerge in the midst of India’s critical elections. Researchers had already uncovered a network of bad actors weaponizing Meta ads to spread hate speech and disinformation to millions of voters in India, with Meta directly profiting.

According to the report, between May 8 and May 13, Meta approved 14 highly inflammatory ads. These ads called for violent uprisings targeting Muslim minorities, disseminated blatant disinformation exploiting communal or religious conspiracy theories prevalent in India's political landscape, and incited violence through Hindu supremacist narratives.

One approved ad also contained messaging mimicking that of a recently doctored video of Home Minister Amit Shah threatening to remove affirmative action policies for oppressed caste groups, which has led to notices and arrests of opposition party functionaries.

Accompanying each ad text were manipulated images generated by AI image tools, demonstrating how quickly and easily this new technology can be deployed to amplify harmful content.

The report noted that before India’s election, Meta promised that it would prioritize the detection and removal of AI-generated content that violates its policies, recognizing “the concerns around the misuse of AI-generated content to spread misinformation”.

However, Meta's approval of inflammatory ads, coupled with its failure to detect or label any of the ads in this investigation as AI-generated content, underscores that the platform is ill-equipped to deal with AI-generated disinformation. Despite assurances of safeguards to ensure responsible use of new technologies like generative AI and investments in third-party fact-checkers, the reality paints a different picture.

Moreover, the platform's reactive approach to disinformation and its inability to effectively address and label AI-generated content highlight systemic shortcomings in its content moderation. Meta has publicly boasted about the company’s large team of content reviewers as well as significant investments in safety and security.

However, for years civil society groups, whistleblowers, and experts have warned that Meta's moderation practices are inadequate for identifying and addressing harmful content, especially content in languages other than English, and have alleged political bias towards the ruling BJP, the report noted.

Despite ample evidence of systemic failures and tangible harms documented over the years, Meta has failed to implement substantial corrective measures, the report said.

Concerns about the erosion of India's democracy, and the success of far-right and anti-democratic actors in exploiting Meta’s platforms, have prompted alarm among both Indian and international civil society groups.

Meta's failure to safeguard elections undermines decades of efforts by citizens, policymakers, and courts in India to promote transparent and accountable democratic practices. By facilitating the dissemination of election disinformation and conspiracy theories, Meta has enabled groups to sow discord and, at times, incite real-world violence, as evidenced in recent events in the US and Brazil. India has also suffered from the violent consequences of disinformation. In 2020, over 50 people, the majority of whom were Muslims, were killed in riots that erupted in Delhi, with Facebook having fueled the hate narratives and violence, the report pointed out.

The Guardian noted that a previous report by ICWI and Ekō found that “shadow advertisers” aligned to political parties, particularly the BJP, had been paying vast sums to disseminate unauthorised political adverts on Meta's platforms during India’s election. Many of these real adverts were found to endorse Islamophobic tropes and Hindu supremacist narratives. Meta denied that most of these adverts violated its policies.
