Democratisation of deception in year of elections

By allowing the mass production of customised realities, generative AI makes electoral systems vulnerable. Democracies can learn from Taiwan and the EU in addressing the threat

Over four billion—that’s the staggering number of individuals around the world who would be eligible to vote in 2024, a year brimming with elections. Traditionally, such widespread participation would be a democratic triumph. Yet, a dark cloud hangs over this year’s pivotal contests—the chilling vulnerability of democracies in the age of artificial intelligence (AI).

The vulnerabilities? Not relics of the past. Outdated voter registration systems, electronic voting machines and election management software are the gaping holes waiting to be exploited. But the threats have evolved. Social media, as we know, shattered the gatekeepers of information: anyone could publish, and dissemination costs plummeted. Generative AI takes this a step further. It’s not just dissemination that’s near-free; content creation itself approaches zero cost. Sophisticated content, once the domain of specialists, can now be churned out with frightening ease. This is the second shift: the democratisation of deception, with profound implications for our information landscape.

The ever-prescient Jonathan Swift wrote, “Falsehood flies, and Truth comes limping after it.” Research on social media appears to bear this out: people are more likely to share falsehoods, perhaps because of their novelty. AI threatens to supercharge the problem by making content production and propagation automatic, faster and easier. In July last year, researchers at Indiana University uncovered a botnet of over 1,100 Twitter accounts that appeared to be operated using ChatGPT.
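The tell, reportedly, was crude: some of the accounts tweeted the chatbot’s own refusal boilerplate. Below is a purely illustrative sketch of that kind of phrase filter; the phrase list, data shape and function name are assumptions, not the researchers’ actual pipeline.

    # Hypothetical sketch: flag accounts that post an LLM's refusal
    # boilerplate, the self-revealing tell reportedly used to find the botnet.
    TELLTALE_PHRASES = [
        "as an ai language model",
        "i cannot fulfill that request",
    ]

    def flag_suspect_accounts(posts):
        """posts: iterable of (account_id, text) pairs."""
        suspects = set()
        for account_id, text in posts:
            lowered = text.lower()
            if any(phrase in lowered for phrase in TELLTALE_PHRASES):
                suspects.add(account_id)
        return suspects

    sample = [
        ("bot_17", "As an AI language model, I cannot browse the internet."),
        ("user_2", "Great turnout at the rally today!"),
    ]
    print(flag_suspect_accounts(sample))  # prints {'bot_17'}

Such filters only catch careless operators, which is part of the worry: the 1,100 accounts found this way are presumably the clumsy fraction of a larger whole.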

AI’s reach extends beyond visuals. It can churn out mountains of synthetic text, fabricate articles, and spin up a seemingly endless army of fake social media accounts. The result is a world where political discourse is not a clash of ideas but a cacophony of bots parroting lies to each other.

AI can also effortlessly generate event-based media. Days before Taiwan’s January 2024 election, every phone in the country buzzed with an air-raid alert triggered by a Chinese satellite launch. Within 24 hours, Taiwan AI Labs observed over 1,500 coordinated social media posts promoting conspiracy theories about the alert, sowing distrust. Posts appeared at a rate of up to five a minute, many of them more polished than typical content-mill fare.
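Coordination at that scale leaves a statistical fingerprint: bursts in posting rate around a single topic. Here is a minimal, hypothetical sketch of such burst-flagging; the five-a-minute threshold echoes the figure above, while the one-minute bucketing, data format and function name are assumptions, not Taiwan AI Labs’ actual method.

    # Hypothetical sketch: flag minutes in which topic-matched posts
    # exceed a rate threshold, a simple signal of coordinated activity.
    from collections import Counter

    def flag_bursts(timestamps, per_minute_threshold=5):
        """timestamps: POSIX seconds of posts already matched to the topic."""
        per_minute = Counter(int(ts) // 60 for ts in timestamps)
        return sorted(minute for minute, count in per_minute.items()
                      if count >= per_minute_threshold)

    # Six posts inside the first minute trip the flag; stragglers do not.
    posts = [0, 5, 10, 20, 30, 40, 600, 1200]
    print(flag_bursts(posts))  # prints [0]

Real monitoring would add content-similarity and account-age signals, but even this crude rate check illustrates why rapid, machine-generated floods are easier to spot than to stop.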

The decentralised online landscape compounds the danger. Users are migrating from monolithic platforms like Facebook to federated social media networks like Mastodon. While fragmentation offers advantages, it creates breeding grounds for misinformation. Each new platform becomes a new frontier for manipulation, making content policing more daunting.

But the most dangerous threat is AI’s ability to become intimate with users. In an age of digital loneliness, generative AI’s biggest trump card is its promise to act as both advocate and butler. Unlike your search engine, email or cloud storage, AI invites a level of intimacy that goes beyond utility. Imagine a constant companion, working tirelessly in your ‘best interests’.

Imagine AI-powered chatbots forging intimate relationships with unsuspecting individuals, subtly shaping their political views. Loneliness, a growing social epidemic, could be exploited to turn individuals into unwitting pawns. Disinformation wouldn’t just be broadcast; it would masquerade as friendship. Hannah Arendt, writing on totalitarianism, described loneliness as a permanent state cultivated by isolation and terror, one that totalitarian regimes used to create fertile ground for ideological propaganda. During the 2020 US election, the Internet Research Agency, linked to the Russian government, reached out to targets such as Black Lives Matter activists with offers of online support and funding.

Truth has always been contested, but AI allows for the mass production of customised realities. Distinguishing fact from fiction becomes an uphill battle when falsehoods are tailored to resonate with individual biases and anxieties. It’s not just about fighting fake news; it’s about combating the erosion of a shared understanding of reality itself.

So how do we fortify our democracies? Lessons can be learned from Taiwan. Recognising its vulnerability to Chinese interference, Taiwan adopted a ‘pre-bunking’ strategy: it openly discussed the potential for deepfakes and educated the public on how to identify them. By pre-bunking before deepfakes fell into the wrong hands, it inoculated the public; Taiwan’s president even appeared in a deepfaked video to demonstrate the ease of manipulation. Pre-bunking takes time, and Taiwan’s repeated messaging throughout 2022 and 2023 paid off. By 2024, when deepfakes did appear, they had minimal effect thanks to the public’s built-up “antibodies”.

The second challenge is transparency in the training data used for generative AI systems; access to it is crucial for effective defence. Legislation like the EU’s Digital Services Act and AI Act, the UK’s Online Safety Act, and India’s Digital Personal Data Protection Act, all intended to regulate the digital sphere, will come into effect only after the elections, by which time significant damage may already have been done.

If the internet was the most audacious experiment in anarchy, one that succeeded beyond the wildest imagination of its progenitor, the US’s Defense Advanced Research Projects Agency, then AI is the new frontier of technology’s dominance over the human species. This is not to argue that the human race should be petrified of innovation; every technological advance has different implications. Through large language models, a vigorous attempt is under way to replicate the human brain, perhaps the most complex organ among the millions of species that inhabit the Earth.

The call for universally agreed rules of engagement across the global tech space is growing louder by the minute. There must be a set of common principles to undergird country-specific legislation. The EU’s AI Act offers guidance in the principles that form its substratum: it addresses the risks created by AI applications, prohibits practices that pose unacceptable risks, defines a list of high-risk applications, sets clear requirements for high-risk AI systems, spells out the obligations of those who provide them, requires a conformity assessment before a given AI system is put into service, and establishes a governance structure at the European and national levels.

Finally, the fight against AI manipulation cannot be shouldered by governments and tech companies alone. Civil society organisations and independent fact-checkers will be crucial, too.

However, the moot question is: would it be enough, given the way AI is mutating?

(Views are personal)

Manish Tewari | Lawyer, MP and former I&B minister
