Discerning truth becomes a tough task with the rise of Artificial Intelligence

For an individual’s own safety, it is advisable to become one’s own watchdog. Digital literacy serves as a defense mechanism, providing protection against risks associated with misinformation and data privacy breaches.
Mohan Gandhi, Cybersecurity expert and Entersoft Security founder

VISAKHAPATNAM: Scrolling through his news feed with a frown, a first-time voter amid the politically charged atmosphere of a local tea shop remarks, “Every time I come across something online these days, it makes me question its authenticity.”

Rao, who has voted in every election since he was 18, leans in closer, squinting at the screen. “Isn’t that clear? It looks pretty real to me,” he responds, as a political jingle from a van fades into the background.

Such exchanges underscore the double-edged nature of this revolutionary technology, particularly in campaigning. While the rise of Artificial Intelligence (AI) can help amplify messages and connect with more voters than ever, it also opens the floodgates to misinformation, turning the task of discerning the truth into navigating a labyrinth. As voters in Andhra Pradesh head to the polls, legal experts are raising the alarm over deep fakes, synthetic media created with machine learning that can convincingly impersonate someone. They stress that in the age of generative AI, informed electoral decision-making requires not only access to information but also the vigilance to verify its validity.

For instance, a controversial audio clip that spread on social media, allegedly featuring Nara Bhuvaneswari berating staffers, recently triggered a debate. While TDP leaders and supporters denounced it as a politically motivated deep fake, some independent fact-checkers insisted the audio was genuine. The incident is just one of many that highlight the confounding, often misleading situations the average voter faces this election season as artificial intelligence evolves rapidly.

Explaining AI’s potential to interfere in elections and ways to tackle deep fakes, cybersecurity expert and Entersoft Security founder Mohan Gandhi notes that the moral panic induced by synthetic chaos can indeed create last-minute movements. He believes the answer to this novel risk of electoral misinformation, particularly deep fakes, lies in human-centred AI, an area that is far from black and white.

Pointing to the technical complexity of regulating AI beyond a local context, since interpretations of fairness and morality differ greatly, he asserts that every tool mirrors the ethical behaviour of its developer and end user.

Tech companies do not require explicit consent or apply stringent filtering at the software development level, and users abuse these tools. Together, this forces reliance on detection technologies and community vigilance to spot, report, and take down manipulated content. The process, however, can be hit or miss. Mohan explains that inexpensive, widely accessible AI tools have made it possible for almost anyone with internet access to create doctored images, videos, robocalls and voice clones. Videos often carry subtle signs of manipulation, such as discrepancies in audio-video sync and facial anomalies. Voice-cloning technology, on the other hand, has grown so sophisticated that distinguishing imitations from genuine recordings is becoming exceedingly difficult.
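The audio-video sync cue can be sketched as a toy check: in genuine footage, mouth movement and speech loudness tend to rise and fall together, so a weak or negative correlation between the two per-frame signals is one possible red flag. The sketch below is a hypothetical illustration on made-up numbers, not any real detector's method; all the variable names and data are assumptions for the example.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length per-frame signals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy per-frame signals for eight video frames (fabricated numbers):
# how open the speaker's mouth is, and how loud the audio track is.
mouth_openness = [0.1, 0.8, 0.9, 0.2, 0.1, 0.7, 0.85, 0.15]
audio_in_sync = [0.12, 0.75, 0.88, 0.25, 0.08, 0.72, 0.80, 0.20]
audio_swapped = [0.80, 0.10, 0.20, 0.90, 0.85, 0.10, 0.20, 0.90]

print(pearson(mouth_openness, audio_in_sync))   # near +1: plausible sync
print(pearson(mouth_openness, audio_swapped))   # low or negative: suspect
```

Real detectors extract these signals with face tracking and audio analysis rather than hand-typed lists; the point here is only the shape of the check.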

He says even AI detection tools often fall short in understanding regional languages and contextual nuances, despite their ability to analyse fake content for inconsistencies at a far more granular level than the human eye.

While the IT Rules provide a framework to address manipulated content, their effectiveness, according to experts, largely hinges on the operational capabilities of intermediaries (social media platforms), since verifying reported or flagged content again requires human moderators or AI detection tools.

Once an individual’s personal information is ‘out there’, it opens a Pandora’s box. The synthetic chaos just muddies the waters once someone is caught off guard.

Mohan asserts the government needs to strictly enforce clear consent mechanisms and ensure tech companies comply, allowing users to control their personal information at the source. He notes that some tools already function this way.

Additionally, he believes the government could help combat online misinformation by collaborating with major tech companies like Google or WhatsApp and multiple stakeholders to promote digital literacy and awareness among the public.

Mohan shares that they have already tested an algorithm that Meta (formerly Facebook) is releasing on WhatsApp, which can detect AI-generated media. As part of assurances from tech giants such as Google, Meta, and OpenAI to collaborate with the Indian government to safeguard access to accurate information, Meta has partnered with India’s Misinformation Combat Alliance (MCA) to operate a dedicated fact-checking helpline on WhatsApp. The helpline includes support in Telugu.

Furthermore, to counter misinformation, particularly deep fakes, Mohan suggests a manual approach to detecting deep fake audio: paying close attention to background noises. While these technologies are adept at mimicking human voices, they often struggle to seamlessly integrate natural background sounds such as the hum of air conditioning, passing traffic, or the faint rustle of papers, he explains. Deep fake audio might lack these sounds entirely or present them inconsistently, with the background noise suddenly appearing, disappearing, or varying unnaturally through the recording.
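The background-noise cue he describes lends itself to a similarly crude sketch: natural ambience changes gradually, so an abrupt frame-to-frame jump in background energy is one possible warning sign of spliced or synthesised audio. The code below is an illustrative toy on a synthetic signal, assuming the audio is already a plain list of mono samples; it is not a forensic tool.

```python
import math

def frame_rms(samples, frame_size=400):
    """Split a mono signal into frames and return each frame's RMS energy."""
    return [
        math.sqrt(sum(s * s for s in samples[i:i + frame_size]) / frame_size)
        for i in range(0, len(samples) - frame_size + 1, frame_size)
    ]

def background_jump_score(samples, frame_size=400):
    """Largest frame-to-frame jump in RMS energy, relative to the mean.

    Natural ambience (traffic, an air-conditioner hum) changes gradually,
    so a large relative jump is a crude red flag for audio whose
    background suddenly appears or drops out mid-recording.
    """
    rms = frame_rms(samples, frame_size)
    mean = sum(rms) / len(rms)
    jumps = [abs(a - b) for a, b in zip(rms, rms[1:])]
    return max(jumps) / mean if mean else 0.0

# Toy signals: a steady ambient hum vs. one that abruptly cuts out.
steady = [0.1 * math.sin(0.3 * n) for n in range(4000)]
cut_out = steady[:2000] + [0.0] * 2000

print(background_jump_score(steady))   # small: consistent background
print(background_jump_score(cut_out))  # large: background vanishes mid-clip
```

A real analysis would first separate speech from background and work on actual PCM audio (read, for example, with Python's `wave` module), but the jump heuristic above is the same idea in miniature.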

“For an individual’s own safety and sanity, it is advisable to become one’s own watchdog. Digital literacy serves as a defence mechanism, providing better protection against the risks associated with misinformation and data privacy breaches, which are increasingly prevalent,” he emphasises.

Yet these AI tools should not be dismissed wholesale. In fact, the same tools that are used to spread misinformation can be repurposed to enhance civic participation and democratic processes. By keeping AI human-centred in the political sphere, it can truly serve the electorate, he opines.

He further mentions that AI-enabled real-time speech translation connects people across different languages more effectively. Similarly, the use of micro-targeting in political campaigns, when applied ethically, can help address specific issues relevant to voters.

“Imagine an elderly farmer or a distressed beneficiary having the chance to connect directly with an elected official or their Chief Minister on a one-on-one call that addresses them by name and provides assistance! This makes people feel heard and represented. The assurance and personal connection fostered by this kind of interaction can be more impactful than any public meeting. The key is in ensuring AI is used ethically and responsibly to avoid issues like privacy invasion or misinformation.”

The New Indian Express
www.newindianexpress.com