Fake news shared over two million times on social media during Lok Sabha polls

Logically found that there were 1,33,167 unreliable stories during the elections out of which 33,000 stories reported in the Indian media were fake.

BENGALURU: About 50,000 fake news stories were published during the recent Lok Sabha elections and shared two million times, according to a study by fact-checking startup Logically. The firm, which has bases in Mysuru, Karnataka, and in the UK, was even able to trace some fake stories to Chinese and Pakistani IP addresses.

“Hateful articles were shared more than three lakh times and 15 lakh shares were connected to extremely biased stories, likely to be reflecting the sharer’s personal opinions on topics. As a result, readers could be entering filter bubbles and echo chambers on their own. This further highlights the unique traits of mis/disinformation in India. The most significant platforms for problematic content have been closed networks such as WhatsApp and private Facebook groups,” the case study, titled ‘Misinformation and the Indian Election’, revealed. “Of the 9,44,486 articles analysed, 14.1% were found to be unreliable and 25% were fake,” it added.

Logically found that there were 1,33,167 unreliable stories during the elections, of which 33,000 stories reported in the Indian media were fake.

“We started closely monitoring the 2019 General Elections from March, right before the campaigning started. We concluded our work in May 2019, and started to analyse it for important learnings,” Lyric Jain, Founder, CEO and Promoter of Logically, told TNIE.

The startup uses human fact-checkers as well as artificial intelligence and machine learning to detect specific biases in stories.

“We believe that artificial intelligence should supplement human intelligence, not supplant it. Our technology works alongside our human annotators and fact-checkers to evaluate news articles. We analyse content, network and metadata to evaluate an article’s accuracy. Our AI won’t simply brand an article as biased; it will identify the specific biases to form this conclusion. For example, if we notice that a particular news article contains racial slurs or aggressive and derogatory language, we can deduce that the article has less credibility of being objective and factual news than an article that does not use such language,” Jain explained.
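The article does not detail how Logically implements this; as a rough illustration only, the Python sketch below shows the kind of language-based credibility signal Jain describes, where a hypothetical list of derogatory terms and an arbitrary penalty weight stand in for the startup’s actual models.

# Illustrative sketch, not Logically's system: articles containing aggressive
# or derogatory language receive a lower credibility score.
# The term list and the 0.3 penalty are assumptions made up for this example.

DEROGATORY_TERMS = {"traitor", "vermin", "scum"}  # hypothetical placeholder list

def credibility_signal(text: str) -> float:
    """Rough 0-1 score: 1.0 means no flagged language was found."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    hits = words & DEROGATORY_TERMS
    # Each flagged term lowers the score by 0.3, floored at zero.
    return max(0.0, 1.0 - 0.3 * len(hits))

print(credibility_signal("This traitor and his vermin allies rigged the vote!"))
# prints roughly 0.4: two flagged terms weaken the article's claim to objectivity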

“The key words which were extremely popular were ‘EVMBan’, ‘EVMSarkar’ (and) additional phrases referring to Hindus and Muslims voting for certain parties, etc. The most popular days were election days and the preceding days, during which the most popular type of content would be disinformation about polling booths being moved and dates being changed — a possible attempt at voter suppression,” Jain pointed out.

The study stresses that the effects of misinformation in India should not be underestimated. “One needs to simply look at the widely-reported spate of vigilante mob killings that occurred over an 18-month period between mid-2017 and the start of 2019 to see the real effects of rumour and falsehood. Nicknamed the ‘WhatsApp lynchings’, often the killings were a direct result of rumours about child abductions which spread over the messaging platform to rural communities. The victims were mostly strangers, passing through communities and not known to the locals who – spurred on by false rumours – carried out the attacks,” the study says.

Other instances of mob violence were related to cow vigilantism directed towards Muslims and Dalits, an excerpt of the study stated. During the elections, Logically could identify when a story was being spread by bots rather than humans. For example, a social media account that engages with and shares a story almost instantaneously is likely to be run by a bot, since human reaction would be far slower. If a story is being spread by bots, then there is a high probability that its content is inaccurate and being maliciously propagated.
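As a rough illustration of that reaction-time heuristic (not Logically’s actual pipeline), the Python sketch below flags a share as bot-like when it follows publication almost instantly; the five-second threshold and the timestamps are assumptions made up for the example.

# Illustrative sketch only: near-instant shares after publication are treated
# as a bot signal, since humans rarely react that fast.

from datetime import datetime, timedelta

BOT_LATENCY_THRESHOLD = timedelta(seconds=5)  # assumed cut-off for this example

def likely_bot(published_at: datetime, shared_at: datetime) -> bool:
    """Flag a share as bot-like if it happens almost instantaneously after publication."""
    return (shared_at - published_at) <= BOT_LATENCY_THRESHOLD

published = datetime(2019, 4, 11, 9, 0, 0)
print(likely_bot(published, published + timedelta(seconds=2)))  # True: suspiciously fast
print(likely_bot(published, published + timedelta(minutes=3)))  # False: plausible human delay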

WhatsApp misused

“There is ample evidence that during the election, all sides (political parties) were using WhatsApp to spread highly divisive, manipulated, or completely false information,” the study found.

“While mainstream publishers tended to be factually accurate, political biases in coverage were evident and, in the cases of a handful of publishers, were extreme. A study by the Oxford Internet Institute (OII) discovered that more than a quarter of content shared by the BJP as well as one-fifth of content shared by Congress was ‘junk news’. In addition, a sample of images shared on WhatsApp by the parties was deemed ‘divisive and conspiratorial’,” it added.

What can viewers, voters do?

“Not just during elections but all the time, one can help to curb the menace of fake news by simply pausing before sharing any piece of news and being a whistle-blower. I believe it is important to build resilience in the WhatsApp community and vaccinate them against fake news. This can happen only if each user of any social media platform pauses before sharing any piece of information that looks questionable and exposes the content to fact-checkers, reports it to the social media platform, or simply does not share the questionable information,” Jain advised.

“Anything that appeals to your biases is usually detrimental. As a result, it is necessary to have an open mind all the time. We want readers to be sceptical of content, especially from fringe publications,” he added.
