Deep concern?

Deepfake and AI-generated content are hot topics right now after a slew of fake videos of popular female actors surfaced online. Here, TNIE gives an overview of the potential dangers of these new tech tools

KOCHI: Have you seen the AI-doctored images of Donald Trump? What about that Mark Zuckerberg video, wherein he elaborates on how he has total control of billions of people's data? Or Ukrainian President Volodymyr Zelenskyy asking his soldiers to lay down their arms and surrender the fight against Russia?

If you have, then you've already come across deepfake videos, some of which reveal the tech's frightening power and possibilities. Using a branch of artificial intelligence called deep learning, people are now able to generate a completely new video, image, or audio clip portraying a scenario that never actually occurred.

Needless to say, neither Zuckerberg nor Zelenskyy said those things, but it appeared as if they did, exhibiting the dangers these novel technologies are capable of. Deepfakes have their origins in 2014, when researchers created realistic-looking faces using generative adversarial networks. Three years later, deepfake content grew in popularity after an anonymous Reddit user ('deepfakes') shared pornographic videos that used open-source face-swapping technology.

But why is deepfake such a hot topic in India? What prompted PM Narendra Modi to term it a 'new crisis'? Recently, a video made the rounds on the internet: of actor Rashmika Mandanna, wearing a black tank top and shorts, stepping out of an elevator and posing for the paparazzi. It was a deepfake. The actual video was of Zara Patel, an influencer. A concerned Rashmika then pointed out the dangers of such fraudulent videos.

Even before news of the incident died down, two more fake videos emerged, of actors Kajol and Katrina Kaif. Kajol's face was morphed onto the body of a woman who, in the video, was seen trying on a dress. Katrina's towel fight scene from the Tiger 3 teaser was distorted to make it appear as if the actor was wearing a bikini.

Following a public uproar, the Union Ministry of Electronics and IT directed social media platforms to remove violating deepfake content within 36 hours of receiving a court or government order to that effect.

In addition, when a user (or someone on their behalf) raises a grievance, it has to be resolved within 72 hours. If the content exposes the private areas of an individual, shows them in full or partial nudity, or depicts any sexual act or conduct, it has to be removed within 24 hours.
“The practice of morphing images has been under discussion for years now. Today, with the advent of deepfake AI, you don't need access to any advanced tool or much knowledge about the software, because the applications are already designed to give you the results you seek. With deepfake, more than images, it is the creation of fake videos that is now rampant,” says Nandakishore Harikumar, founder and CEO of Technisanct, a cybersecurity firm based in Ernakulam.

According to Nandakishore, India neither foresees the potential downsides of new technologies nor regulates them. “We lack a proper regulatory framework. In light of the deepfake cases, only advisory-based regulations have been put forward. When the situation goes out of hand, we take a reactive approach. See, the hype around AI started around 2014-2015, but no solid action was taken. From the government, be it state or central, we don't even have a mechanism to counter misinformation and fake news,” adds Nandakishore.

How are deepfakes made?
A face-swap video can be made in a few steps. To swap the faces of two people, you run thousands of face shots of both of them through an AI algorithm called an encoder. The encoder finds the similarities between the faces and reduces them to their shared common features, compressing the images. The compressed images are then reconstructed using another AI algorithm called a decoder, with a separate decoder trained to recover each person's face. The swap happens when a compressed face of one person is fed to the other person's decoder, which reconstructs that face with the first person's expressions and orientation.
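The shared-encoder, per-person-decoder idea described above can be sketched with a toy linear model. This is an illustrative simplification on random data (real deepfake systems train deep neural autoencoders on thousands of genuine face shots), but the structure of the swap, encoding person A's face and decoding it with person B's decoder, is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" images for persons A and B: 8x8 grayscale, flattened to 64 values.
faces_a = rng.random((1000, 64))
faces_b = rng.random((1000, 64))

# Shared encoder: one linear projection to a small latent space, standing in
# for the deep network trained on both people's faces.
W_enc = rng.standard_normal((64, 16)) * 0.1

def encode(faces):
    """Compress faces to their shared latent features."""
    return faces @ W_enc

# Per-person decoders: each learns (here, via least squares) to rebuild
# its own person's faces from the shared latent code.
latent_a, latent_b = encode(faces_a), encode(faces_b)
W_dec_a, *_ = np.linalg.lstsq(latent_a, faces_a, rcond=None)
W_dec_b, *_ = np.linalg.lstsq(latent_b, faces_b, rcond=None)

# The swap: encode a face of A, then decode it with B's decoder, so B's
# identity is rendered with A's expression and orientation.
swapped = encode(faces_a[:1]) @ W_dec_b
print(swapped.shape)  # one fake 64-value face
```

The key design point is that the encoder is shared while the decoders are not: because both people's faces pass through the same compression, the latent code captures pose and expression, and whichever decoder you apply supplies the identity.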
“Applications are already available. As of now, the deepfake videos and images are all amateur works. Artificial intelligence training is an expensive affair. You'll only get the best results if you train on large amounts of data,” says Nandakishore.

It’s all fun and games until...
This year, a deepfake video circulated on social media featuring a scene from the movie ‘The Godfather’, in which original cast members Al Pacino, Alex Rocco and John Cazale were replaced with Malayalam actors Mohanlal, Mammootty, and Fahadh Faasil. It quickly went viral.
“I started recreating such videos purely for entertainment. At first, I was excited to see the response and traction the Godfather video was receiving. Later, the attention became scary for me. I started getting calls from people, including those from the film industry, asking me to elaborate on how I created it. Then I realised the sudden popularity of such content could prompt many to create fake videos and photos, and publish them as porn. Because of that video, I understood that our awareness of deepfakes was close to nil. So, misuse can happen to a large extent,” says 26-year-old Tom Antony, who works as a freelance motion graphics designer in Kottayam.

According to recent findings, 96% of deepfake videos were pornographic in nature and 99% of the faces used were those of female celebrities. “With so much deepfake content emerging on social media, I tried to understand how to control its spread by contacting some researchers online. One suggestion was that for each video uploaded, there should be a voting system by which the public can flag whether it is fake or real. It would be difficult to make that possible on WhatsApp, though,” adds Tom.

From online frauds to deepfake
At a time when people fall for fraudulent practices like part-time job scams, cryptocurrency frauds, and honey-traps, deepfake is a new entrant among Malayalis. Recently, P S Radhakrishnan, a resident of Kozhikode, lost ₹40,000 after cybercriminals used deepfake AI technology to pose as his former colleague in a WhatsApp video call and sought money for his sister's surgery. The call lasted 25 seconds; the person spoke in English, and only his face was visible. The money was then transferred.
Superintendent of Police Harisankar of the cyber operations wing states that deepfake is a rising concern. Aside from the Kozhikode case, nothing new has been reported so far.

“Malayalis often get trapped when seeking easy money-making schemes. Since people are already falling for such less-sophisticated traps, criminals have likely not felt the need to turn to deepfakes. However, in the Kozhikode case, the culprit used an already available open-source tool and presented himself in 2D format,” the officer says.

Apart from online fraud, deepfake AI can affect other spheres as well, notes Harisankar. “Religious sentiments could be hurt, communal riots can be instigated, fake video evidence can be presented in court, and even genuine police evidence can be dismissed as deepfake by the defence. Moreover, this can be a major crisis during election time, when anyone can put new words in a politician's mouth and the video can invite controversy. By the time it is identified, the damage would already be done,” says Harisankar.

“Audio cloning is another looming crisis. Creative content can also get manipulated, including changing a film script or altering a clause in an agreement between two parties,” says Nandakishore. To tackle the issue, Harisankar states that the Government of India is creating deepfake detector tools that can be used by every state's law enforcement bodies. “It is expected to take shape by the end of this month. The tool will issue a confidence score. If the content's score is below 20 per cent, it is identified as a deepfake, and videos scoring above 80 per cent can be considered original,” says Harisankar.
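The threshold scheme Harisankar describes amounts to a simple triage rule. The sketch below is purely hypothetical (the actual tool and its interface are not public); it only encodes the 20 and 80 per cent cut-offs quoted above, with everything in between left for manual review.

```python
def classify_confidence(score: float) -> str:
    """Triage a detector's authenticity confidence score (0-100) using the
    thresholds described for the proposed tool: below 20 means deepfake,
    above 80 means original, anything in between needs human review.
    Illustrative only; not the actual tool's logic."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 20:
        return "likely deepfake"
    if score > 80:
        return "likely original"
    return "inconclusive - needs manual review"

print(classify_confidence(12))  # likely deepfake
print(classify_confidence(95))  # likely original
print(classify_confidence(50))  # inconclusive - needs manual review
```

Note the wide 20-80 band: any real detector is probabilistic, so leaving a large middle range for human review is what prevents a borderline score from being treated as a verdict.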

What the law says
Lawyer Jiyas Jamal, founder of Cyber Suraksha Foundation in Kochi, states that the present IT Act does not have any provision for deepfakes or artificial intelligence. “However, malpractice committed with the use of artificial intelligence can be booked under Section 66C of the IT Act, which covers the fraudulent use of electronic signatures or any other unique identification feature of a person. IPC 465 deals with forgery, and IPC 469 is about committing a forgery that harms a person's reputation. Even if there is no loss of money, a case can be charged, as it affects the identity of a person,” he says.

How to spot a deepfake? 

  • Bad lip-syncing
  • Eyes that don't blink normally
  • Unnatural facial expressions
  • Blurry surroundings
  • Shifts in voice and intermittently robotic speech patterns
  • Flickering around the edges of the face
  • Details like hair and fingers, which are hard for deepfakes to render well

The New Indian Express
www.newindianexpress.com