
Deepfakes - A growing concern over the misuse of personality rights

Bharadwaj Jaishankar

Artificial intelligence (AI), especially generative AI, is transforming our interactions with the physical and digital world. There is no doubt that AI has had, and will continue to have, a meaningful impact across industries. That said, and as has been well documented already, AI brings significant legal and ethical challenges as well. 

A real and credible threat is deepfakes, which use generative AI and deep learning to allow for the manipulation of, inter alia, images and videos. In the context of intellectual property rights in India, deepfakes have had a grave negative impact on many artists and celebrities, leading to violations of their privacy and personality rights. 

A significant share of online content, especially on social media platforms, is now created using AI. Deepfakes often target prominent public figures and celebrities because their images, videos and other elements of their persona are readily available.

In the absence of specific legislation dealing with personality rights, precedents have been established for protecting such rights under the Constitution of India and intellectual property laws, particularly through actions such as passing off under India's trademark regime. For example, Indian celebrities such as Amitabh Bachchan, Anil Kapoor, Daler Mehndi and Sourav Ganguly have successfully taken legal action against the unauthorised use of their names, images, voices and likenesses, both online and offline.

However, enforcing personality rights against AI or AI-generated works poses a greater challenge, as action may sometimes need to be taken against unidentified perpetrators. 

The process of content creation with generative AI presents several legal challenges. Because AI is partially autonomous, attributing liability to an individual user, software developer or input provider is difficult, and so is tracing the human behind such illegal activities.

In India, the Information Technology Act, 2000 (IT Act) has been widely used to combat internet-based offences, including those that violate privacy. Deepfake content that capitalises on a celebrity's likeness without consent is also taken down under the provisions of the IT Act or the rules framed under it, usually on the basis of a court order.

Although India’s Digital Personal Data Protection Act, 2023 covers digital forms of data, it does not specifically address offences involving deepfake technology or AI. Jurisdictions across the world are facing similar challenges, as exemplified by the widely reported dispute between Scarlett Johansson and OpenAI in the US.

India, on the other hand, has issued advisories to intermediaries on the “growing concerns around misinformation powered by AI – Deepfakes” and is also working on the Digital India Act. However, the scale of harm that deepfakes can cause remains a source of concern. In this light, stricter regulation is the need of the hour, and time will tell whether the current and upcoming legal framework is adequate to address the complexities posed by deepfakes.
