Centre moves to curb 'obscene' online content: New IT rules mandate deepfake labelling, 24-hour takedowns


CHENNAI: The Union government is preparing a fresh round of amendments to India’s Information Technology Rules with the aim of tightening oversight of online platforms and curbing what it describes as the growing spread of “obscene” and harmful digital content. The proposal, drafted by the Ministry of Electronics and Information Technology, seeks to expand the due-diligence responsibilities of social media intermediaries and large platforms, while introducing clearer obligations to detect, label and remove manipulated or synthetic media.

At the heart of the plan is a formal definition of “obscene digital content,” a term officials say is needed to give legal clarity around material that includes non-consensual intimate imagery, explicit sexual content, or other content deemed to violate decency standards. Platforms would be required to act promptly when such material is flagged, either by users or by government authorities, and the expected response time is likely to be compressed significantly. According to officials familiar with the process, the new compliance window may require certain types of objectionable or explicit content to be taken down within about 24 hours of receiving a valid complaint.

A major pillar of the amendments is the government’s growing focus on synthetic or AI-generated media. The draft rules propose that creators be required to declare when a piece of content has been produced using artificial intelligence or any generative tool. Platforms, in turn, would need to deploy mechanisms to identify manipulated content and apply a clear label indicating that an image, video or audio clip is synthetic. This is intended to curb the misuse of deepfakes for harassment, impersonation, misinformation and political manipulation—concerns that have intensified as AI tools have become widely accessible.

The government argues that the changes are meant to strengthen transparency and accountability in an online ecosystem that now generates millions of posts every day. Officials say the new framework will reinforce the legal requirement that platforms must act once they have “actual knowledge” of illegal content, especially when notified through a court order or by an authorised government agency. In parallel, ministries have recently issued multiple blocking orders against small OTT platforms accused of hosting pornographic material, signalling the government’s intent to enforce stricter standards across the broader digital space.

Reactions to the proposed amendments have been divided. Digital-rights groups and several legal experts warn that broad or undefined terms such as “obscene” may expose legitimate artistic, journalistic or political content to censorship. They argue that without clear procedural safeguards, the rules could grant authorities wide discretion over what remains online, creating a chilling effect that discourages creators from taking creative or critical risks. These groups have called for a longer public consultation process and tighter checks on executive power.

Industry reactions have been more mixed. Some segments of the entertainment, advertising and creator economy welcome stronger measures to crack down on unauthorised explicit content and undisclosed synthetic imagery, arguing that well-defined rules could reduce reputational risks and support cleaner digital spaces. Others, especially smaller platforms and independent creators, worry the compliance burden will be heavy, requiring investment in moderation teams, verification systems and faster grievance response mechanisms in order to avoid penalties or blocking orders.

The amendments also highlight several operational challenges. Effective detection of AI-generated media remains technically difficult, and even the best automated tools can misidentify legitimate satire or creative edits as harmful, according to a social media analyst and technology expert. At the same time, malicious creators can easily mislabel or disguise synthetic content, weakening the rules' deterrent effect, he added. International hosting adds a further layer of complexity, since content on servers abroad remains accessible in India unless platforms proactively restrict or remove it.

In political terms, the government is likely to frame the new rules as essential for safeguarding women, children and the broader public from exploitation, harassment and misinformation. Critics, however, are expected to challenge the constitutionality of the rules in court, arguing that they may infringe on free-speech protections if applied too broadly or without adequate oversight. For social media companies and digital publishers, the outcome will be a more demanding compliance environment, with higher moderation costs and greater legal exposure.

The next few months will determine how transformative these changes turn out to be. Much will depend on how precisely the government defines “obscene digital content,” how enforcement protocols are drafted, and whether courts uphold the rules if challenged. Platform responses will also shape the landscape—some global companies may comply readily, while others may push back or seek legal clarity. For creators, users and digital businesses alike, the revised rules mark a significant step in India’s evolving effort to regulate online speech, technology and safety in a rapidly changing digital era.

The New Indian Express
www.newindianexpress.com