China plans strict AI curbs to protect children, block self-harm and gambling content

By focusing on risks involving children, self-harm, violence and gambling, the move signals that AI’s social impact will be regulated as tightly as its technology, potentially setting a global precedent for governing advanced AI systems, says a digital wellness expert who spoke to TNIE.

CHENNAI: China has proposed a fresh set of stringent rules to regulate artificial intelligence, signalling a tougher approach to the rapidly expanding use of AI chatbots and other systems that engage users in human-like interactions. The draft regulations, released by Chinese authorities, place strong emphasis on protecting children and preventing AI tools from generating content that could lead to self-harm, violence or other harmful behaviour.

Under the proposed framework, AI developers and service providers would be required to put in place safeguards ensuring their models do not offer advice or responses that encourage suicide, self-injury or violent acts. According to reports, the rules also seek to curb the social risks of AI-generated content by banning outputs that promote gambling, a sector Chinese regulators have long viewed as socially disruptive and illegal in most forms.

The draft rules, published on Saturday by the Cyberspace Administration of China (CAC), and reviewed by TNIE, reflect growing unease within China about the psychological and social impact of AI systems that are designed to converse, empathise or provide advice. Regulators appear particularly concerned about scenarios in which users turn to AI for emotional support, mental health guidance or personal decision-making, areas where inappropriate responses could have serious consequences. By explicitly barring AI from offering harmful advice, the government is drawing a clear line around the role such systems should play in sensitive human interactions.

A key focus of the proposals is the protection of minors. Developers would be expected to design AI systems that are suitable for children, including stronger content filters, age-appropriate interactions and limits on how minors can access emotionally engaging or companion-style AI services. Authorities have highlighted concerns that increasingly realistic chatbots could influence children’s behaviour, emotional development and decision-making if left unchecked.

From an industry perspective, the proposals are likely to increase compliance costs and technical complexity for AI companies operating in China. Developers may need to invest more heavily in content moderation, behavioural monitoring and safety testing to ensure their models meet the new standards. Smaller firms and startups could find it challenging to keep pace with these requirements, potentially favouring larger players with deeper resources and more mature compliance systems.

At the same time, the move underscores China’s intent to shape the evolution of AI in line with broader social and policy priorities, rather than allowing market forces alone to dictate how the technology is deployed. The proposed rules fit into a wider pattern of regulation that seeks to balance technological innovation with strict oversight of content, data use and social impact.

"Overall, China’s proposed AI rules mark a significant step towards more comprehensive oversight of how intelligent systems interact with users. By targeting risks related to children, self-harm, violence and gambling, the authorities are signalling that the social consequences of AI will be regulated as closely as its technological capabilities, potentially setting a precedent for how governments worldwide approach the governance of advanced AI systems," says Rajiv I Neeransh, a digital wellness expert and child psychologist.

If implemented, the regulations could have implications beyond China’s borders. Global AI companies offering services in the Chinese market may need to adjust their products or create separate versions that comply with local rules. The emphasis on child protection, mental health risks and harmful content also mirrors debates taking place in other major economies, suggesting that tighter controls on AI behaviour may become a defining feature of the next phase of global AI regulation.
