Effective pill for online ills

The increasing harm that online platforms are inflicting on young children needs a conversation beyond state bans. Focus must be trained on platform design.

Over the last few months, several states have sent a long-overdue message: in today's age, child safety can no longer be treated as an afterthought. Calls for stronger restrictions on minors' social media use are already being voiced, citing mental health, academic performance and overall well-being.

From Karnataka and Andhra Pradesh to Goa and Punjab, state governments have begun recognising the seriousness of challenges children face online, and the fact that it is no longer a scattered concern but a national problem that demands urgent attention. The concern is not remote to my state, Odisha, either. This is not a sudden overreaction or a passing political trend, but rather the consequence of years of mounting harm and toxicity, and the ineffectiveness of platform safeguards.

Even this year's Economic Survey framed social media addiction as a health challenge, noting that compulsive use among young Indians is leading to anxiety, depression, low self-esteem, sleep disturbances, reduced concentration and poorer academic performance.

Among the things children are exposed to on social media platforms are sexualised material, influencer-driven content, unrealistic lifestyles and manipulative trends. Added to this is the rising threat of bullying, predatory contact and generative artificial intelligence-enabled harms that overwhelm young users with misleading and often harmful material. It is for this reason that state-level conversations on social media regulation have largely been framed through a public health lens. While platforms are making efforts to improve a child's experience on social media through measures such as teen accounts and parental tools, such efforts are clearly not enough.

As parliamentarians, we should read this carefully. When state governments with varied political dispensations move in the same direction, it signals that a public concern has crossed a threshold and policymakers are losing patience. The question, therefore, is not merely whether banning social media for children is a sustainable approach. Banning technology is not the solution, but the status quo also needs to change.

The bigger question is whether the existing platform-led model of child safety online remains defensible at all. Are we starved for workable solutions? While platforms have had years to make these spaces meaningfully safer for users, the harms have only worsened.

Now, with the mainstreaming of gen-AI, the culture of platforms seems to be becoming even more exploitative and difficult to govern. In such a situation, it is hardly surprising that governments reach for bans and regulatory restrictions, which could be considered the easiest tools available when no one has offered a better alternative.

The next phase of this conversation must be about platform accountability in its most achievable form. For that, we first need to acknowledge that social media platforms function as systems of public consequence. It is no overstatement to say that they actively shape what we see, how we think and what kinds of social behaviour are normalised.

Second, since the question of harm is no longer restricted to content alone and has become more about design and culture, our accountability mechanisms must keep that in mind. For that, platforms accessed by children must move towards design architectures that build in meaningful friction, including age-appropriate time limits, safer late-night defaults, reduced virality features for younger users, stronger break prompts and greater restraint around design choices that are built to prolong use.

On top of this, the focus should shift towards a preventive model of moderation aimed at improving the quality of the environment itself. This means stronger human moderation for high-risk youth-facing spaces, faster escalation channels, and greater visibility into how harmful trends spread. Platforms should also take more responsibility for disrupting repeated cycles of manipulation, pile-ons and algorithmically amplified mischief before they harden into culture.

Just as importantly, platforms must also learn to slow down, especially in the age of AI. They cannot continue to roll out high-impact features, generative tools and engagement mechanisms into youth-heavy environments as though society must absorb the consequences later.

Significant product changes should go through sandboxes, testing environments and external review before large-scale deployment. Parents, young users, educators, child-rights experts and mental-health professionals should be consulted in product design and rollout.

It may no longer be enough to rely on scattered rule-making and platform-specific responses to tackle what we are dealing with today. What is now needed is a more specialised digital-safety regulator capable of monitoring risks to minors, demanding meaningful disclosures and reviewing high-risk design features. Such an approach should not focus only on content takedowns, but on the broader conditions that make harm more likely.

The role of the regulator would be particularly crucial in India, where the scale of platform use is enormous and enforcement remains uneven. Moreover, at a time when different states are beginning to respond in different ways, there is a need for an institutional mechanism that can bring coherence and ensure effective enforcement of safety obligations.

Sasmit Patra | MP, Rajya Sabha and member, Standing Committee on Communications and IT

(Views are personal)
