

On March 25, a Los Angeles jury delivered what many experts are calling the most important legal verdict in the history of the internet. The court found Meta and YouTube liable on all counts in a landmark case accusing the tech behemoths of deliberately engineering addiction in a young woman and inflicting grievous harm on her mental health. The platforms were found negligent in their design architecture, adjudged to have known their systems were harmful, and to have caused substantial, documented injury to the plaintiff.
The case centred on a now 20-year-old woman identified as KGM in court documents, who began using YouTube at six and Instagram at nine. By the end of elementary school she had posted 284 videos online. She told the court she had ceased engaging with her family — withdrawing entirely from her social life — and spent every waking hour on social media, suffering acute anxiety and depression from the age of 10. The jury awarded $3 million in compensatory damages, apportioning 70% liability to Meta and 30% to Google, with further punitive damages of $3 million split between them after nearly 44 hours of deliberations.
The dollar figures are paltry for corporations of this magnitude. What is significant, though, is that a legal theory was validated. For the first time, a jury upheld the "addictive-by-design" doctrine — targeting the architecture of the apps themselves, such as infinite scroll and variable reward algorithms, rather than user-generated content. In doing so, the court circumvented Section 230 of US law, the decades-old provision that had long shielded platforms from liability, and opened the door to class action suits. "You add it all up and it could be hundreds of billions of dollars," said Jonathan Haidt, social psychologist and author of The Anxious Generation. Many observers are already calling this Big Tech's "big tobacco moment" — the reckoning that arrives when suppressed knowledge of harm can no longer be contained.
India's billion-scrollers
While the verdict unfolded in Los Angeles, its ramifications resonate acutely in countries like India, where the conditions for algorithmic harm exist at a scale American lawyers can barely fathom. India's Economic Survey 2025–26 flagged the precipitous rise of digital addiction as a significant public health threat.
With roughly five hours of daily usage across hundreds of millions of users, total smartphone time in India runs into trillions of hours annually. The Annual Status of Education Report 2024 reveals only 57% of children aged 14–16 use phones for education, while 76% use them for social media, with youth aged 15–24 identified as most vulnerable to addiction and gaming disorders.
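The "trillions of hours" figure can be sanity-checked with simple arithmetic; the user count below is an assumed round figure for illustration, not an official statistic, while the five-hour daily average comes from the text above.

```python
# Rough check of the "trillions of hours annually" claim.
# ASSUMPTION: ~800 million active smartphone users is an illustrative
# round figure, not an official count; ~5 hours/day is cited above.
users = 800_000_000
hours_per_day = 5
annual_hours = users * hours_per_day * 365
print(f"~{annual_hours / 1e12:.2f} trillion hours per year")  # ~1.46 trillion
```

Even with a more conservative user count, the total comfortably clears one trillion hours a year.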
Government data shows internet connections in India leapt from 25 crore in March 2014 to 95.4 crore in March 2024 — a 279% increase in a single decade — placing algorithmically-driven feeds into the hands of an entire generation simultaneously. Research documents compulsive behaviours such as midnight scrolling — a frenetic, restless ritual corroding mental and physical health — alongside a preference for ephemeral online contact over substantive relationships. The sleep toll is measurable: a large-scale study of 45,000 Norwegian university students found that one hour of screen time before sleep elevated the risk of insomnia by 60%, while a 2025 survey by sleep researchers found that 84% of young adults use at least one social media platform daily, with pre-bed scrolling consistently linked to delayed sleep onset and reduced sleep duration. In a society where academic pressure on young people is already mounting, such habits compound anxiety and deepen mental distress.
Engineered addiction: What insiders revealed
Long before courtrooms took up the question, insiders were sounding the alarm. In the 2020 Netflix documentary The Social Dilemma, former Google design ethicist Tristan Harris and ex-Facebook executive Tim Kendall described how platforms are optimised for engagement and retention. Harris compared the compulsion to a slot machine: users check their phones with obsessive repetition, anticipating the dopamine hit of a notification. These misgivings were corroborated by the 2021 "Facebook Files" — internal documents leaked by Frances Haugen revealing that the company's own research had identified harmful effects among teenagers. Instagram worsened body image issues for one in three teen girls; 13.5% of British teen girls reported more frequent suicidal thoughts after joining the platform. These findings were withheld from lawmakers who explicitly requested them.
When Haugen testified before the US Senate in October 2021, she was clear: "Facebook's products harm children, stoke division, weaken our democracy. The company's leadership knows ways to make Facebook and Instagram safer and won't make the necessary changes because they have put their immense profits before people." Internal communications presented at the Los Angeles trial further revealed that senior Meta executives had discussed capturing users as young as tweens, noting that 11-year-olds were four times as likely to return to Instagram as users of competing apps — a wanton disregard for the welfare of children that weighed heavily in the jury's verdict.
Algorithm pushes teens to the edge
No case more vividly illustrates the lethal potential of exploitative digital design than the Blue Whale Challenge — an online game that propagated through social media targeting teenagers with a 50-day series of escalating self-harm directives, culminating in an injunction to take their own lives. Throughout 2017, Indian media documented numerous cases of child suicide and self-harm linked to the challenge. In Mumbai, a 14-year-old student jumped from the fifth floor of his building. In Kerala, two suicides were closely linked to the game within the same year. A Jodhpur teenager attempted suicide twice in a week, and had carved the shape of a whale into her arm. A clinical case report from Gauhati Medical College documented a student who completed 40 of the 50 tasks, including self-harm, before psychiatric intervention.
Researchers subsequently debated how many Indian suicides were definitively linked to the challenge versus those produced by media contagion. A systematic review traced approximately 50 incidents, of which 21 were completed suicides. Whether or not every death can be verified as directly linked, the underlying truth is ineluctable: algorithmically-amplified content targeting the vulnerable was the delivery mechanism for this harm.
Preventive governance in the Nordics
While American courts grapple with accountability after the fact, the Nordic countries have built legal structures that prioritise childhood over engagement metrics. Norway has proposed prohibiting social media for children under 15. Sweden's public health agency recommends no screen time before age two, no more than one hour daily for ages 2–5, and two to three hours for older cohorts. Denmark has seen widespread municipal implementation of school smartphone bans following national guidance to prioritise "analogue" learning environments. In 2024, the Nordic Council of Ministers issued a joint statement expressing grave concern about the "pernicious effects" of digital platforms on children's well-being, pledging coordinated policy action. The results of this cultural shift are tangible: Swedish retailers like Elgiganten reported that sales of "dumb phones" tripled between 2022 and 2024 as parents sought to decouple children from addictive algorithms.
Beyond the Nordics, Spain approved a draft law in 2024 to raise the minimum age for data consent and social media registration from 14 to 16. Australia set a global precedent in November 2024 by passing the Online Safety Amendment (Social Media Minimum Age) Act, which prohibits accounts for those under 16 and imposes fines of up to AU$49.5 million for non-compliant platforms.
India's slow regulatory response
India's own problems are multifaceted and well documented. The nation maintains one of the world's highest suicide rates for youth aged 15–29, with recent studies identifying this demographic as the most vulnerable to psychological distress.
A 2024 study of college students in Tamil Nadu found that over 26% engaged in excessive social media use, establishing a direct link between compulsive consumption and symptoms of anxiety and low self-esteem. India's Economic Survey 2025–26 explicitly notes that platforms design algorithms targeting the 15–24 age group's psychological vulnerabilities through features like auto-play and infinite scroll.
Cyberbullying has produced documented tragedies across the subcontinent: girls have faced targeted harassment campaigns culminating in self-harm, while young men, radicalised through algorithmically curated content, have developed distorted worldviews. Furthermore, clinical settings in urban India have recorded a surge in eating disorders directly linked to the visual culture of platforms like Instagram.
India has not stood entirely still. The Digital Personal Data Protection (DPDP) Act, 2023 classifies anyone under 18 as a child and mandates "verifiable parental consent" before any data processing occurs, imposing penalties that can reach hundreds of crores. Enforcement, however, remains fraught; age verification is difficult to police, and digital workarounds remain rampant. Crucially, while the DPDP Act protects data privacy, it does not yet regulate the "addictive architecture" of digital platforms — such as algorithmic feeds and variable reward notifications — leaving the broader question of exploitative design unaddressed.
Technology is not the problem — uninformed choice is
The Los Angeles verdict does not ask us to repudiate technology. It asks us to revisit the proposition that corporations may design products to exploit human psychology — especially the developing psychology of children — without consequence, without disclosure, and without consent. The tobacco analogy holds because of one element: knowledge. Tobacco companies knew their products were harmful. They obfuscated that knowledge. They fought regulation. The same reckoning is now beginning for social media.
The solution is not to ban technology but to enshrine informed choice as an unassailable right. Users — particularly young users and their parents — must know how these systems operate, what psychological mechanisms they exploit, and what the documented risks are. India, with nearly 97 crore internet users, a third of them young people navigating an inevitable digital landscape, must choose whether to lead or to follow. The algorithm does not wait. Neither should the law.