There is a cave in the Ardèche gorge of southern France—Chauvet—that most people will never see. The French government has kept it sealed since its discovery in 1994, admitting only a handful of researchers per year, concerned that human breath alone could destroy what is inside. The cave walls are covered in paintings of lions, rhinoceroses, horses, and aurochs, showing a sureness of line and a sensitivity to light and shadow that would not look out of place in a contemporary exhibition at MoMA. The artists who made them lived approximately 36,000 years ago. They had, in every measurable sense, the same brain you are using to read these words. The difference is that they had no tablets or cell phones to hinder how their minds grew and changed, influences that would have rippled through the generations that followed.
This year, Andhra Pradesh and Karnataka began drafting some of the most aggressive digital-age child protection measures in the country. Andhra Pradesh has proposed a ban on social media use for children under 13, alongside stricter, graded access for teenagers, with exposure limits for users aged 13-16. Karnataka, meanwhile, has proposed restrictions for users under 16, and even an AI-focused regulatory framework to monitor harmful content and deepfakes. Madhu Bangarappa, Karnataka’s Minister of Primary & Secondary Education and Sakala, says, “Children are born as Vishwa Manava—open, expansive, and inherently capable of becoming their best selves—but as they grow, unchecked influences can narrow that potential. Today, social media, while informative in parts, often distorts, misleads, and fosters negativity, especially among impressionable minds.” Prolonged use of digital media leads to lower attention spans and greater impulsiveness.
- Decreases the child’s ability to switch between two concepts, or to hold multiple concepts in mind simultaneously
- Encourages sedentary behaviour, leading to obesity
- Disrupts sleep cycles, as blue light from phone and computer screens interferes with REM sleep
- Minimises face-to-face human interaction, encouraging social isolation and anxiety
- Hooks younger children through a “reward system” similar to gambling
Dr Sanjiv Nichani, OBE, founder, Healing Little Hearts Global Foundation, senior paediatrician, Leicester Children’s Hospital, says, “After four decades in paediatrics, I have witnessed a profound and alarming shift—the rise of what I call the ‘Screendemic’: a public health crisis driven by excessive smartphone and social media use, where children are growing up in technology-controlled environments, facing escalating mental health disorders, attention fragmentation, and developmental delays.” India’s Chief Economic Advisor has openly called for age-based limits on social media to counter addiction and cognitive harm among children. According to Dr Prof. Vishal US Rao, dean and professor, HCG, and member of the Consultative Group to Principal Scientific Advisor to PM, Government of India, “A blanket social media ban for under-16s is unlikely to work; instead, we need evidence-based regulation—chronological feeds, limits on addictive design, no targeted ads for minors, digital literacy education, and routine screen-time audits—so we reduce harm while building resilience and protecting sleep and mental health.” The brain, especially the developing brain, is now exposed to forces evolution never prepared it for. The ancient mind struggled with distraction, while the modern mind is engineered for it. And yet, even as governments attempt to regulate the external world, the internal question remains unresolved. What happens when thinking itself becomes optional?
The brain, once the final frontier of privacy and identity, is becoming something else: an interface, a node in a larger system of intelligence that extends beyond the individual, and nowhere more so than in children. Dr Manoj Sharma, head of SHUT (Service for Healthy Use of Technology) Clinic, NIMHANS, says, “Excessive gadget and social media use is driven by FOMO, fuelling a constant need for validation and instant gratification, which leads to fatigue and increases vulnerability to cyber risks.” With the launch of the iPhone in 2007, touch devices spread rapidly, and those born from 2007 onwards came to be called the Touch Generation, or the Screen Generation. Emerging studies report a variety of health hazards in preschoolers (2-5 years old) and school-aged children (6-12 years old)—body image and sleep issues, anxiety, depression, and poor academic performance. The heaviest users are children aged 2 to 12 years, and their already rising sedentary behaviour worsens the situation. Urban kids experience higher anxiety levels, likely tied to social media and online comparison, while rural kids suffer more eye strain from longer TV sessions without breaks.
Similar effects have been linked to the “hikikomori” condition—extreme social isolation during childhood. A study by the National Institute of Mental Health and Neurosciences (NIMHANS) in Bengaluru found a rising trend of anxiety and depression among Indian teens, and researchers suggest that extensive social media use could be one of the reasons. A 15-year-old boy developed an addiction to scrolling and gaming, clocking over 11 hours of screen time daily. This led to cardiovascular issues, a loss of motivation to study, and eventually a failure to progress to the tenth standard. With both parents being busy working professionals, there was little time or support for consistent digital detoxification. He was admitted to a clinic, but struggled to break the habit. An American study concludes that prolonged sitting time, or time spent in “non-exercise” activities, is associated with a higher risk of cardiovascular and metabolic diseases. Worse, childhood lifestyle choices affect health in adulthood. Recent WHO guidelines for young digital users recommend a reduction in screen time.
The question neuroscientists, cognitive psychologists, and philosophers are urgently trying to answer is not whether this landscape is changing the brain. It is. But are parents, researchers and governments paying attention to the changes?
In 2011, psychologist Betsy Sparrow at Columbia University, working with colleagues Daniel Wegner and Jenny Liu, published a landmark study in the journal Science. The study showed that when people are told that information will be available to them later via a computer, they are measurably less likely to remember the information itself but significantly better at remembering where to find it. The brain had already begun treating the internet as a kind of external hard drive. The researchers called it the “Google Effect”. At the time, this seemed like an efficient adaptation. Why memorise what you can look up? But researchers who have spent more time with the data are less sanguine. Memory is not merely a retrieval system; it is the underlying layer of thought. The connections between stored ideas can only form if the ideas are actually in your head, where they can brush up against each other. A 2024 meta-analysis published in Frontiers in Public Health confirmed that the Google Effect is real, robust, and growing.
The human brain, that three-pound marvel of electrochemical architecture, took the better part of three million years to reach its current form, surviving ice ages, famines, volcanic winters, and the extinctions that claimed other species along the way. It invented language, mathematics, music, law, and the internet. It painted the Sistine Chapel ceiling, wrote King Lear, and put a man on the moon. And now, it is navigating the world of smartphones, social media algorithms, and artificial intelligences sophisticated enough to write poetry, compose symphonies, and pass medical licensing exams.
Research published in a 2020 edition of Dialogues in Clinical Neuroscience noted that the average American smartphone user touches their screen approximately 2,176 times a day. The endless scrolling, pinching, and swiping, the habitual thumb-travel toward the notification icon, turned out to induce measurable changes in the cortical map of the fingertips. The phone was, in the most literal sense, reshaping its user’s brain. A two-year-old toddler was brought by frantic parents to a mental health facility. The child had been given a mobile phone during meal times and had become attuned to the swiping motion. He began to mimic this behaviour even in real life—when he saw faces or people, he tried to swipe their foreheads.
Social media deserves a moment of particular scrutiny, not because it is unique in its neurological effects, but because it may be the most deliberately engineered of them. Research reviewed in Frontiers in Cognition (2023) shows that excessive social media use weakens self-control over reward-seeking and promotes what researchers call “compulsive checking behaviour”: the near-involuntary reaching for the phone at every moment of unstructured time. The result, as a 2023 University of Texas study documented, is that the mere presence of a smartphone on a desk—even face-down, even switched off—reduces available cognitive resources.
And then came ChatGPT. On November 30, 2022, OpenAI released its large language model chatbot to the public. Within five days, it had one million users. Within two months, one hundred million. It became, as the Harvard Gazette noted, the fastest-adopted technology in human history. Generative AI was not merely a new tool. It created a new cognitive environment. Researchers who had spent years documenting the neurological effects of internet use and social media found themselves confronting a phenomenon of an entirely different order. The internet had changed how we access information. Social media had changed how we seek social validation. Artificial intelligence was proposing to change how we think.
In January 2025, a study published in the journal Societies found a significant negative correlation between frequent AI tool usage and critical thinking abilities. The more participants used AI to perform analytical tasks, the less they engaged those analytical faculties themselves, and the weaker those faculties became. That same year, a study presented at the prestigious CHI Conference on Human Factors in Computing Systems, co-authored by researchers including those affiliated with Microsoft Research, found that knowledge workers who used generative AI heavily reported self-assessed reductions in cognitive effort and showed measurable increases in automation bias. Workers were, in effect, subcontracting their judgment to a machine and, in doing so, finding their own capacity for judgment slowly atrophying. The concept has acquired a clinical-sounding name—AICICA (AI-Induced Cognitive Impairment in Cognitive Abilities). The framework draws on the well-established “use it or lose it” principle of neuroplasticity.
On the other hand, a 2024 study published in Science Advances by economists Anil Doshi and Oliver Hauser found that generative AI tools measurably enhance individual creativity: users who worked with AI assistance produced more original and diverse outputs than those working alone. The same study, however, found a darker corollary: while individual creativity increased, collective creative variety decreased.
The good news is that the negative neuroplastic effects of excessive digital use are not permanent. Digital detoxes, attention training, mindfulness practice, and modified usage habits have all been shown to reverse problematic neural adaptations within weeks to months. Then there are phones like the Light Phone, designed to be used as little as possible, stripping away social media, browsers, and endless apps. The brain that can be rewired can be rewired again. A possible solution?
- Preserving the practices that build deep neural circuits: sustained reading, writing by hand, memory practice, independent problem-solving.
- Designing AI tools that prompt critical reflection rather than passive consumption—for instance, tools that ask users “Does this align with what you know? What might be missing?”
- Recognising that the brain retains neuroplasticity throughout life, so problematic neural adaptations from excessive digital use are never permanent.
Go back to Chauvet, one final time. Stand, in imagination, beside whoever it was who pressed a palm against that limestone wall 36,000 years ago and blew red ochre through a hollow bone, leaving a perfect handprint in negative—a personal signature, an act of self-assertion against the indifferent dark. That unknown person had the same capacity for abstraction, symbolic thought, and neuroplasticity that we do. The difference between that cave painter and us is not the brain. It is the environment the brain is asked to navigate. It once navigated a world of physical immediacy and social intimacy, of fire, stars, and seasonal change. It was a world that, for all its violence and hardship, was calibrated to the cognitive apparatus that evolution had built. We now navigate a world of infinite information, algorithmic curation, synthetic social connection, and artificial intelligences sophisticated enough to persuade us we no longer need to think for ourselves. The Smithsonian’s Human Origins Program reminds us that the modern brain—the one that passed its architectural final exam between 1,00,000 and 35,000 years ago—did so in the context of climate catastrophe and existential threat. It was shaped by difficulty, maintained by effort, and three million years in the making. It remains available, in all its baroque complexity, to anyone who still chooses to use it. The cave is there. The hand is yours. The question is what you will press it against.