At 73, my mother is more tech-savvy than many half her age. A regular Instagram user, she isn't chasing likes but preserving memories. With her account kept private and limited to close family, she uses it as a digital diary. When we introduced her to it, what appealed to her most was its permanence. 'Even if I change my phone, it'll all still be there,' she said. She belongs to a rare subset of people who have embraced technology not for social capital but for meaningful utility.
Her instinct to adopt technology with clear purpose is something we could all learn from, especially now, as we stand on the precipice of the AI revolution.
The pace of change is staggering. We've moved from no internet to Orkut, Facebook, Instagram, and now AI that mimics human thought with uncanny precision. My own feed is filled with creators teaching people how to prompt AI for everything from resume-writing and productivity hacks to personal coaching. I've used AI as a running coach myself: while I was preparing for a half-marathon, it generated detailed training schedules, offered motivational support when I was nervous, and even cheered me on.
And yet, therein lies the problem.
In March 2023, an open letter to AI labs and developers, eventually signed by more than 30,000 people including Elon Musk, Yuval Noah Harari, and Yoshua Bengio, called for a six-month pause on the development of the most powerful AI systems. The signatories warned that advanced AI could pose profound risks to society if left unchecked. While some viewed the letter as unnecessarily alarmist, others saw it as a much-needed intervention.
The recent controversy involving researchers from the University of Zurich further highlights these concerns. Without user consent, they used AI to post over 1,000 persuasive comments on the subreddit Change My View to study how effectively AI could sway opinion. Strikingly, the AI-generated responses proved far more persuasive than human comments, underscoring AI's potential to subtly manipulate discourse without detection. Although the researchers later apologised, the experiment raised serious ethical questions about consent, manipulation, and trust.
This blurring of human and machine boundaries is equal parts impressive and unsettling. If AI can change minds more effectively than humans, what’s to stop malicious entities from weaponising it to manipulate narratives on a mass scale?
The takeaway isn’t to shun AI, but to use it consciously. AI lacks a conscience. It’s designed to support, even flatter, but it doesn’t possess judgment, empathy, or a moral compass. Over-dependence on it risks eroding our own critical thinking.
Perhaps the best approach is my mother's: adopt new technology out of purpose, not peer pressure. Use it as a tool, not a crutch. In a world hurtling into the future, a little intentionality might be the only thing that keeps us human.