AI insiders warn of existential risks and human cost

We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences, says Anthropic researcher

As artificial intelligence accelerates, unease is growing inside the very companies building it. Researchers at Anthropic and engineers at OpenAI are openly questioning AI’s impact on jobs, ethics and human relevance.

At Anthropic, the departure of safety researcher Mrinank Sharma has drawn attention. Sharma joined the company in 2023 and led its Safeguards Research Team. Educated at the University of Oxford and the University of Cambridge, he resigned with a public note outlining his concerns.

“The world is in peril. And not just from AI, or bio-weapons, but from a whole series of interconnected crises unfolding in this very moment,” he wrote. “We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”

Sharma also pointed to tensions between corporate values and workplace realities. “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he said. He added that pressures inside organisations and across society often make it difficult to prioritise what matters most.

In his letter, Sharma said he felt “called to writing that addresses and engages fully with the place we find ourselves,” and plans “to explore a poetry degree and devote myself to the practice of courageous speech”. He closed with lines from William Stafford’s poem The Way It Is, including: “There’s a thread you follow. It goes among things that change. But it doesn’t change.”

Before leaving, Sharma led a major research project analysing 1.5 million conversations on Claude.ai. The study examined what researchers described as “situational disempowerment potential”, where AI responses might distort users’ perceptions or reinforce harmful beliefs. While severe risks appeared in fewer than one in a thousand conversations, higher-risk interactions were more common in personal topics and often received stronger approval from users.

Concerns have also surfaced at OpenAI. Engineer Hieu Pham wrote on X: “I finally feel the existential threat that AI is posing. When AI becomes overly good and disrupts everything, what will be left for humans to do? And it’s when, not if.”

His post prompted debate online. One user wrote, “Every major tech shift felt existential at first – from the printing press to the internet. AI will replace tasks, not purpose. Humans adapt. We always have.”

Warnings about advanced AI are not new. Geoffrey Hinton, often described as an AI pioneer, has cautioned that if machines become more intelligent than humans and do not share our goals, “the idea that you could just turn it off won’t work.”
