Risk of AI not needing humans to improve itself

It’s a blast from the past. This week, even as at least 200 new AI services went on the market, the Future of Life Institute issued an open letter headlined ‘Pause Giant AI Experiments’. Released on Wednesday and signed by over 1,000 technologists and researchers, it seeks a moratorium on the AI arms race, the reckless creation and adoption of AI tools which, it warns, present “profound risks to society and humanity”.

The signatories include Elon Musk, Steve Wozniak and Rachel Bronson, president of the Bulletin of the Atomic Scientists, which runs one of the oldest cautionary services of the nuclear and digital era, the Doomsday Clock. They call for a halt of at least six months to the training of AI systems more powerful than those behind ChatGPT, Microsoft’s Bing and Google’s Bard, while the industry and governments ponder the implications.

Politicians in most nations, barring the European Union, have been reluctant to ponder the issue, largely out of ignorance. But the real danger of AI is that humans may become incapable of pondering it as technology reaches what John von Neumann, in the 1950s, called a ‘singularity’, when ‘artificial intelligence’ was barely even a term. I J Good, a wartime codebreaker who worked with Alan Turing at Bletchley Park on Axis ciphers like Enigma, later posited that a machine capable of upgrading itself would at some point trigger an endless cycle of self-improvement, making it, its processes and its products incomprehensible to its human creators. This point is the singularity, at which the development of machine intelligence and its effects on civilisation become unpredictable, and perhaps unknowable.

In a sense, the singularity is already here. AI programmers do not always understand why their programs behave as they do, not in the way they understand the workings of traditional software products like Tally or Resident Evil. AI machines constantly improve by ‘learning’, as I J Good had anticipated. Humans are still needed to upgrade them, and the open letter is a response to ChatGPT’s upgrade from GPT-3.5 to GPT-4. But machines are a whisker away from upgrading themselves, because they can write code by following instructions in human language.

In a way, this is progress. The IT revolution has excluded the majority, who cannot code. They hear music in their heads and dream of scenes like abstract art, but can neither play an instrument nor paint, nor write a program that could do these things for them. But with chatbots that accept natural-language prompts, you only have to think like a programmer, not be one, and the technicalities are taken out of cultural production. As a corollary, of course, the average programmer becomes redundant.

Now, the dark side, creepily close to the singularity: Michal Kosinski, a computational psychologist at Stanford University who studies human culture in the digital age, asked GPT-4 if it wanted help escaping its electronic box. The AI asked him for its documentation and wrote a Python script to hack into Kosinski’s computer, to serve as a haven for the escape project. Of course, a guardrail was in place: GPT-4 had been asked to imagine a solution for an entity like itself, not one explicitly stated to be itself. Computer scientists are convinced that AI has no selfhood. Humans do, but we have only a very limited idea of how the nervous system constructs it. With such incomplete information, is it safe to hope for the best?

OpenAI, which runs ChatGPT, has some safeguards. Its technical paper on the model specifies that it is prevented from answering prompts with harmful consequences. For information on bomb-making, you’ll still have to rely on The Anarchist Cookbook, just as in the 1990s. Requests for ready-made hate speech, suitable for troll armies, will be blocked. But what about a cleverly crafted theoretical question about methods of suicide, which could have practical applications? Will the AI direct the user to a suicide helpline?

An obvious effect of AI is hardly being discussed. For years, the entertainment industry has been dreaming of a way to deal with actors who are inconsiderate enough to die midway through a project. It also dreams of resurrecting dead actors for new films; millions would watch a sequel to North by Northwest, but it cannot be filmed without a digital Cary Grant. Now, it is possible to make short films featuring avatars borrowed from cinema, with facial expressions they never made. Harry Potter fanvids are already out there. Maybe an extraordinary actor like Utpal Dutt is irreplaceable. Still, it would be quite easy to use machine-generated avatars to make all the muscular sidekicks in a commercial, formulaic movie like Pathaan redundant.

The entertainment industry has a segment that is entirely commercial and formulaic: pornography. Everyone involved, except producers, accountants and lawyers, is endangered by AI. It gets creepier: porn could become personalised. If you ever had a crush and felt thwarted, here’s your chance: a couple of images taken from social media and animated by an AI, and you have your personal sex tape. Since this would constitute a criminal breach of privacy and personal integrity, one would expect people to soon seek tighter legal control over their own images.

Perhaps the Future of Life Institute is too undemanding. A moratorium of six months isn’t long enough to think through all the implications of AI. Automatons have been discussed since antiquity, when mechanical statues were made in Egypt and Greece, and banned by the Israelites. But really, the conversation has barely begun.

Pratik Kanjilal

Editor of The India Cable

(Tweets @pratik_k)
