Is the OpenAI crisis a pointer to alien intelligence taking over?

The increasing use of AI to manufacture deepfakes on social media has the Indian government scrambling to develop a regulatory framework to root out the problem.
Sam Altman. (Photo | AP)

The sacking of Sam Altman, co-founder of OpenAI, his return to the pioneering tech company on popular demand, and then the sacking in turn of the board that did him in – all within a week – have set off massive reverberations in the technology world. For those who harboured fears about artificial intelligence (AI), and for those who did not know what AI is all about, it was a week of learning.

Last week, on Friday, 17 November, the board of the start-up founded in 2015 – today the most influential company in the fledgling AI industry – fired Sam Altman on a Google Meet call; co-founder Greg Brockman, stripped of his board chairmanship, quit in protest.
By Monday, Microsoft – an OpenAI shareholder that has pumped in $13 billion – said Altman and Brockman would join it to run a new AI research group. Queering the pitch further, over 500 employees of the start-up threatened in an open letter to quit and join Altman at Microsoft if he was not brought back.

In an ironic twist, Sam Altman returned on Wednesday to lead OpenAI. The first order of business was the removal of the board that had fired him the previous week. The only survivor on the board was Adam D’Angelo, CEO of Quora.

Project Q* warning

The saga has the script of a Hollywood blockbuster. The only problem is that AI is now at everyone’s doorstep – and the deepfakes it is being used to churn out on social media have the Indian government scrambling to develop a regulatory framework to root out the problem.

Celebrities like Shah Rukh Khan and Virat Kohli have pointed to morphed images and voices of theirs being used on social media to promote betting apps. Prime Minister Modi chose to raise the misuse of AI on Wednesday at the G20 leaders’ virtual summit, calling for global action on the subject.

Though events are still unfolding at OpenAI, the corporate battle there holds the key to many of these questions. OpenAI, as many know, is an artificial intelligence research laboratory building and developing advanced language models. Its most successful product, and the one that made the start-up and Sam Altman household names, is ChatGPT – an AI assistant that can write your CV or produce a presentation on nuclear physics.

One of the triggers of the Altman coup was reportedly a letter written by staffers to the board warning against the commercial launch of an artificial intelligence discovery that they said could threaten humanity. It concerned Project Q* (pronounced Q-Star), seen as a breakthrough towards what is called artificial general intelligence (AGI) – loosely defined as an autonomous computing system that surpasses humans in most economically valuable tasks. Q*’s reported breakthrough was in solving mathematics problems.

Analysts say that, at its current level, AI can write and translate language by statistically predicting the next word, which is why answers to the same question can vary widely. Graduating to maths – where there is only one right answer – would imply that AI has greater reasoning capabilities resembling human intelligence. This is what was red-flagged as a potential danger.
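As a loose illustration, the toy Python sketch below (the prompt and probabilities are invented, not drawn from any real model) samples the “next word” from a probability distribution the way a language model does, showing why the same question can get different answers on different runs:

import random

# Toy next-word distribution a model might assign after the prompt
# "The capital of France is" -- the numbers are invented for illustration.
next_word_probs = {"Paris": 0.85, "a": 0.08, "the": 0.05, "Lyon": 0.02}

def sample_next_word(probs, temperature=1.0):
    # Higher temperature flattens the distribution, making unlikely
    # words more probable and the output less repeatable.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights)[0]

# Five runs of the "same question" can produce five different answers.
print([sample_next_word(next_word_probs, temperature=1.5) for _ in range(5)])

In a real model the distribution comes from billions of learned parameters, but the sampling step is essentially this; a maths problem, with its single right answer, leaves no room for such variation.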

At a general level, there have been fears that AI could get out of hand and pose a danger to the human race. How true are these concerns? In May 2023, the Center for AI Safety, a nonprofit research and advocacy organisation, said: “The risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was taken seriously because it was signed by many of the technology industry’s godfathers.

Sample this: tasked with reserving a table at a popular restaurant, an AI shuts down mobile phone networks and traffic lights to prevent others from getting the table! It is efficient at achieving goals, but lacks the moral values of its creators.

Philosophical crisis

But this may be carrying the argument too far. In an article republished by Scientific American recently, the authors say: “… there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense.” The danger, rather, is that “it can degrade abilities and experiences that people consider essential to being human.”

When daily judgments – from which loan to opt for to which TV programme to watch in leisure time – are farmed out to AI algorithms, people will slowly lose the capacity to take decisions. That is the biggest fear.

ChatGPT’s writing abilities, for instance, have ended the need for original assignments – whether by PhD students or by a sales executive working on a marketing plan. In effect, it is eroding our ability to think critically. AI also has the frightening ability to produce deepfake videos and audio. Cybercriminals can use AI voice cloning for high-tech heists and bank scams.

But is the lurking danger big enough to equate with a Covid pandemic or a nuclear war? Probably not. Critical analysts say existing AI applications, and even the coming AGI, are still at the level of executing specific tasks rather than making human-like judgments.

Yet the possibilities are scary. We don’t know where Project Q* can lead. It is good that someone is blowing the whistle, and that even corporate giants are worried about producing ‘intelligence’ bigger than themselves.
