Sonnet or speech, seek bot

The nightmare of machines becoming incomprehensibly powerful, a staple of dystopian science fiction for over a century, is becoming a reality.

Narendra Modi’s speechwriter is an endangered species. So is anyone else who does process-defined writing for a living, like a lawyer’s assistant, a website developer or a technical translator—the person who translates user manuals for devices ranging from toasters to MiG fighters. Equally endangered is anyone who makes commercial art—illustrations, book jackets, posters. Maybe even an icon like punk queen Vivienne Westwood, who died last week aged 81, would have been restricted to Andy Warhol’s (or was it his photographer Nat Finkelstein’s?) “15 minutes of fame” had she been born in our era.
 
Since its release on November 30, the social media footprint of the AI chatbot ChatGPT (formally, GPT3) has grown to rival that of celebrities and public figures. Easily accessed through a chat interface that requires no knowledge of coding or symbolic logic, it invites all sorts of people to prod it into producing poems, school essays, commercial art, Python code and hilarious instructions for exiting the UNIX editor Vim, written in the idiom of the Old Testament: “Be ye not as the heathen, who knoweth not the ways of the Lord, and therefore remain forever trapped in the depths of the editor…”
 
But surprisingly, ChatGPT’s prowess has not rekindled speculation (quite the craze in the early days of the internet) that a machine has finally passed the Turing test. The wartime code-breaker and pioneering computer theorist Alan Turing formulated his test very simply in 1950: if a machine designed to mimic humans communicates in an idiom indistinguishable from theirs, it should be regarded as intelligent.
 
Turing mania began with the first ‘chatterbots’—programs which interacted with humans through typed messages using natural language processing. The pioneer was Eliza, written at MIT in the mid-Sixties by Joseph Weizenbaum, one of the founding fathers of AI. It was named after Eliza Doolittle in Pygmalion because it was constantly improved by its interactions with people—just like its remote descendant ChatGPT. Julia was born in the Nineties, a child of the internet. Written by Michael Mauldin (who also wrote the Lycos search engine), she was Turing-grade in short conversations, but in longer chats she gave away her machine identity by constantly trying to steer the conversation towards stuff she knew. Her favourite issue in the Nineties was dog people versus cat people. For people who were neither, the conversation quickly became artificial.
 
The website bots which have replaced human-run helplines are the descendants of Eliza and Julia. They’re limited in their ability to access and process information, and can only perform specific tasks, usually indifferently. GPT3, the third iteration of OpenAI’s Generative Pre-trained Transformer, is a different beast, learning from resources all over the internet, including the input queries of the people accessing it. OpenAI, based in San Francisco, was founded in 2015 by Sam Altman (of Y Combinator), Elon Musk, Peter Thiel, Amazon Web Services and others, who committed to collaborating with the industry and keeping its research open to the public.
 
In the public imagination, ‘open’ suggests free—as in free beer. Which suggests, in turn, that open source is woolly-headed altruism. Free software pioneer Richard Stallman has been railing against this misrepresentation for decades, but no one ever listens. Maybe OpenAI will solve his problem, too, because it’s quite commercial. It’s frightfully expensive to run the machine that spits out those extraordinary poems and essays, and OpenAI, which was founded as a nonprofit, is now a for-profit enterprise that caps its investors’ returns at 100x. The speed breaker is also a marker of how much money the technology could make.
 
GPT3 is not the only machine in the ring. GPT4, the next iteration, is expected soon. And Google, which built the first ‘transformer’ (that’s the T in ChatGPT) back in 2017, tested PaLM (Pathways Language Model), a rig much bigger than ChatGPT, in April 2022. For years, BERT has run in the background of its search engine, helping it understand the context of our searches. If Google seems to magically know what you were looking for when you were vague yourself, it’s this model at work behind the scenes. People in the industry surmise that Google will one day deploy AI at huge scale across all input/output classes—text, image, sound and code. Its effect on the data ecosystem would be extraordinary.
 
For now, though, Google is running in quiet mode. ChatGPT has the world’s eyeballs because it is the first AI architecture to invite public interaction. It has its faults. It’s loaded with filters to strip out racist and sexist queries, but it’s susceptible to stereotypes, which underlie both. Someone fed it stereotypes of the Indian states, and it created snapshots of their people. The uproariously happy veteran from Punjab wears campaign ribbons and holds a beer. The guy from West Bengal is a bearded philosopher. But apart from these attributes, they do not look much like people you might know from those states.
 
Some users also complain that with the right prompts, ChatGPT can lie convincingly. In 2001: A Space Odyssey, HAL 9000, the onboard computer on whom the plot hinges, lost his mind because his makers required him to lie, something a machine was supposed to be incapable of doing. ChatGPT suffers from no such moral crisis and can lie as earnestly as a campaign manager. It is capable of propaganda. Which means, of course, that it’s curtains for Narendra Modi’s human speechwriter. No competition!

Pratik Kanjilal

Editor of The India Cable

(Tweets @pratik_k)

The New Indian Express
www.newindianexpress.com