AI, Frankenstein’s monster, and an electronic Milton

Have humans created thinking machines (Artificial Intelligence, or AI) that are smarter than us and could end up enslaving us?

"I don't know how it happened. I think this machine understands it's time to think."
C J Tan, IBM scientist

An increasing number of people—including Elon Musk, the pioneer of ‘driverless’ cars—fear that this could happen, in the all-too-near future, unless we call a halt to technology that has already eroded the difference between man and machine.

The man-versus-machine confrontation predates the Industrial Revolution, but it has escalated in contemporary times, when increasingly sophisticated technology is seen simultaneously as a boon and a bane, and steadily blurs the distinction between the mechanical and the human, between the seemingly omnipotent created and its correspondingly helpless creator.

A degree of such paranoia is perhaps inescapable in a progressively depersonalised environment in which so-called 'smart' machines do everything from billing us for utilities such as telephones and electricity to regulating our traffic, planning and fighting our wars, playing us at chess (and occasionally beating us), and creating our art and our pornography, raising disturbing questions of ethical and moral responsibility, of what is truth and what is falsehood.
The realm of the human and the domain of the machine become inextricably intermeshed in a web of dangerous liaisons.

Traditional cybernetics pooh-poohs such popular fears. Asserting the paramountcy of the human programmer, the computerologist points out that despite its uncanny speed in assessing either/or choices, even the most advanced calculating machine is limited by its binary functioning and incapable of original or autonomous ‘thought’. At the most a machine can be an embodied ‘brain’ that can process predetermined data; it cannot be a disembodied, self-conscious, self-motivating ‘mind’.  

The mind-body dualism, formalised by Descartes in western philosophy, adopts the self-reflection of thought—I think, therefore I am—as the great divide between human perception and instinctual or mechanical responses. Mind is autonomous and proactive; the body, of which the brain—organic or electronic—is part, is only reactive. I may not be able to beat a supercomputer at chess. But I turn on the switch which commands it to beat me at chess. I can do what I may; the computer has to do what it must. Or does it?

Recent developments have led several cyberneticists to speculate that machines are no longer just making moves from an either/or mathematically programmed menu but are creating new patterns of 'thought' through 'independent' decision-making.
This may not be as far-fetched as it sounds, given that 'fuzzy logic', which transcends the bipolarity of either/or, is now an accepted part of domestic washing machines and other devices far less advanced than supercomputers.

If today ‘fuzzy logic’ can help a machine decide how long a particular load of laundry will take to wash, or enable a computer to project alternative nuclear war scenarios, what might it be capable of tomorrow? Today, there are machines which can make other machines, computers which can programme other computers. Could a similar snowball effect enable ‘fuzzy logic’ to generate more and more of itself till quantitative change becomes a qualitative transformation? Cogito, ergo robot?

Science fact has a way of catching up with science fantasy, such as the one about the scientist who, unable to write poetry for the woman he loves, programmes his computer to do it for him. The machine ‘learns’ to write poetry, and in doing so falls in ‘love’ with the lady in question. Outraged, the jealous scientist ‘kills’ the machine by erasing its memory bank. As the machine slips into ‘oblivion’, it repines for its unrequited love. In this updated version of Frankenstein, who is the morally superior creator and who the created monster? Who is the hero, and who the villain?

If, as seems likely by current trends, machines not only get 'smarter' but also more 'sensitive' and 'creative'—computer-generated painting and music are already legitimate adjuncts of mainstream aesthetics—futuristic ethics might soon have to address issues raised by 'crimes' such as digiticide: the wilful destruction of a 'thinking' machine which, in functional terms, is more 'humanly evolved' than its human master.

Mary Shelley’s classic horror story—whose true horror lies not in Frankenstein’s tragically misunderstood monster but in Frankenstein’s own twisted desire to play God—foreshadowed such moral quandaries. But even before Shelley’s Gothic allegory, in Paradise Lost, the blind Milton, to his own astonishment, found himself siding with Lucifer, Star of the Morning, arraigned in dubious battle against a repressive God, railing against his Creator for creating him, magnificent in doomed defiance.  
Will some future electronic Milton ask of its creators a similar question? And what will our reply be, to a machine more ‘human’ than we ourselves are?

Jug Suraiya

Writer, columnist and author of several books

jugsuraiya@gmail.com

The New Indian Express
www.newindianexpress.com