Does AI dream of digital sheep?

Rick Deckard, a character in a Philip K Dick novel that was made into a film, wondered whether androids dream. Today, we know AIs hallucinate: they sometimes make things up. Who should be held legally responsible when they do?
Actor Harrison Ford played Rick Deckard in the 1982 film adaptation, Blade Runner. (Photo | Wikimedia Commons)

In Philip K Dick’s Do Androids Dream of Electric Sheep?, readers are introduced to a post-apocalyptic Earth contaminated by radioactive debris from a global nuclear war. The war has driven most animal species to extinction and increased the pressure on humans to emigrate to off-world colonies. The promise of a personal, extremely human-like android servant is used to motivate humans to leave the planet. However, once the androids start to rebel, they are banned from Earth. A number of them nevertheless manage to escape. A policeman named Rick Deckard takes on the role of an android bounty hunter, tasked with finding and ‘retiring’ the fugitives.

Beyond the ethics of the task itself, the novel’s central concern is the place of androids and other extremely human-like autonomous systems. Are they sentient? Do they have a soul? And, as Deckard asks himself, do androids dream?

While today’s most advanced artificial intelligence (AI) systems (think of ChatGPT and Gemini) are far from achieving sentience or dreaming of a greater purpose, they do ‘hallucinate’. These AIs are stochastic parrots: they chatter convincingly, but without understanding or consciousness. They are trained on massive datasets that enable them to recognise, translate, predict, or generate text and other content.

However, they can also make things up and make claims that are untrue. This is not a problem of bad inputs, but a function of how the large language models (LLMs) that underpin these AIs work. Who, then, should be held accountable when things go wrong?

Where one’s right is invaded or destroyed, the law gives a remedy to protect it or provides compensation (ubi jus ibi remedium). Thus, whenever an autonomous system goes wrong or causes harm, someone has to be held liable for the damage. Can the autonomous system itself be held responsible for the wrong? But if soul, sentience and salvation, much like the act of dreaming, are quintessentially human, how can a non-sentient system be held responsible?

While the need to assign responsibility and establish liability for AI is obvious, the idea that such systems be granted a legal persona, much like corporate entities, is quite new. At present, apart from humans, both private and public bodies can qualify as legal persons if the legislature attributes ‘legal subjectivity’ to them. Legal subjectivity is attributed by positive law, just as subjective rights depend on objective law. In the case of companies, for instance, this attribution creates a legal fiction that separates ownership from the management of the entity and limits liability. This separation, pertinently, does not absolutely protect the humans behind the company from liability, as the ‘corporate veil’ can be lifted where the corporate form is used for some manifestly improper or fraudulent purpose.

However, unlike a company, where the actual decisions are made by humans, an LLM that hallucinates involves no application of mind by the human who owns it; there is only a performative speech act that generates incorrect or misleading output. The hallucination is not the result of a capability to engage in intentional action, and the system therefore lacks legal agency. As such, without self-consciousness and autonomous agency, the attribution of a legal persona is, at present, more of a political question.

How, then, should the misleading information and the breaches caused by such autonomous systems be dealt with? A recent ruling by a Canadian civil resolution tribunal in Moffatt vs Air Canada, which follows similar decisions in the US and Hong Kong, is a significant development. The case examined whether a company can be held responsible for inaccurate information given by an AI chatbot on its website. The tribunal held that a firm can be held responsible for false statements and misleading information (caused by hallucinations) provided by a chatbot on a publicly accessible commercial website. In its words, “The applicable standard of care requires a company to take reasonable care to ensure their representations are accurate and not misleading.” It further held that the chatbot could not, as Air Canada had asserted, be treated as a separate entity whose conduct absolved the airline of liability.

This is so because an electronic agent, a chatbot employed by a person, is a tool of that person, the tribunal held. Ordinarily, the employer of a tool is responsible for the results of its use, because the tool has no independent volition. When computers are involved, the requisite intention flows from their programming and use.

Holding the employer liable for the actions of the tool in such cases does not exhibit a dislike of technology; it reflects the fact that the autonomous system functions solely on the basis of the information and instructions provided to it. The employer is thus placed under a duty of care to monitor the autonomous system it employs and to prevent any damage resulting from its use, the tribunal ruled. Moreover, holding the employer liable for the actions of the autonomous system is also practicable, as it provides legal clarity for the victim.

When the LLM is seen as a tool, there is little trouble in attributing the consequences of its hallucinations, and the actions that follow from them, to the company that employs it.

Saai Sudharsan Sathiyamoorthy

Advocate, Madras High Court

(Views are personal)
