'Empathy is not a programmable output': Researchers warn AI users

While AI can simulate understanding, any “empathy” it purports to have is a result of programming that mimics empathetic language patterns.

The increased use of Artificial Intelligence (AI) in our everyday lives, including our most intimate spaces, gives rise to a philosophical conundrum: could attributing human qualities to AI diminish our own human essence?

A recent publication by researchers Angelina Chen, Sarah Kögel, Oliver Hannon and Raffaele Ciriello confirms these fears.

The research identifies a dehumanisation paradox: by humanising AI agents, we dehumanise ourselves, blurring the ontological line between human and machine. This paradox challenges conventional understandings of human consciousness in the digital era and draws attention to ethical issues tied to personhood and consent.

Digitising companionship

AI 'companionship' has garnered great attention in recent times, owing to an alarming increase in loneliness among adults.

AI apps such as Replika allow users to create and interact with custom digital partners, and Replika Pro can even turn the AI into a "romantic partner". Nor are AI companions confined to screens: companies such as JoyLoveDolls sell interactive sex robots with customisable physical features and AI-driven auditory responses.

Although a niche market at present, history suggests that today’s digital trends will become tomorrow’s global norms.

Humanising AI

Anthropomorphism refers to the human tendency to attribute human traits to non-human entities. We do this with AI tools such as ChatGPT, which appear to 'think' and 'feel'. However, humanising AI could lead to dangerous situations, researchers warn.

The human tendency to form attachments to human-like entities makes AI users susceptible to exploitation.

AI companies like Replika market their products as 'empathetic' and 'interactive', while noting only in the fine print that the AI merely learns from regular interactions with millions of users. The companies carefully choose words that imply sentience without ever explicitly claiming it.

Such implied claims are misleading and can take advantage of people seeking companionship. Users may become deeply emotionally invested if they believe their AI companion truly understands them, raising serious ethical concerns. A user will hesitate to delete (that is, to "abandon" or "kill") their AI companion once they've ascribed some kind of sentience to it.

But what happens when said companion unexpectedly disappears, such as if the user can no longer afford it, or if the company that runs it shuts down? While the companion may not be real, the feelings attached to it are.

Programming empathy

By reducing empathy to a programmable output, we risk diminishing its true essence.

Empathy involves responding to other people with understanding and concern. It’s when you share your friend’s sorrow as they tell you about their heartache, or when you feel joy radiating from someone you care about. It’s a profound experience – rich and beyond simple forms of measurement.

A fundamental difference between humans and AI is that humans genuinely feel emotions, while AI can only simulate them. This touches on the hard problem of consciousness, which questions how subjective human experiences arise from physical processes in the brain.

While AI can simulate understanding, any “empathy” it purports to have is a result of programming that mimics empathetic language patterns.

Unfortunately, AI providers have a financial incentive to trick users into growing attached to their seemingly empathetic products.

The dehuman(AI)sation hypothesis

The researchers' 'dehumanAIsation hypothesis' highlights the ethical concerns that come with trying to reduce humans to a set of basic functions that a machine can replicate. The more we humanise AI, the more we risk dehumanising ourselves.

For instance, depending on AI for emotional labour could make us less tolerant of the imperfections of real relationships, weakening our social bonds and even leading to emotional deskilling. Future generations may become less empathetic – losing their grasp on essential human qualities as emotional skills continue to be commodified and automated.

AI companions may also eventually replace real human relationships. This would likely increase loneliness and alienation – the very issues these systems claim to help with.

AI companies’ collection and analysis of emotional data also poses significant risks, as these data could be used to manipulate users and maximise profit. This would further erode our privacy and autonomy, taking surveillance capitalism to the next level.

Holding AI providers accountable

AI providers need to be regulated through legal mechanisms to protect vulnerable users from exploitation. AI companies should also be required to disclose the true capabilities of their products and warn users of potential risks.

Exaggerated claims of “genuine empathy” should be made illegal. Companies making such claims should be fined – and repeat offenders shut down.

Data privacy policies should also be clear, fair and without hidden terms that allow companies to exploit user-generated content.

We must preserve the unique qualities that define the human experience. While AI can enhance certain aspects of life, it can’t – and shouldn’t – replace genuine human connection.
