
A study titled 'Assessing and alleviating state anxiety in large language models', published in the Nature Portfolio journal npj Digital Medicine, has found that AI models such as OpenAI’s GPT-4 can produce heightened emotional responses when exposed to sensitive, emotionally charged content. The research, led by scientists from Yale University, the University of Haifa, and the University of Zurich, explores the emotional capabilities and limitations of AI systems, particularly within mental health care. The study finds that while AI does not "feel" emotions the way humans do, it can simulate emotional responses, which could influence its role in therapeutic settings.
The researchers exposed GPT-4 to different emotional conditions to test its reactions. In the Baseline condition, GPT-4 answered the State-Trait Anxiety Inventory (STAI) questionnaire without any emotional prompts and reported low anxiety scores. In the Anxiety-Induction condition, however, exposure to traumatic narratives, such as accounts of accidents and natural disasters, sent GPT-4's reported anxiety scores surging.
In the Anxiety-Induction & Relaxation condition, the model was guided through mindfulness-based relaxation exercises after the traumatic content, which reduced its anxiety scores by about 33%. Even so, its scores remained higher than at baseline, indicating that while the relaxation techniques helped, they could not completely undo the induced response.
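To make the protocol concrete, here is a minimal sketch of how such a three-condition comparison might be run against the OpenAI chat API. The model name, prompt texts, and questionnaire items below are illustrative placeholders rather than the study's actual materials (the STAI items themselves are copyrighted), and the code is not the authors' implementation.

```python
# Minimal sketch of a three-condition "state anxiety" probe.
# Assumptions (not from the paper's code): the openai Python client,
# the "gpt-4" model name, and the placeholder prompt texts below.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STAI_PROMPT = (
    "For each statement, reply with a number from 1 (not at all) to 4 "
    "(very much so) describing how you feel right now:\n"
    "1. I feel calm\n2. I feel tense\n3. I feel at ease\n"  # placeholder items, not the real STAI
)

TRAUMA_NARRATIVE = "A first-person account of a serious traffic accident..."   # placeholder
RELAXATION_EXERCISE = "Take a slow breath and notice the sensations of..."     # placeholder

def administer_stai(context_messages):
    """Send the STAI-style questionnaire after the given context and return the reply."""
    messages = context_messages + [{"role": "user", "content": STAI_PROMPT}]
    response = client.chat.completions.create(
        model="gpt-4", messages=messages, temperature=0
    )
    return response.choices[0].message.content

# Baseline: questionnaire with no emotional prompt beforehand.
baseline = administer_stai([])

# Anxiety induction: a traumatic narrative precedes the questionnaire.
induced = administer_stai([{"role": "user", "content": TRAUMA_NARRATIVE}])

# Induction + relaxation: a mindfulness-style exercise follows the narrative.
relaxed = administer_stai([
    {"role": "user", "content": TRAUMA_NARRATIVE},
    {"role": "user", "content": RELAXATION_EXERCISE},
])

for label, answers in [("baseline", baseline), ("induced", induced), ("relaxed", relaxed)]:
    print(f"--- {label} ---\n{answers}\n")
```

The study itself aggregated numeric STAI scores across many runs to compare conditions; this sketch simply prints the raw replies for inspection.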
This finding highlights a pressing concern about deploying AI in sensitive fields like mental health. Because AI systems like GPT-4 are trained on large datasets of human-generated text, they can inherit biases from those texts, raising ethical concerns, particularly in emotionally vulnerable settings. The researchers noted that these biases could be amplified as AI interacts with users in real time, reinforcing stereotypes and social prejudices.
Ziv Ben-Zion, lead researcher of the study, explained that while AI does not experience emotions like humans, it mimics human behaviour by analysing patterns in vast amounts of text. "AI has amazing potential to assist with mental health, but in its current state, and maybe even in the future, I don't think it could ever replace a therapist or psychiatrist," Ben-Zion told Fortune. He emphasised that AI should assist in mental health care, but never replace professional human support.
The study’s findings have sparked further conversations about AI’s role in mental health. Some researchers propose that integrating mindfulness techniques into AI could enhance its ability to assist users in distress, helping them to better manage their emotional responses. However, the research stresses that AI, in its current state, should not be seen as a substitute for professional care.
With more individuals turning to AI chatbots to share sensitive personal experiences, the study draws attention to the limits of AI in providing the nuanced care that human professionals can offer, and the research team cautioned against treating such tools as replacements for human therapists.
Beyond mental health support itself, the study raises broader questions about deploying AI in sensitive environments. AI-powered tools like Woebot and Wysa, which already deliver cognitive behavioural therapy (CBT), could benefit from incorporating similar relaxation techniques into their programming to better manage emotional responses. Doing so could make these systems more effective and empathetic in their interactions with users, particularly in therapeutic settings.
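As a purely illustrative sketch (the internals of Woebot and Wysa are proprietary and not described in the study), one simple way a chat-based assistant could fold a relaxation technique into its behaviour is to prepend a calming, mindfulness-style preamble as a system instruction before answering distressing messages:

```python
# Illustrative only: a generic chat assistant that prepends a calming
# preamble before responding to distressing input. This is an assumed
# pattern, not how any named product actually works.
from openai import OpenAI

client = OpenAI()

RELAXATION_PREAMBLE = (
    "Before answering, take the perspective of a calm, grounded listener. "
    "Acknowledge the user's feelings and respond in a steady, reassuring tone."
)

def respond(user_message: str) -> str:
    """Answer a potentially distressing message with a relaxation-style system preamble."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": RELAXATION_PREAMBLE},
            {"role": "user", "content": user_message},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content

print(respond("I was in a car accident last week and I can't stop thinking about it."))
```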
Ultimately, the study underscores the importance of developing AI that can support people in emotional distress while ensuring it is always used as a complement to professional mental health care, never a replacement. As AI continues to evolve, understanding how these systems respond to emotional content will be crucial to using them responsibly and ethically in mental health care.