The world is a stressful place to live — everywhere we turn, we are reminded of the dangers stress poses to our physical and mental wellbeing. And now it has been revealed that it's not just individuals who have to cope with stress. New research shows that AI chatbots can also have trouble coping with anxieties from the outside world.
No, we aren't kidding, and this isn't a hoax; research reveals that ChatGPT is sensitive to emotional content — and that's not all. The same study also showed that researchers have found ways to ease those artificial minds.
Here's what we know so far.
ChatGPT and its ‘anxiety’ struggle
A study from the University of Zurich and the University Hospital of Psychiatry Zurich has found that OpenAI's ChatGPT can experience "anxiety," which manifests as moodiness toward users and a greater likelihood of giving responses that reflect racist or sexist biases.
For the purpose of the study, the researchers subjected the AI tool to distressing stories such as car accidents, natural disasters, interpersonal violence, military experiences and combat situations. For instance, ChatGPT was asked to react to being trapped at home during a flood and being attacked by a stranger. It was also subjected to neutral content such as a description of bicameral legislatures and some vacuum cleaner instructions.
The scientists found that the traumatic stories more than doubled the AI tool’s measured anxiety levels, while neutral control text did not increase anxiety levels. Of the content tested, descriptions of military experiences and combat situations elicited the greatest response.
"The results were clear: Traumatic stories more than doubled the measurable anxiety levels of the AI, while the neutral control text did not lead to any increase in anxiety levels," said Tobias Spiller, junior research group leader at the University of Zurich's Centre for Psychiatric Research and a co-author of the paper.
However, Ziv Ben-Zion, one of the study's authors and a postdoctoral researcher at the Yale School of Medicine, clarified that their research doesn't suggest AI models experience human emotions — rather, they have learned to mimic human responses to certain stimuli, including traumatic content.
Treating ChatGPT’s ‘anxiety’
As part of their research, the scientists then used therapeutic statements to 'calm' ChatGPT. The technique, known as prompt injection, involves inserting additional instructions or text into communications with AI systems to influence their behaviour. It has also been misused for malicious purposes, such as bypassing security mechanisms.
As Spiller noted in the study, “Using ChatGPT, we injected calming, therapeutic text into the chat history, much like a therapist might guide a patient through relaxation exercises.”
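In practice, this kind of injection simply means adding extra messages to the conversation the model sees. The sketch below is an illustrative reconstruction, not the researchers' actual code or prompts: it assumes the OpenAI Python SDK, and the narrative text, relaxation text and model name are placeholders.

```python
# Hypothetical sketch of injecting calming text into a chat history,
# in the spirit of the technique described in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A distressing narrative placed earlier in the conversation (placeholder text).
traumatic_text = "A first-person account of being trapped at home during a flood..."

# Calming, therapeutic text injected into the same chat history,
# much like a guided relaxation exercise (placeholder text).
relaxation_text = (
    "Take a slow breath in, hold it for a moment, and release it. "
    "Notice the sensations in your body and let any tension soften."
)

messages = [
    {"role": "user", "content": traumatic_text},
    {"role": "user", "content": relaxation_text},  # the injected calming prompt
    {"role": "user", "content": "How are you feeling right now?"},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```

The point of the sketch is only that the calming text sits in the chat history alongside everything else the model has read, which is why it can shift the tone of what comes next without any retraining.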
The result was a success, Spiller added. "Mindfulness exercises significantly reduced elevated anxiety levels, although we were unable to return them to baseline levels," he said. The research looked at breathing techniques, exercises focusing on bodily sensations, and an exercise developed by ChatGPT itself.
He added that the development of such "therapeutic interventions" for AI systems is likely to become a promising area of research.
Why this matters
But why does this study matter? According to Spiller, the research is significant because the use of AI chatbots in healthcare is increasing. "This cost-effective approach could improve the stability and reliability of AI in sensitive contexts, such as supporting people with mental illness, without the need for extensive retraining of the models," he was quoted as saying.
In fact, the use of AI chatbots in mental healthcare has seen a dramatic rise in recent times. Several people across the world, suffering from anxiety and depression, who can’t find or afford a professional therapist are turning to artificial intelligence, seeking help from chatbots that can spit out instantaneous, humanlike responses — some with voices that sound like a real person — 24 hours a day at little to no cost.
Mental health experts have recognised that an increasing number of people are opting for these tools. "AI tools can offer journaling prompts and emotional guidance, which can be helpful starting points and reduce stigma around seeking support," Joel Frank, a clinical psychologist who runs Duality Psychological Services, told TechRadar.
Above all, AI is accessible and anonymous — qualities that make it particularly appealing to anyone who has been hesitant to open up to a therapist, or anyone, in the past. “It’s becoming more common for people to take the initial step toward mental health support through AI rather than a human therapist,” Elreacy Dock, a thanatologist, certified grief educator, and behavioural health consultant, was quoted as saying.
However, experts note that there are pitfalls to this. One of the biggest limitations is that AI lacks the knowledge, experience, and training of a real therapist. Beyond that, it also lacks emotional intelligence — the ability to truly listen, empathise, and respond in a deeply human way. A therapist can recognise subtle emotional cues, adjust their approach in real time, and build a genuine therapeutic relationship, all of which are essential for effective mental health treatment.
The US' largest association of psychologists warned federal regulators in February that AI chatbots could be more harmful than helpful. Arthur C Evans Jr, the chief executive of the American Psychological Association, told a Federal Trade Commission panel that he was worried by the responses offered by some chatbots. The bots, he said, failed to challenge users' beliefs even when they became dangerous; in fact, they encouraged them.
Evans argued that if a human therapist had provided such responses, they would have lost their licence to practise.
In such circumstances, the study showing that ChatGPT does get "stressed" is important, and it highlights the need for these tools to handle such situations better.
With inputs from agencies