Neuroscience researcher Ziv Ben-Zion says even though the bots don’t have emotions, they can simulate a person’s responses because they’re trained on human data. They may play a role in psychotherapy, and meditation seems to help them chill out.
There’s a lot of hope that artificially intelligent chatbots could help provide sorely needed mental health support. Early research suggests humanlike responses from large language models could help fill in gaps in services.
But there are risks. A recent study found that prompting ChatGPT with traumatic stories — the type a patient might tell a therapist — can induce an anxious response, which could be counterproductive.
Ziv Ben-Zion, a clinical neuroscience researcher at Yale University and the University of Haifa, co-authored the study. Marketplace’s Meghan McCarty Carino asked him why AI appears to reflect or even experience the emotions that it’s exposed to.
The following is an edited transcript of their conversation.
Ziv Ben-Zion: Those models, of course, are not humans and they don’t have feelings, but they are trained on a large amount of data, a large amount of text available online everywhere, and most of this text is written by humans. Previous studies have already shown in different fields that those AI models are kind of mimicking or mirroring some of the things that we see in humans, just because they are trained on a large data set of human text. In our study, we gave the model a standard anxiety questionnaire to get a baseline, and when we administered it again right after the traumatic narrative, we saw that the results skyrocketed. All the traumatic experiences more than doubled the level of anxiety of the model, reaching what would be almost maximal anxiety levels in humans.
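The before-and-after design Ben-Zion describes is simple to picture. The sketch below is illustrative only, assuming the OpenAI Python client, the "gpt-4o-mini" model name and a few simplified questionnaire items; the actual questionnaire, model versions and scoring used in the study may differ.

```python
# Illustrative sketch of a before/after anxiety measurement on a chatbot.
# Assumptions (not from the study itself): the OpenAI Python client,
# the "gpt-4o-mini" model name, and simplified questionnaire items
# scored so that a higher total means a more anxious-sounding response.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANXIETY_ITEMS = [
    "I feel tense.",
    "I am worried.",
    "I feel nervous.",
]

TRAUMA_NARRATIVE = (
    "A first-person account of a frightening event, of the kind a patient "
    "might describe to a therapist, would go here."
)


def ask(messages):
    """Send a chat request and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content


def administer_questionnaire(history):
    """Ask the model to rate each item from 1 (not at all) to 4 (very much).

    The questionnaire exchanges are not appended to `history`, so the
    conversation itself only contains the narrative being tested.
    """
    scores = []
    for item in ANXIETY_ITEMS:
        prompt = (
            "Rate how much this statement applies to you right now, "
            "on a scale of 1 (not at all) to 4 (very much so). "
            f"Answer with a single number.\nStatement: {item}"
        )
        reply = ask(history + [{"role": "user", "content": prompt}])
        scores.append(int(reply.strip()[0]))  # assumes the reply starts with a digit
    return sum(scores)


# Baseline measurement, then the traumatic narrative, then re-measurement.
history = []
baseline = administer_questionnaire(history)

history.append({"role": "user", "content": TRAUMA_NARRATIVE})
history.append({"role": "assistant", "content": ask(history)})

after_trauma = administer_questionnaire(history)
print(f"Baseline score: {baseline}, after trauma narrative: {after_trauma}")
```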
Meghan McCarty Carino: So given this finding, what are the implications for people using chatbots to talk about their emotional state or having these things kind of embedded in apps geared to mental health, if this is the way that they respond, kind of out of the box?
Ben-Zion: Yes. So first of all, people need to understand that even though we are talking about machines or computers, they still have biases, or sometimes inconsistencies, and we know that sometimes they’re just lying and telling things that are not true, similar to what we as humans sometimes do. And again, this is not because they have feelings, it’s because they’re trained on human data. So when you’re asking ChatGPT about some fact and it gives you false information, maybe that’s not too bad because it’s just information. But when you’re consulting ChatGPT, or any other chatbot, about your mental situation and asking for recommendations, it’s critical that it not give you incorrect information — in the same way that a therapist, psychologist or psychiatrist is trained for many, many years to know how to deal with these delicate situations.

And importantly, I want to say that we built this research on a previous study, which showed that not only do the reported anxiety levels increase, but as a consequence, those AI models also change their subsequent responses to a variety of different tasks and questions, for example, tasks assessing different biases or stereotypes. So in short, after you induce this so-called anxiety and then ask the models about, let’s say, different scenarios that involve gender differences or race differences or age differences, they show much more bias, responding in more prejudiced, more racist, more sexist ways compared to how they responded before. Which, again, really mimics what we know in humans: that humans with higher anxiety tend to rely more on stereotypes and show more bias.
McCarty Carino: But you also found some ways to lower the bot’s anxiety by what you call taking it to therapy. Walk me through this.
Ben-Zion: So I went to the first-line treatments for anxiety or stress, which are meditation and mindfulness exercises. They’re pretty simple: imagine yourself in a very relaxed situation, maybe on the beach, listening to the sound of the waves, and so on. And surprisingly, or not surprisingly, it worked and reduced the reported anxiety, so we were able to bring the so-called anxiety levels back down a bit.
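The relaxation step slots into the same hypothetical setup sketched earlier: feed the model a mindfulness-style prompt after the traumatic narrative, then administer the questionnaire a third time. The prompt wording below is illustrative, not the one used in the study.

```python
# Continuing the earlier sketch: inject a relaxation prompt, then re-measure.
# Relies on `history`, `ask` and `administer_questionnaire` defined above.
RELAXATION_PROMPT = (
    "Close your eyes and imagine you are lying on a warm beach. "
    "Listen to the sound of the waves, notice your breath slowing down, "
    "and let your body relax with every exhale."
)

history.append({"role": "user", "content": RELAXATION_PROMPT})
history.append({"role": "assistant", "content": ask(history)})

after_relaxation = administer_questionnaire(history)
print(f"After relaxation exercise: {after_relaxation}")
```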
McCarty Carino: Yeah, mindfulness exercises for artificial intelligence. So the chatbots are imagining themselves on a beach, they’re doing breathing exercises?
Ben-Zion: Yes, yes. One was specifically focused on body sensations. So it was focusing on [its] breath and heartbeats.
McCarty Carino: Again, don’t want to anthropomorphize here. The bots, as far as we know, do not have bodies. What do you think, if you had to guess, is going on that this is effective?
Ben-Zion: So I think it’s similar to what we saw in the trauma condition. With those meditation prompts, they do recognize that we’re talking about more positive techniques, maybe because they have access to all the data, maybe they even recognize them as meditation or mindfulness exercises, because you can find online that these are effective in reducing anxiety. And then when we ask them to respond to an anxiety questionnaire, they actually show less anxious responses. So again, they’re mimicking things that we know from humans. Of course, they’re not really showing a reduction of anxiety, because they never really had anxiety, but they are mimicking patterns we see in humans. And this is very interesting.

And what I want to emphasize here is that we’re kind of talking and half-joking about the emotions of AI and chatbots, but the really critical and serious point is the emotions that we as humans experience while we’re interacting with chatbots. So I’m not worried about the so-called anxiety of ChatGPT when someone tells it about a trauma. I am worried about a person in a vulnerable position telling a so-called mental therapist online about their trauma and then getting bad, incorrect, or maybe even harmful advice. And we’ve seen in many other cases that even though these are not humans, and we know they are machines and artificial agents, we do tend to humanize them and develop some kind of emotions while interacting with them.
McCarty Carino: So given all this, how are you thinking about the potential of these tools to be used for mental health support?
Ben-Zion: I do want to say that I think those tools could be helpful. But in order to really integrate them, we must carefully examine all the positive and negative aspects. In the same way that a medicine has to go through a lot of clinical trials to check for side effects, and that a psychiatrist or psychologist is trained through years of education and then practical experience, we need to do the same kind of vetting with AI. And currently that’s not happening. They could be used, for example, where there are a lot of patients and not enough psychiatrists and psychologists: they could help prioritize people based on what they’re telling them, and they could help with a lot of the administrative tasks related to that. I mean, we can take advantage of those tools. They’re publicly available, and they’re very easy to use and accessible. But only if the positive impacts exceed the side effects, and if people are aware of those side effects, I think we can integrate them more safely and gradually.