Since the launch of ChatGPT, the conversational agent developed by OpenAI, in November 2022, generative artificial intelligence seems to have invaded our lives. Communicating with these tools is so easy and natural that some users even turn them into true confidants. This is not without risks for mental health.
Large language models, in other words “generative” artificial intelligences such as ChatGPT, Claude and Perplexity, meet a wide range of needs, whether searching for information, assisting with thinking or completing various tasks, which explains the current explosion in their use at school, at university, at work and in leisure activities.
But another use for these conversational AIs is spreading at an impressive speed, particularly among young people: the equivalent of chats between friends, to pass the time, ask questions or exchange ideas and, above all, to confide in them as one would in a loved one. What could be the risks associated with these new uses?
A breeding ground for rapid adoption
Written conversations with artificial intelligence seem to have become commonplace very quickly. It should be noted that while some AIs allow voice exchanges, they appear to be used less than text-based exchanges.
It must be said that we have long been accustomed to communicating in writing without seeing our interlocutor, whether by SMS, email, chat or any other type of messaging. Since generative AIs reproduce human verbal expression remarkably well, the illusion of speaking to a real person is almost immediate, without the need for an avatar or any image simulating the other party.
Immediately available at any time of day or night, always conversing in a friendly, even benevolent tone, trained to simulate empathy and endowed, if not with “intelligence”, then at least with seemingly infinite knowledge, AIs are in some ways ideal dialogue partners.
It is therefore not surprising that some have gotten caught up in the relationship and maintain ongoing, lasting exchanges with these substitute confidants or “best friends”. This is all the more true since these conversations are “personalized”: the AIs memorize previous exchanges and take them into account in their future responses.
Some platforms, such as Character.ai or Replika, also offer the ability to customize the virtual interlocutor as desired (name, appearance, emotional profile, skills, etc.), initially to simulate a digital role-playing game. This feature can only reinforce the feeling of proximity, or even emotional attachment, to the character thus created.
Just over a decade ago, director Spike Jonze made the film Her, a story about a man recovering from a bad breakup and the artificial intelligence that powers his computer’s operating system. Now, reality may have already caught up with fiction for some generative AI users, who report having a “digital romance” with chatbots.
Such practices may not be without risk for the mental balance of certain people, particularly the youngest and most vulnerable.
Effects on mental health that remain to be measured
Today, in all countries (and probably far too late…), we are seeing the damage that the explosion in screen use has caused to the mental health of young people, particularly due to social networks.
Among other factors, one hypothesis (still controversial, but very credible) is that the disembodied nature of virtual exchanges may disrupt the emotional development of adolescents and encourage the onset of anxiety and depressive disorders.
Until now, however, exchanges via social networks or digital messaging have still taken place primarily between human beings, even if we never meet some of our interlocutors in real life. What could be the consequences for the mental balance (emotional, cognitive and relational) of intensive users of these new modes of exchange with AIs that have no physical existence?
It is difficult to anticipate them all, but it is easy to imagine that the effects could be particularly problematic among the most vulnerable people. These are precisely the people most at risk of excessive use of these systems, as is well established with traditional social networks.
Late last year, the mother of a 14-year-old boy who took his own life sued the executives of the platform Character.ai, whom she holds responsible for her son’s death, saying his actions were encouraged by the AI he was interacting with. In response to this tragedy, the platform’s executives announced new safety measures, including precautions around suicidal comments and advice to seek medical help if necessary.
For people in distress, intensive and poorly controlled use of conversational AI could also lead to gradual withdrawal into oneself, as the relationship with the chatbot becomes exclusive, and to a harmful transformation of the relationship with others, the world and oneself.
We currently lack scientific observations to support this risk, but a recent study, involving more than 900 participants, shows a link between intensive conversations with a (voice) chatbot and feelings of loneliness, increased emotional dependence and reduced real social interactions.
While these results are preliminary, it appears essential and urgent to explore the potential effects of these new forms of interaction and, where necessary, to do everything possible to limit their complications.
Another fear: that talking to a “ghost” and being caught up in this illusion could also be a trigger for pseudo-psychotic states (loss of contact with reality or depersonalization, as can be found in schizophrenia), or even truly delusional states, in people predisposed to these disorders.
Beyond these risks, which are intrinsic to the use of these technologies by certain people, there is also the question of possible manipulation of content, and therefore of users, by malicious actors (even if this is not what we are seeing today), as well as that of the security of personal and intimate data and their potential misuse.
AI and therapeutic interventions, another issue
Finally, let us emphasize that the points raised here do not concern the possible use of AI for truly therapeutic purposes, within the framework of automated psychotherapy programs scientifically developed by professionals and strictly supervised.
In France, programs of this type are not yet widely used or optimized. Beyond the fact that a viable economic model for such tools is difficult to establish, their validation is complex. However, we can hope that, under numerous conditions guaranteeing their quality and safety of use, they will one day complement the resources available to therapists to help people in distress, or could be used as prevention tools.
The problem is that, at present, some conversational AIs already present themselves as therapeutic chatbots, without anyone really knowing how they were built: what psychotherapy models do they draw on? How are they monitored and evaluated? If they prove to have flaws in their design, their use could pose a major risk to vulnerable people who are unaware of the limits and possible abuses of such systems.
The greatest caution and vigilance are therefore required in the face of the ultra-rapid development of these new digital uses, which could constitute a real time bomb for mental health…
Author Bio: Antoine Pelissolo is Professor of Psychiatry, Inserm at the University of Paris-Est Créteil Val de Marne (UPEC)