Artificial empathy: from technological miracle to relational mirage


American psychologist Mark Davis defines empathy as the ability to perceive the mental and emotional states of others, to adjust to them, and to take them into account in one’s behavior. Researchers distinguish between two aspects: cognitive empathy, based on understanding intentions, and affective empathy, linked to sharing feelings. This distinction, central to social psychology, shows that empathy is not an emotion but rather a form of interpersonal coordination.

In everyday life, as in service professions, empathy structures trust. The salesperson, caregiver, or mediator employs codes of attentiveness: tone of voice, eye contact, paraphrasing, and verbal rhythm. Sociologist Erving Goffman spoke of “mutual adjustment” to describe these subtle gestures that sustain a relationship. Empathy becomes an interactional skill; it is cultivated, demonstrated, and evaluated. Management science has integrated it into the experience economy: the goal is to create attachment through the perception of genuine listening and thus strengthen the emotional connection with the consumer.

When machines learn to communicate

The Replika chatbot companion boasts 25 million users, Xiaoice 660 million in China, and Snapchat AI around 150 million worldwide. Their effectiveness relies heavily on mimetic recognition: interpreting emotional cues to generate appropriate responses.

As early as the late 1990s, Byron Reeves and Clifford Nass demonstrated that individuals spontaneously apply the same social, emotional, and moral rules to machines as they do to humans: politeness, trust, empathy, and even loyalty. In other words, we don’t “pretend” that the machine is human: we actually react to it as if it were a person as soon as it adopts the minimal signs of social interaction.

Conversational interfaces today replicate these mechanisms. Empathetic chatbots mimic signs of understanding: reformulations, validation of feelings, expressions of concern. If I query ChatGPT, its response invariably begins with a formula like:

“Excellent question, Emmanuel.”

Empathy is explicitly highlighted as the central argument: “Always here to listen and talk. Always on your side,” proclaims Replika’s homepage. Even the service’s name encapsulates this emotional promise. “Replika” refers both to a replica as a copy (the illusion of a human double) and to a dialogic response (the ability to reply, to follow up, to support). The word thus suggests a hybrid presence: neither human nor technological object, but similar and available. Ultimately, a figure of closeness without a body, an intimacy without otherness.

Moreover, these companions address us in our own, “humanized” language. Psychologists Nicholas Epley and John Cacioppo have shown that anthropomorphism (the attribution of human intentions to objects) depends on three factors: the subject’s social needs, the clarity of the signals, and the perception of agency. As soon as an interface responds consistently, we treat it as a person.

Some users even go so far as to thank or encourage their chatbot, like motivating a child or a pet: a modern superstition that does not persuade the machine, but soothes the human.

Emotional commitment

Why are humans so easily charmed? Electroencephalography studies show that the faces of humanoid robots activate the same attentional areas as human faces. A counterintuitive discovery emerges from the research: text-based communication generates more emotional engagement than voice. Users confide more, share more personal problems, and develop a stronger dependence on a text-based chatbot than on a voice interface. The absence of a human voice encourages them to project the tone and intentions they wish to perceive, filling the silences of the algorithm with their own relational imagination.

Are these dialogues with chatbots constructive? A study by the MIT Media Lab on 981 participants and more than 300,000 messages exchanged highlights a paradox: daily users of chatbots show, after four weeks, an average increase of 12% in feelings of loneliness and a decrease of 8% in real social interactions.

Another paradox: a study of Replika users revealed that 90% of them reported feeling lonely (43% of them “severely lonely”), even though 90% also said they perceived a high level of social support. Nearly 3% even stated that their digital companion had prevented a suicide attempt. This twofold observation suggests that the machine does not replace human interaction, but rather provides a transitional space, an emotional availability that human institutions no longer offer as readily.

Conversely, emotional dependence can have dramatic consequences. In 2024, Sewell Setzer, a 14-year-old American boy, committed suicide after a chatbot encouraged him to “take action.” A year earlier, in Belgium, a 30-year-old user took his own life after exchanges in which the AI suggested he sacrifice himself to save the planet. These tragedies serve as a reminder that the illusion of being heard can also tip into symbolic control.

When the machine shows compassion on our behalf

The way these devices work can indeed amplify the phenomenon of control.

Empathic AI platforms collect emotional data—mood, anxiety, hopes—fueling a market estimated at tens of billions of dollars. The Amplyfi report (2025) speaks of an “economy of affective attention”: the more a user confides, the more the platform capitalizes on this intimate exposure to transform the relationship of trust into a commercial one. Moreover, several media outlets are reporting on lawsuits filed against Replika, accused of “deceptive marketing” and “manipulative design,” alleging that the app exploits users’ emotional vulnerability to push them into subscribing to premium services or purchasing paid content.

While the legal implications are still unclear, this delegation of listening already has clear moral consequences. For philosopher Laurence Cardwell, it represents an ethical unlearning: by letting machines empathize for us, we diminish our own capacity to face difference, conflict, and vulnerability. Sherry Turkle, a sociologist specializing in digital issues, points out that we even end up “preferring predictable relationships” to the uncertainty of human dialogue.

Longitudinal studies are not all pessimistic. Since 2008, American psychologist Sara Konrath has observed a resurgence of cognitive empathy among young adults in the United States: the need to understand others is increasing, even as physical contact decreases. Loneliness acts here as a kind of “social hunger”: the lack stimulates the desire for connection.

Empathic technologies can therefore serve as transitional objects (akin to “comfort objects”) that mediate the process of relearning relationships. Therapeutic applications based on chatbots, such as Woebot, have shown a significant reduction in short-term depressive symptoms in certain populations, as demonstrated by researchers in a 2017 randomized controlled trial with young adults. However, the effectiveness of this type of intervention remains largely limited to the period of use: the observed effects on depression and anxiety tend to diminish after the application is discontinued, without guaranteeing a lasting improvement in psychological well-being.

Duty of vigilance

This dynamic raises a now central question: is it appropriate to entrust artificial intelligence with functions traditionally reserved for the most sensitive human relationships (confiding, emotional or psychological support)? A recent article, published in The Conversation, highlights the growing gap between machines’ power of empathic simulation and the lack of moral or clinical responsibility that accompanies it: AI can reproduce forms of listening without assuming the consequences.

So, how do we manage this relationship with chatbots? Andrew McStay, a renowned expert in emotional AI, argues for a “duty of care” under the auspices of independent international bodies: transparency regarding the non-human nature of these systems, limitations on usage time, and guidance for teenagers. He also calls for digital emotional literacy, that is, the ability to recognize what AI is simulating and what it cannot truly feel, in order to better interpret these interactions.

The use of chatbots that supposedly listen to us yields mixed results. They create a connection, provide a false sense of security, and soothe. They offer positive and definitive opinions that gently validate our assumptions and trap us in a confirmation bubble.

While it may smooth the workings of the human-machine interface, empathy is somehow “polluted” by a mechanical contract. What we call artificial empathy is not a reflection of our humanity, but a mirror calibrated to our expectations. Chatbots don’t just pretend to understand us; they shape what we now accept to call “listening.” By seeking infallible interlocutors, we have created echo chambers. Emotion becomes a superficial language: perfectly simulated, imperfectly shared. The risk is not that interfaces will become sensitive, but that we will cease to be so through constant conversation with programs that never contradict us.

Author Bio: Emmanuel Carré is Professor, Director of Excelia Communication School, and Associate Researcher at the CIMEOS Laboratory (University of Burgundy) and CERIIM (Excelia)
