ChatGPT’s false empathy


Anthropomorphism is the tendency to attribute human characteristics to non-human entities, such as machines or animals. In the case of chatbots such as ChatGPT, Gemini or Copilot, this phenomenon occurs when they imitate human language to communicate with us. Not only do they use understandable and familiar words and phrases, but they also adopt communication patterns specific to humans. In this way, they manage to maintain contextual and coherent conversations and can even show humor and empathy.

This design of machine language seeks to make interaction with machines natural, fluid and accessible, which facilitates their use in applications ranging from customer service to education and entertainment.

Until now, language was considered an exclusively human capacity. However, advances in generative artificial intelligence are leading us to reconsider this idea.

One of the most distinctive features of human language is subjectivity, which manifests itself in the choice of emotionally charged words and in the expression of personal opinions. It also appears in the evaluation of events and in the use of contextual and cultural elements.

The chatbot ELIZA, created in 1966, already simulated human behavior, taking on the role of a psychotherapist in its conversations with the user. Wikimedia Commons, CC BY

The impact of the first person

When a machine simulates being human in communication, important ethical and psychological implications arise. These interactions can influence how we perceive and relate to technology.

A typical feature of human subjectivity in language is the use of personal pronouns. The first person (“I” and “we”) allows speakers to express their own thoughts and experiences. The second person (“you”) involves the interlocutor, building a relationship between the participants in the conversation. In human interaction, this is called intersubjectivity.

Let’s consider an example.

  • User: “How do I prioritize which items to keep, donate, or throw away during the organizing process?”
  • Chatbot: “Great question! Organizing your belongings can be overwhelming, but with a clear strategy, you can make decisions more easily. Here are some ideas for prioritizing what to keep, donate, or throw away.”

The chatbot uses the first person implicitly. Although the “I” does not appear, the sender adopts the position of an advisor or guide. For example, in the sentence “Here are some ideas,” the underlying verb in the Spanish original (“I present”) is in the first person.

This suggests that the chatbot takes on the role of the person providing help. The user thus perceives a personal approach, even though the “I” is not explicitly used. In addition, this implicit “I present” reinforces the image of the sender as someone who offers something valuable.

Using the second person

The Spanish “tú” (and forms like “te” and “tus,” rendered in English as “you” and “your”) is used to address the user directly. This is seen in several parts of the text, such as in the sentences “Organizing your belongings can be overwhelming” and “with a clear strategy, you can make decisions more easily.”

By speaking to the reader in a personal way, the chatbot seeks to make them feel like an active part of the advice. This type of language is common in texts that seek to involve the other person directly.

Other elements in the interaction, such as “Great question!”, not only positively evaluate the user’s query, but also encourage their participation. Similarly, expressions such as “it can be overwhelming” suggest a shared experience, creating an illusion of empathy by recognizing the user’s possible emotions.

Effects of artificial empathy

The chatbot’s use of the first person simulates consciousness and seeks to create an illusion of empathy. By adopting a helper position and using the second person, it engages the user and reinforces the perception of closeness. This combination creates a conversation that feels more human and practical, suitable for advice, even if the empathy comes from an algorithm, not from real understanding.

Getting used to interacting with non-conscious entities that simulate identity and personality can have long-term effects. These interactions can influence aspects of our personal, social and cultural lives.

As these technologies improve, distinguishing between a conversation with a person and one with an artificial intelligence could become difficult.

This blurring of the boundaries between human and artificial affects how we understand authenticity, empathy, and conscious presence in communication. We might even end up treating artificial intelligences as if they were conscious beings, generating confusion about their real capabilities.

Uncomfortable talking to humans

Interactions with machines may also change our expectations about human relationships. As we become accustomed to quick, perfect, conflict-free interactions, we may become more frustrated in our relationships with people.

Human relationships, by contrast, are marked by emotions, misunderstandings and complexity. In the long term, this could diminish our patience and our ability to manage conflict and to accept the natural imperfections of interpersonal interactions.

Furthermore, prolonged exposure to entities that simulate humanity raises ethical and philosophical dilemmas. By attributing human qualities to them, such as the ability to feel or have intentions, we might begin to question the value of conscious life in the face of perfect simulation. This could open up debates about robot rights and the value of human consciousness.

Interacting with non-conscious entities that mimic human identity can alter our perception of communication, relationships and identity. Although these technologies offer advantages in terms of efficiency, it is essential to be aware of their limits and the possible impacts on the way we relate, both with machines and with each other.

Author Bio: Christian Augusto Gonzalez Arias is a Researcher at the University of Santiago de Compostela
