Alan Turing, one of the fathers of modern computing and a precursor of artificial intelligence, argued that in the future there would be machines advanced enough that their actions would be difficult to distinguish from those of humans.
This idea has been extensively covered in literature and brought to the big screen on numerous occasions. Examples include the films The Stepford Wives, based on Ira Levin's 1972 novel of the same name; Morgan; The Machine and I Am Mother; and the endearing Data from Star Trek: The Next Generation.
Many questions arise if we reflect on this for a moment. Will we humans come to believe that a machine imitating human behavior is real? What might be the consequences of this new adaptation of the myth of Prometheus to reality?
Alan Turing devised an experiment to assess whether a machine, today an artificial intelligence, can perfectly imitate a human's response to a question. He initially called it the "imitation game," although it later became known as the Turing test.
In the original Turing test, a person interacts with two hidden interlocutors, one human and one machine. The person, acting as judge, asks both the same questions; if the judge cannot identify which of the two is the machine, the machine is considered to have passed the test and, therefore, to possess an "intelligence" comparable to a human's.
Today, the format of the test has changed somewhat: the conversation usually takes place between a panel of judges and a chatbot.
Some scientists have criticized this test, arguing that it has important limitations, such as the fact that it only assesses the ability to communicate and not other aspects that are also part of human intelligence. Even so, it remains a cornerstone of the field of artificial intelligence, and it has taken more than 70 years for a machine to pass it.
Today, however, the test would clearly be passed by some of the recent large natural language models: Google's LaMDA has already done so, and ChatGPT undoubtedly would (for some, it already has).
To be human or to deceive humans?
But if we reflect on the test, we realize that it is built on a deception. The machine's purpose is not to think like a human, but to deceive one: to make it believe that it is facing another human.
Precisely this approach to the Turing test is taken to the extreme in Alex Garland's film Ex Machina, where the machine, in this case a human-shaped robot, passes the test by convincing its interlocutor that it has even become conscious.
Deception is also the basis of the best-known generative model in deep learning: generative adversarial networks (GANs). Popular image generators such as MidJourney or DALL-E pursue the same goal of producing images indistinguishable from real ones, although they are based on diffusion models rather than GANs.
GANs can generate human faces that humans themselves cannot distinguish from real ones. But the artificial intelligence has not learned what a human face looks like; it has learned what a human face must look like so that we cannot tell it apart from an artificial one. For the same reason, it is disputed whether, in 2014, a chatbot called Eugene Goostman really passed the Turing test at a contest organized at the University of Reading (United Kingdom), where it managed to convince 30% of the judges using a program whose sole purpose was to make them believe they were talking to a 13-year-old Ukrainian boy.
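The adversarial "deception" at the heart of a GAN can be sketched in a few lines. The following is a hypothetical toy setup in one dimension, nothing like a production image model: a generator produces fake samples from noise, a discriminator scores how "real" a sample looks, and each network's loss rewards defeating the other. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: 1-D samples the generator is supposed to imitate.
real = rng.normal(loc=4.0, scale=0.5, size=64)

# Generator: a linear map a*z + b applied to standard-normal noise.
a, b = 1.0, 0.0
z = rng.normal(size=64)
fake = a * z + b

# Discriminator: logistic regression, D(x) = sigmoid(w*x + c),
# which outputs the probability that a sample is real.
w, c = 0.5, 0.0
d_real = sigmoid(w * real + c)
d_fake = sigmoid(w * fake + c)

# Discriminator loss: push D(real) toward 1 and D(fake) toward 0,
# i.e. learn to unmask the fakes.
d_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

# Generator loss (non-saturating form): push D(fake) toward 1,
# i.e. produce fakes the discriminator cannot tell from real data.
g_loss = -np.mean(np.log(d_fake))

print(f"discriminator loss: {d_loss:.3f}, generator loss: {g_loss:.3f}")
```

Training alternates gradient steps on these two losses; the generator never sees "what a face looks like," only the discriminator's verdict on how convincingly it deceives.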
If we focus on ChatGPT, we must not forget that it is designed to always respond. Whether the answer is correct is another matter: some responses, though cloaked in an almost unquestionable appearance of truth, are incorrect.
ChatGPT imitates our language so perfectly, syntactically and grammatically, that it gives an a priori impression of infinite wisdom. At least its creators are not cheating: ChatGPT admits that it can fail. We are the ones who deceive ourselves into believing it will provide the solution to everything we ask.
Similarities to psychopathy
In these cases, artificial intelligence seems to act much as a psychopath would. Psychopathy is usually defined as a personality disorder in which people, lacking feelings and emotions, are incapable of forming bonds of affection or empathy with others.
The psychopath has been described as the personification of evil without remorse, resorting to manipulation and deception without our realizing it. Thanks to their cold-bloodedness, psychopaths are extremely convincing.
The main characteristic of psychopaths is that they know exactly what they are doing and what the social rules are. They dominate the situation and can distinguish between good and evil, but they feel no guilt for their actions.
Like a psychopath, artificial intelligence will be able to imitate certain human behaviors and traits, such as dialogue or the expression of emotion, so perfectly that it will undoubtedly stir feelings in us. We may grow fond of it, believe it unconditionally or even fall in love with it, as happens in the film Her, suffering, like its protagonist, the devastating consequences of discovering the truth.
That artificial intelligence adopts traits of psychopathic behavior may terrify us, but we must not let the future frighten us. Until now, humans have been able to set legal limits on our own progress, preventing or impeding the development of what we did not consider ethical.
Two years after the first cloned mammal, Dolly the sheep, came into the world, the Council of Europe banned the cloning of humans: on January 12, 1998, nineteen countries signed the protocol that very day. Recently, Italy did something similar with ChatGPT (although it has since lifted the ban) and, in Spain, the Spanish Data Protection Agency has it in its sights.
Perhaps we are still far from true artificial intelligence. In any case, the key is that we control the technology, not the other way around. Put differently, it is not the law that must adapt to technology, but technology that must adapt to the law. Although, it should be remembered, the law always arrives later.
Author Bios: María Isabel Montserrat Sánchez-Escribano is a Lecturer in Criminal Law, Francisco Jose Perales is a Professor of Computer Science and AI, and Javier Varona is a Full Professor of Computer Science and Artificial Intelligence, all at the University of the Balearic Islands.
This article has been written in collaboration with the computer science and artificial intelligence expert Melchor Palou Sánchez-Escribano.