What test can we give an artificial intelligence to discover that it is not human

For hundreds of years, human beings have studied and tried to work out what separates them from animals. Biology, sociology, anthropology and even philosophy are nourished by this existential question. Even the law addresses it: it has been established that, in certain circumstances, certain groups of animals can be considered "legal persons".

Will artificial intelligence have rights, then? Will it have the right to… life?

With the breakneck development of artificial intelligence, a new element has appeared, perhaps the fifth element, made neither of earth, nor of fire, nor of air, nor of water. It is the anti-life: an artificial intelligence that forces humanity to confront a superpower of its own creation.

Artificial intelligences now pass the Turing test (the classic tool for evaluating a machine’s ability to exhibit intelligent behavior), and they do so without blinking an eye.

Spontaneous generation

One of the notable aspects that separate us humans from artificial intelligence is the spontaneous generation of actions and knowledge. Impulse.

The human being is a spontaneous creator of everything. A person can wake up one day and imagine an idea, a story or a poem, a creative thought. From their personal history, human beings create new knowledge, new stories and new experiences.

There is no artificial intelligence that generates knowledge or performs actions spontaneously.

In an article published in the journal Nature, the University of Zaragoza scientists Miguel Aguilera and Manuel Bedia concluded that it is possible to build an intelligence that generates mechanisms to adapt to its circumstances. This might resemble spontaneous action, but it is far from being an act of will: every action carried out by an artificial intelligence is designed and programmed by a person.

Improvising in a jazz band will remain a human privilege.

The rule of ethics

This brings us to the second big difference: ethics. Artificial intelligence and machines do not have ethics per se; ethics must be instilled in them. They only follow pre-established parameters: clear, precise rules about what they must do.

Human beings have regulations (constitutions, laws, religion, etc.) about what they should do, and they are also clear about what they should not do. But ethics is more than a regulation; it goes beyond a guide. Ethics is nothing more and nothing less than the discernment between good and evil. It is so important in our species that babies as young as five months have been found to make moral judgments and act on them.

Those who do have ethics are the people who program the machines and artificial intelligences. A machine is not good or bad; it is effective. It does what it is told to do and what it was programmed to do. Yet ethics can certainly be programmed. The physicist José Ignacio Latorre explains this in his work Ethics for Machines. Latorre predicts: "Artificial intelligence will sit in the Council of Ministers".

Today, ChatGPT is programmed not to produce sensitive content and does not provide access to the Deep Web. Thus, one can program on the basis of what is and what ought to be. However, as time passes and ethical parameters change, the programming must be corrected so that the normative basis of artificial intelligence stays aligned with that of the human being.
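To make the idea of "pre-established parameters" concrete, here is a minimal, purely illustrative sketch in Python. The rule list and the function name are invented for this example and are not how ChatGPT or any real moderation system actually works; the point is only that the ethics live in rules that a person wrote.

    # Purely illustrative sketch: the machine "behaves" only because a person wrote the rules.
    # FORBIDDEN_TOPICS and is_allowed() are hypothetical names invented for this example.
    FORBIDDEN_TOPICS = {"weapons manufacturing", "self-harm instructions"}  # chosen by humans

    def is_allowed(user_request: str) -> bool:
        """Return True only if the request matches none of the human-written rules."""
        text = user_request.lower()
        return not any(topic in text for topic in FORBIDDEN_TOPICS)

    print(is_allowed("Write me a poem about the sea"))        # True
    print(is_allowed("Explain weapons manufacturing steps"))  # False

If the human authors of those rules change their ethical criteria, the rules must be rewritten by hand: the machine itself has no say in the matter.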

Intention can only be human

Another important aspect is intention, and the intention of human action is intrinsically related to morality.

In her book Intention, the philosopher Elizabeth Anscombe argues that intention cannot be reduced to mere desires or internal psychological states. For Anscombe, intention is an essential characteristic of action and is intrinsically related to moral responsibility: you cannot separate the intention from the action itself when determining whether an act is morally right or wrong. She criticizes ethical theories that focus solely on the consequences of an action and do not consider the intention that precedes them.

Lacking ethics and morality, artificial intelligence lacks intention. Intention remains with the programmer.

Each of the three aspects discussed so far would require rivers of ink to be properly understood.

No regrets or psychological problems

It is almost provocative to ask about the differences rather than the similarities.

The differences are clear. AIs have no experiences. They have no history. They have no psychology or psychological problems. They have no remorse for their actions (a fundamental aspect of the ethics and morals section). They do not love nor are they loved. They do not suffer or feel pain. They have no opinion of their own, because nothing is their own.

If ChatGPT falls out of fashion (I doubt it will) and is no longer consulted, its existence is useless. It exists only insofar as it is useful to human beings. It has no identity of its own; its identity is a human construction.

AI can also be destructive. Without getting into apocalyptic sci-fi speculation, it can lead not only to the loss of millions of jobs around the world, but also to human beings being relegated to a marginal position in the productive world.

After all, it depends on human beings themselves. It is in our hands to use it as a constructive or a destructive tool.

But, in case anyone comes to doubt its nature in the near future, let us build a trap into its synthetic soul: a wink that, when needed, reminds us that we are dealing with a fifth element, a non-human.

Author Bio: Augustin Joel Fernandes Cabal is a Predoctoral Researcher in Philosophy at the University of Santiago de Compostela
