If intelligence is flexibility, then “artificial intelligence” is not intelligent at all


How ironic that some of my species think that the ultimate in human intelligence is to create technology that reproduces and replaces that intelligence, like a God creating humans in his image. Quite the contrary, I prefer to think that what we learn from artificial intelligence should inspire much more humility in our so-called super-powerful human brains. After all, what ChatGPT and the like have taught us so far is that it is very, very easy to produce something that passes for what many humans prize as the most distinctively Homo of their humanity: symbolic language.

I’m talking about the ability to put scribbles together into sequences that convey some meaning. ChatGPT and similar algorithms are trained from scratch, just like young human brains, which constantly generate sequences of letters (or sounds) through trial and error. These sequences are then checked against something that gives some feedback, be it a smile or just more sequences in response. Just that. Running on a machine or in a brain with enough units to keep a memory of what worked (and enough time to try and make mistakes), the silly algorithm becomes capable of generating sequences increasingly closer to those that occur in the databases used in training – be it speech freely generated by surrounding humans, or all content posted on the internet by a certain date.
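To make that learning loop concrete, here is a minimal sketch of the principle – a character-level bigram counter of my own devising, vastly simpler than anything behind ChatGPT, but enough to show what “learning to continue sequences from a database” amounts to:

```python
import random
from collections import defaultdict

# Toy "database": the sequences the learner is exposed to.
corpus = "the cat sat on the mat. the dog sat on the rug."

# Count how often each character follows each other character.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start="t", length=40):
    """Build a new sequence by repeatedly sampling the next character
    in proportion to how often it followed the current one in the corpus."""
    out = [start]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

# Prints a sequence stitched from patterns in the corpus: fluent-looking,
# but with no notion of whether it means anything.
print(generate())
```

Scaled up from counting pairs of letters to predicting the next token with billions of adjustable weights, the same principle produces the fluent text we now read every day; nothing in the procedure checks whether a generated sequence is true, only whether it resembles the database.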

Everything this algorithm generates, whether it runs in biology or in a machine, is a new combination of the database that fed it. Humans thus learn to speak the language they hear, whatever it may be. And ChatGPT thus learns to construct sentences and sequences of sentences, called “conversations”, only as eloquent or disturbing as the internet content that trained it.

The algorithm doesn’t know what it’s learning to do, and therefore does it in any language. There is “information” on which letter sequences are most likely to occur together with others in each language, but there is no value or utility in these sequences and, therefore, there is no real knowledge in what ChatGPT produces. Hence the so-called “hallucinations”, the sequences invented by ChatGPT by free association.

I protest. Hallucinations would be sequences that do not exist in the database, such as images generated by human brains without any connection with reality. What ChatGPT does, par excellence, is confabulation: creating new associations between sequences that are already part of its repertoire – just like amnesiac humans trying to explain why they put salt in their coffee. “It was, uh, a replication of an experiment done with rats at Princeton University in 2004, that’s it!”

And so ChatGPT reduced Humanity’s supreme language to something that only requires a network capable of unsupervised learning; a database; and a lot of time and energy to run it, again and again, until it is ready to be used, with results that are never guaranteed, in schools and other more or less productive environments. Whether the content generated is factual or even transcends information and becomes knowledge is a question of the values of those who use it – and values, indeed, are individual. But if the point is just to have language, then nothing uniquely human is necessary. Sorry, Chomsky.

A Roomba robot vacuum cleaner: if it were truly intelligent, its algorithm would have already become more flexible to prevent it from getting stuck. Kārlis Dambrāns/Wikimedia Commons, CC BY

Being intelligent is a whole other story

Now, whether ChatGPT is intelligent, or even whether “artificial intelligence” is in fact intelligent, is another matter. A machine, or even an animal, being able to do something is not proof of intelligence, only of behavior, that is, any observable action, by my definition. Generating actions is what the brain does permanently – including the action of remaining still, standing or sitting. An algorithm or device that gives directions, translates text or vacuums the floor of the house without human supervision also has behavior, which can be quite complex, and it may even have the memory to suggest frequent addresses or to map the boundaries of the floor on its own.

But intelligence, in my book, is behavioral flexibility, and intelligent is whoever has flexible behavior, something that goes far beyond adaptation or memory: behavior that expands future possibilities and works in favor of its own continued flexibility, proactively keeping doors open. Memory is the ability to remember and do the same thing next time. Flexibility, and therefore intelligence, is the ability to do things differently when reality, circumstances, or desires and values change – and, above all, to make happen what you want to happen.

In the case of vertebrate animals, behavioral flexibility is a product of the cerebral cortex, a richly connected network of neurons capable of forming and changing associations according to its experiences – and, above all, of modifying the actions it generates depending on the past and on the values already associated with simulations of the future. In principle, the more neurons this network has, the more flexibility it has and, therefore, the more intelligent it is.

And the greatest distinction of the human species, according to my own research, is that we are the animal with the largest number of neurons in the cerebral cortex: 16 billion, no less than double the runners-up, gorillas and orangutans, tied with around 8 billion. Chimpanzees have between 6 and 7 billion cortical neurons; elephants, less than 6 billion; whales, by my count, no more than 3 or 4 billion; and macaws, parrots and monkeys somewhere between 1 and 3 billion – as many, in fact, as I estimate an adult Tyrannosaurus rex possessed.

The cover of my little robot vacuum cleaner has been getting stuck under the same piece of furniture in my room since it came out of the box. Its navigation of my home is a behavior programmed by fairly simple algorithms. And it is not smart, or its algorithm would have already become more flexible to keep it from getting stuck.
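To illustrate, in a deliberately toy way, what that missing flexibility could look like – a sketch of my own, not how any real robot-vacuum firmware works – consider an agent that keeps count of where it gets stuck and stops entering those spots. It changes its future behavior in response to what reality does to it, which is the minimum the definition above asks for:

```python
from collections import Counter

class FlexibleVacuum:
    """Toy agent that avoids cells where it has gotten stuck too often.
    Purely illustrative; not modeled on any real robot vacuum."""

    def __init__(self, stuck_threshold=2):
        self.stuck_counts = Counter()
        self.stuck_threshold = stuck_threshold

    def report_stuck(self, cell):
        # Memory: record that this spot caused trouble.
        self.stuck_counts[cell] += 1

    def should_enter(self, cell):
        # Flexibility: do something different once a spot has
        # proven troublesome often enough.
        return self.stuck_counts[cell] < self.stuck_threshold

vacuum = FlexibleVacuum()
vacuum.report_stuck((3, 5))
vacuum.report_stuck((3, 5))
print(vacuum.should_enter((3, 5)))  # False: it now steers clear of that spot
print(vacuum.should_enter((0, 0)))  # True: no bad history here
```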

Since a functional algorithm is clearly no guarantee of intelligence, the expression “artificial intelligence” should be reserved for artificial (that is, non-biological) cognitive systems that are actually flexible. Still, they will not have human values, because they are not human. And this, to me, is the question that matters: how smart, by my definition, is it to leave decisions about our future in the hands of systems that don’t share our values?

Author Bio: Suzana Herculano-Houzel is Associate Professor of Psychology at Vanderbilt University
