Algorithms are automating many human actions and triggering fear responses in us. Fear is itself nothing more than a primary emotion produced by biochemical algorithms whose adaptive function is protection. It is curious that automation provokes in us an emotional response that is, in turn, an automatism. But we tend to fear the unknown.
The emergence of artificial intelligence, with tools such as ChatGPT, generates feelings of insecurity, threat and uncertainty. It stirs fear of change and of losing control over our thinking and our decisions.
Who makes the decisions?
A first antidote to fear is knowledge. You do not have to go to Silicon Valley to understand how algorithms or artificial intelligence work. We carry the explanation within ourselves.
If we stop to think about how our body handles the huge amount of information it receives, consciously and unconsciously, we realize that we do not attend to all of it equally. Homo sapiens has a limited processing capacity. We have come this far by being effective, not by being perfect or exhaustive.
Slow and fast thinking
Daniel Kahneman, winner of the 2002 Nobel Prize in economics, explains in his bestseller Thinking, Fast and Slow how two systems of thought coexist in humans. On the one hand, we have the system responsible for controlled processes. It demands more attention and consumes more cognitive resources. To understand the article you are reading right now, for example, you need this system. Learning a language, or learning to swim, would be other examples.
On the other hand, we have the automatic system. It operates outside voluntary control and consumes few resources. It is activated when we see our child step into the street without looking, or when we drive home from work.
One seems to make us progress and innovate; the other saves us in critical situations or takes care of tasks that do not require our attention.
Both are necessary, and they do not work separately. This idea is key, and it is good news. A swimmer who wants to automate the front crawl, for example, must deliberately practice the individual movements and consciously integrate them. With enough practice, the whole technique becomes automatic.
In a competition you will not be thinking about whether your right elbow rises during the recovery phase of the stroke. In stressful situations like these, it is in our interest for automatisms to take more control and let the attentional system rest.
How do we choose?
The problem we face with computer algorithms is that we do not clearly distinguish who should be making certain decisions. An example illustrates this idea.
If, at the end of a physical activity, an electronic device presumes to tell us the recovery time and even the intensity to follow, we are allowing a series of computer algorithms to dull our own judgment.
We stop paying attention to the emotions our body generates, which are precisely what can best guide us toward the type of activity that suits us at any given moment. Are these computer algorithms more precise than the biochemical algorithms that evolution itself has spent millennia refining?
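To make the example concrete, here is a purely invented sketch of the kind of fixed rule such a device might apply. Every name, number and threshold below is made up for illustration; real devices rely on proprietary models.

```python
# Hypothetical sketch: the sort of rigid rule a fitness device *might*
# apply. All thresholds are invented for illustration only.

def suggested_recovery_hours(avg_heart_rate: int, minutes: int) -> int:
    """Map session load to a recovery suggestion with a fixed rule."""
    load = avg_heart_rate * minutes  # crude "training load" proxy
    if load < 5000:
        return 12
    elif load < 12000:
        return 24
    else:
        return 48

# The rule answers instantly and confidently, whether or not it matches
# how this particular body actually feels today.
print(suggested_recovery_hours(150, 60))  # a 1-hour run at 150 bpm -> 24
```

The point of the sketch is not the numbers but the structure: the device's confidence comes from a fixed rule, not from any awareness of how we feel.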
Algorithms process a large amount of data, but often that data only distracts us or lulls us into a false sense of control simply because we have more information. Moreover, not all data is equally valid in all circumstances. The laws of statistics are not infallible, especially since our organism and our environment change continuously.
In education, searching for solutions to problems, or making decisions based on large amounts of data, can push us to lean on the help of algorithms or engineering thinking.
The set of mental processes that help us find automated solutions (with or without technology) to certain problems is known as computational thinking. This is one of the innovations incorporated, but not yet integrated, into the Spanish education system with the arrival of the new education law, the LOMLOE.
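A minimal classroom-style illustration of that definition (my own, not taken from the LOMLOE curriculum): computational thinking means decomposing a repetitive task into steps precise enough that a machine, or a person, can carry them out mechanically.

```python
# Toy task: from a list of scores, flag which ones need review.
# Decomposition: (1) state the rule, (2) apply it to each item,
# (3) collect the results. Once the steps are explicit, a machine
# can repeat them tirelessly.

def needs_review(score: float, threshold: float = 5.0) -> bool:
    """Step 1: an explicit rule for a single score."""
    return score < threshold

def flag_scores(scores: list[float]) -> list[float]:
    """Steps 2 and 3: apply the rule to every score, keep the flagged ones."""
    return [s for s in scores if needs_review(s)]

print(flag_scores([7.5, 4.0, 9.1, 3.2]))  # [4.0, 3.2]
```

The technology is optional; what defines computational thinking is making the steps explicit enough to automate.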
It is a clear indicator of the current concern with understanding how machines work, and of the power of engineering thinking and the technology industry. But it is also an opportunity to understand ourselves.
Automating tasks produces great results and well-being. For thousands of years we have used mental shortcuts (heuristics) that simplify complex cognitive problems and turn them into simple actions. Using heuristics saves energy. They are activated when we lack the time, the information or the capacity to process information. That is also when an algorithm can take control and decide for us (and offer us, for example, fake news).
The ideal is to find a middle ground: neither renouncing automatisms nor overusing them to the point of distancing ourselves from innovation.
The smartest move is to explore new alternatives when we perceive that the context has changed and our learning requires new strategies to be more efficient. Poorly used automation can inhibit the processes of innovation and reflection that are so necessary in the changing, uncertain world we live in.
When we criticize or fear an algorithm, before judging we should ask ourselves at least the following questions: am I buying functionality at the price of my privacy? Who is in control of the decisions?
Some experts argue that there is a paternalistic automation that steers human decisions and behavior through nudges: psychological mechanisms that affect decision-making by drawing on our heuristics and biases. Through these nudges, people choose and act without engaging the deliberate, conscious system of thought.
In the example above, the algorithm nudges me to train again based on data collected by a device. Is the algorithm more rational and efficient? And even if it were, do we really want it making our decisions?
Algorithms may come to know us better than we know ourselves, and they may automate many decisions. Ultimately, though, we remain in control of our decisions. It is an idea validated by the passage of time, which Epictetus, one of the most renowned Stoics, summarized as follows:
“From what exists, some things depend on us; others do not depend on us. Judgment, impulse, desire, rejection and, in a word, whatever is our business depend on us. And the body, property, reputation, positions and, in a word, everything that is not our business do not depend on us.”
(Epictetus, A Manual of Life, 3, cited by Pigliucci and Lopez in My Stoic Notebook).
Knowing how we work will help us find that balance between automation and innovation and, in doing so, decide who (the attentional system, the automatic system, or the algorithm) makes which decisions.
Author Bio: Jose Luis Serrano is Professor of Educational Technology at the University of Murcia