Artificial intelligence and human thinking

Like any new technology, artificial intelligence is the subject of both hopes and fears, and what it encompasses today presents major challenges (Villani et al., 2018). It also raises profound questions about our own humanity. Will the machine exceed the intelligence of the humans who conceived it? What will be the relationship between what are called artificial intelligences and our human intelligence?

In a recent book (2017), Jean-Gabriel Ganascia answers the first question: he shows quite simply that an algorithmic artificial intelligence is developing (he speaks of “artificial technical intelligence”) whose performance is genuinely disrupting our society, because we live in the age of algorithms (Abiteboul & Dowek, 2017). However, the idea of a strong artificial intelligence that exceeds human intelligence is neither true nor false: it is a belief, because it is not supported by scientific arguments. It is in the interest of those who dominate the digital market to make us believe it, and of media in search of an audience to relay this belief.

Critical thinking and creativity

So, before worrying about our competences in the face of this presumed world of artificial intelligences – let us say “AI” to designate this belief, and name the established scientific elements for what they are – we need to sharpen our key human skills: critical thinking and creativity. First of all, if we think critically, we should start by questioning the expression “AI” itself.

Is the term “intelligence” relevant to designate computer applications based, in particular, on machine learning? The goal of these algorithms is to develop systems that capture, process, and respond to (massive amounts of) information in ways that adapt to the context or the data, in order to maximize the chances of achieving the goals set for the system.
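To make this concrete, here is a minimal sketch, with invented toy data, of what “adapting to data in order to maximize an objective” means in practice: a model whose parameters are adjusted step by step to reduce its error on observed examples. It illustrates the general mechanism only, not any particular system.

```python
# Minimal, illustrative sketch: a system "adapts to data" by adjusting its
# parameters to reduce an error, i.e. to better achieve the goal it was given.
# Hypothetical toy data: hours of study -> exam score.
data = [(1.0, 52.0), (2.0, 55.0), (3.0, 61.0), (4.0, 64.0), (5.0, 70.0)]

w, b = 0.0, 0.0           # parameters the system will adapt
learning_rate = 0.01

for step in range(5000):
    # Gradient of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned model: score ~= {w:.2f} * hours + {b:.2f}")
# The "adaptation" is purely numerical optimization: there is no understanding
# of what "hours" or "score" mean in a human, socio-cultural sense.
```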

This seemingly “intelligent” behavior has been created by humans and has limitations related, on the one hand, to the current human capacity to design effective machine learning systems and, on the other hand, to the availability of the massive data these systems need in order to adapt. The fact remains that these systems outperform humans on very specific tasks such as the recognition of sounds or images or, more recently, reading tests such as the Stanford Question Answering Dataset (SQuAD).
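As a rough illustration (the library and the model name are examples chosen here, not something discussed in the article), this is how such a SQuAD-style “reading test” system can be queried with a pre-trained extractive question-answering model:

```python
# Illustrative sketch: an extractive question-answering model of the kind
# evaluated on SQuAD. It selects a span of the given text statistically;
# it does not "understand" the text in the human sense discussed here.
from transformers import pipeline

# The model name is only an example of a SQuAD-style model.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "The Turing test, proposed by Alan Turing in 1950, is based on a purely "
    "linguistic confrontation between a human judge and another agent."
)
result = qa(question="Who proposed the Turing test?", context=context)
print(result["answer"], result["score"])  # e.g. "Alan Turing" with a confidence score
```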

Does scoring better on a reading test mean being able to understand, in a human and intelligent sense, the text that was read? The statistical ability to identify answers may seem intelligent, but there is no evidence that it is intelligence in the critical and creative sense of humans.

Skills of the 21st century. @margaridaromero

What should we learn today? Twenty-first-century skills take into account the pervasiveness of the digital world and the need to strengthen human development in terms of attitudes (tolerance of ambiguity, tolerance for error, risk taking), knowledge, and technologies.

When, in the 1950s, Turing proposed a test based on a purely linguistic confrontation between a human and another agent, which could be a machine or another human, he was not targeting the intelligence of the machine, but the intelligence we might attribute to it.

If the human judges that they are interacting with a human agent and not a machine, the artificial intelligence test is considered passed. But can a good capacity to respond in human conversation really be enough to consider a machine intelligent?

Defining human intelligence

If we consider intelligence as the capacity to learn (Beckmann, 2006) and learning as contextual adaptation (Piaget, 1953), it would be possible to consider as intelligent those systems capable of improving their adaptation to a context by collecting and processing data. However, if we view intelligence, under a diversified and dynamic approach, as the “ability to solve problems or create solutions that have value in a given sociocultural context” (Gardner and Hatch, 1989, p. 5), it is more difficult to consider that a system, however adaptive and however massively fed with data, can make a metacognitive judgment about its process and its products in relation to a given socio-cultural context.

Gardner and Hatch’s definition of human intelligence is very close to that of creativity as a process of designing a solution that is considered new, innovative and relevant to the specific context of the problem-situation (Romero, Lille and Patino, 2017).

Intelligence is therefore not the ability to perform according to pre-established or predictable rules (including through mechanisms of adaptation or machine learning on data), but rather the ability to create something new while demonstrating sensitivity and adaptation to the socio-cultural context, together with intra- and inter-psychological empathy towards the different actors. This requires understanding human and socio-historical nature in order to be able to judge one’s own process and creation autonomously.

If we adopt this second, critical and creative approach to intelligence, we should be cautious about using the term AI for solutions that “merely” adapt according to pre-established mechanisms and cannot produce a self-reflexive judgment of value, nor take a socio-cultural perspective.

Machine learning systems that are labeled AI can perform well on the basis of sophisticated models fed with massive data, but they are not “intelligent” in the critical and creative sense in which humans are.

Thus, my phone can learn to recognize the words I dictate, even though I have an accent, and it will infer that accent all the better as I keep using the system. But attributing real intelligence to it in the face of my voice dictation is, for all that, a subjective projection, that is, a belief.
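A purely hypothetical sketch of such adaptation, not a description of how any actual phone works: the recognizer re-ranks its candidate transcriptions using the words this particular user has accepted before.

```python
# Illustrative sketch only (not how any particular phone actually works):
# a recognizer "adapts" to a user by re-ranking its candidate transcriptions
# according to words the user has accepted in the past.
from collections import Counter

user_history = Counter()   # words the user has previously confirmed

def rerank(candidates):
    """candidates: list of (transcription, acoustic_score) pairs."""
    def personalized_score(item):
        text, score = item
        # Small bonus for words this user dictates often.
        bonus = sum(user_history[w] for w in text.split()) * 0.1
        return score + bonus
    return max(candidates, key=personalized_score)

def confirm(transcription):
    """Called when the user accepts a transcription: the system 'adapts'."""
    user_history.update(transcription.split())

# Hypothetical usage: the same statistical machinery, no understanding involved.
confirm("call Margarida")
best = rerank([("call margarita", 0.82), ("call Margarida", 0.78)])
print(best)  # after adaptation, the user's habitual wording wins
```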

Developing critical thinking

We can also question the “intelligence” of AI in relation to the critical thinking that characterizes human intelligence. As part of the #CoCreaTIC project, we define critical thinking as the capacity to develop independent, critical thought, allowing the analysis of ideas, knowledge and processes in relation to a system of values and judgments.

It is a responsible form of thinking, based on criteria and sensitive to context and to others. If, on the other hand, we think of machine learning systems and the politically incorrect results they have produced – images and textual responses that can be labeled discriminatory – we should neither fear, condemn, nor accept these results, because they carry no moral value. The most likely explanation is that, by “learning” from human data, the mechanism reproduces racist and sexist elements; it has no value system of its own. This so-called AI does not exhibit responsible thinking; rather, it amplifies certain drifts that humans are able to produce, but also to limit and correct through their criteria and their sensitivity to others.
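A toy sketch, with entirely hypothetical data, of how this happens: a system that “learns” by counting past decisions simply replays whatever bias those decisions contained.

```python
# Toy sketch (hypothetical data): a "learning" system reproduces the bias
# present in its training examples, because it has no value system of its own.
from collections import defaultdict

# Hypothetical historical hiring decisions, biased against group "B".
training = [
    ({"group": "A", "qualified": True}, "hire"),
    ({"group": "A", "qualified": True}, "hire"),
    ({"group": "B", "qualified": True}, "reject"),
    ({"group": "B", "qualified": True}, "reject"),
]

# "Learning" = counting which decision was most frequent for each group.
counts = defaultdict(lambda: defaultdict(int))
for example, decision in training:
    counts[example["group"]][decision] += 1

def predict(example):
    group_counts = counts[example["group"]]
    return max(group_counts, key=group_counts.get)

print(predict({"group": "B", "qualified": True}))  # "reject": the bias is simply replayed
```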

Here is an attempt to define critical thinking, itself open to criticism, proposed by the French national education system, notably in the form of an educational resource:

An attempted definition of critical thinking. Educsol.

In the #RapportVillani, critical thinking is invoked in the face of these technologies, both with regard to ethical aspects and in relation to the need to develop “critical thinking” about these subjects in education.

The report also highlights the importance of creativity in education as a way of preparing citizens for the challenges of what these algorithms make possible. Education through digital technology, especially through critical, creative and participatory approaches, can also help develop a relationship with computing that allows citizens to demystify AI, to develop ethical standards, and to adopt an informed attitude (accepting, or not, what will be used in their personal, social or professional activities).

For these reasons, the development of computational thinking skills is also an asset that complements the need to develop critical and creative thinking in the digital world.

The lever of computational thinking

In 2006, Jeannette Wing used the term “computational thinking” for the ability to use computer science processes to solve problems in any field. She presents computational thinking as a set of universally applicable attitudes and knowledge that goes beyond the use of machines.

To develop it, learners (from kindergarten onwards, and at all ages) can combine learning the concepts and processes of computer science that form part of “digital literacy” (object, attribute, method, design pattern, etc.) with a creative problem-solving approach that uses these computational concepts and processes (Romero, Lepage and Lille, 2017).
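As a purely illustrative example (the scenario is invented), here is the kind of small program a learner might write to practice the concepts of object, attribute and method named above while solving a concrete, contextualized problem:

```python
# Purely illustrative: a small program of the kind a learner might write to
# practice the concepts named above (object, attribute, method) while solving
# a concrete, contextualized problem such as planning a class garden.
class Plant:
    def __init__(self, name, days_to_harvest):
        self.name = name                    # attributes describe the object
        self.days_to_harvest = days_to_harvest

    def ready_by(self, days_available):     # a method: behavior attached to the object
        return self.days_to_harvest <= days_available

garden = [Plant("radish", 30), Plant("tomato", 80), Plant("lettuce", 50)]
days_left_in_term = 60

# Decomposing the problem: which plants can be harvested before the term ends?
for plant in garden:
    if plant.ready_by(days_left_in_term):
        print(f"{plant.name} can be harvested before the end of term")
```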

Projects like Class’Code in France or CoCreaTIC in Quebec have developed resources and a community to support this approach, in which the point is not to learn “coding” (in the sense of writing code in a programming language) for its own sake, but to solve problems creatively and in a way that is sensitive to the context of the problem.

In other words, going beyond coding means anchoring oneself in a broader approach of creative programming. It engages learners because it is a critical and creative problem-solving process that draws on computational concepts and processes.

It is not a question of coding for coding’s sake, of writing lines of code one after another, but of developing a complex problem-solving approach that engages a reflexive and empathic analysis of the situation, its representation, and the operationalization of a solution that benefits from metacognitive strategies related to computational thinking.

Developing a critical and creative approach to digital technology through computational thinking allows learners to move beyond a user posture in which AI might be perceived as a black box full of mysteries, dangers or unlimited hopes.

Understanding the issues of problem analysis in relation to problem situations rooted in specific sociocultural contexts (for example, migration issues) is a way of seeing computer science as both a science and a technology that allows us, within the limits and constraints of our models of a problem, to attempt answers fed by ever more massive data – answers that cannot be regarded as relevant or valuable without the engagement of human judgment.

For an education that enables us to live in the digital age

As the #RapportVillani points out, we need to face the emergence of AI with a more critical and creative education.

But we also need a more computing-oriented approach to digital literacy, so that citizens (young and old) can understand the human factor in the modeling and creation of artificial systems, the basic workings of algorithms and machine learning, and the limits of AI with respect to the judgment needed to assess the value of the solutions produced by algorithms.

For an enlightened citizenship in the digital age, we need to keep sharpening our critical thinking, creativity and collaborative problem solving, while adding a new string to our bow: the development of computational thinking.

Author Bio: Margarida Romero is an associate professor at Université Laval and director of the LINE laboratory of the ESPE de Nice at Université Côte d’Azur.
