Why is it not urgent to incorporate artificial intelligence into teaching?


The rise of generative artificial intelligence has inevitably affected education. On the one hand, there are all the problems related to the inappropriate use of this technology (for example, using it to do assignments). On the other hand, teachers have another legitimate concern: shouldn’t I already be using it in my classes?

The answer to this question, especially when it comes to students’ use of artificial intelligence, is simple: “No. There is no rush.” And there are many reasons not to rush. We explain them below.

The lessons of history

We might think that the following statement was made by some expert technologist in the educational field, referring to the consequences of using tablets or artificial intelligence in the classroom:

“Very soon, books will be considered obsolete objects in schools.”

The reality is that this statement was made in 1913 by Thomas Edison, and he was referring to the effects that cinema would have on education.

The same has historically been true of other technologies. Benjamin Darrow, founder and first director of the Ohio School of the Air, stated in his 1932 book “Radio: The Assistant Teacher”:

“Radio will make the services of the best teachers available to people.”

Lyndon Johnson, the thirty-sixth president of the United States of America, stated in 1968:

“Thanks to television, students learn twice as fast as before and retain what they have learned.”

When a disruptive and novel technology appears, people quickly think of its application in the field of education. In this sense, the American historian Larry Cuban identified a pattern in this application consisting of four phases: euphoria, scientific credibility, disappointment and blame.

From euphoria to disenchantment

In the euphoria phase, both the companies that design and market the technology and the so-called “evangelists” (people who may or may not be connected to education, but who are often tied to those same technology companies) proclaim its supposed pedagogical benefits.

In the scientific credibility phase, articles are published and studies are sought whose results support the pedagogical benefits that the technology supposedly provides.

The disappointment comes when it turns out that there is no such benefit (or at least not as promised).

In the fourth and final phase, blame is assigned: someone must be responsible for having sold us this technology as a panacea. Regardless of who is held at fault, the truth is that those who pay the price for the haste and the mistake are the students.

As an epilogue to Cuban’s theory, and drawing on Gartner’s “Hype Cycle” (with which Cuban’s theory shares certain similarities), two things can happen to these technologies: either they fall into oblivion (as far as education is concerned), or some genuinely useful, concrete and realistic use cases are identified, which end up finding their place in the classroom. While learning management systems (LMS), for example, have been integrated into education and are used in a multitude of institutions at all levels, other technologies such as blockchain have not yet taken off.

From mobile devices to artificial intelligence

Two such cycles, at different phases, currently overlap: one concerning mobile technologies, which has already reached the third and fourth phases, and another concerning the use of AI, which is still in its early stages.

In different regions of Spain, the use of mobile phones in the classroom has begun to be banned and the use of laptops is starting to be questioned. A little over a year ago, Sweden proposed modifying its digital plan to reintroduce books (just the opposite of what Thomas Edison proclaimed a little over a century ago, although with a different technology in mind) after an 11-point drop in the PIRLS test (Progress in International Reading Literacy Study).

As with the historical examples mentioned above, this technology was introduced into a multitude of digital school plans without its effectiveness having been adequately tested. And it is now becoming clear that it does not deliver what was promised.

The second cycle we are experiencing is the one related to artificial intelligence. Companies like OpenAI, Microsoft or Google, and evangelists of all kinds, including YouTubers, hobbyists and, of course, other companies tied to this technology, encourage us to introduce it into the classroom. An example of this is Sal Khan’s statements in an interview with Microsoft, where the founder of Khan Academy claims that “if you don’t use it at least a little, you’re going to have problems in a couple of years.” It is that first phase of euphoria, in which the shine of the technology can blind the discernment of some.

The reality is quite different: there is no evidence that the widespread use of artificial intelligence in the classroom improves student learning.

It is a different matter for a teacher to find a specific use of a particular system that benefits students (not because he or she imagines or wishes it, but because the improvement actually occurs). But this requires time, discernment and results. And that is true of any technology, not just AI.

Caution versus urgency

In addition to the historical cycle that usually affects technologies in their application to teaching, there are other reasons why there is no rush to implement artificial intelligence in education.

One is the delegation of skills. When we delegate a skill to a technology, that skill is obviously affected: by delegating numerical calculation to the calculator, we have lost calculation speed. In exchange, we can do much more complex calculations with 100% accuracy.

Sometimes it pays off. But other times it doesn’t. What skill will my students lose by using a particular AI app or system? Maybe none. Or maybe one that matters, in which case AI might not be the right tool.

Another reason for caution has to do with data. The AI used today is fundamentally based on the massive use of training data. When we interact with ChatGPT and other language models, those interactions are often used to train those models, even if we (or our students) include personal data in them. Are we going to expose, even unintentionally, our students’ privacy?

Finally, there are the biases, errors and inaccuracies produced by language models. If education must be based on truth, and the educational system is a sensitive one because the formation of future generations depends on it, we should not introduce a tool that fails: the so-called “hallucinations” are unavoidable given the probabilistic nature of language models.

A hammer in search of nails

Psychologist Abraham Maslow, referring to failures in problem solving, said that when you have a hammer, everything looks like a nail. And today, artificial intelligence is the hammer of the moment, even in cases where it is more of a problem than a solution, such as in the area of sustainability.

In education, as in any other field, there are problems whose nature and solution are not technological and which call for a different kind of approach. AI, or any other technology, should only be used if it provides a real benefit to students. The fact that it is used outside the classroom, or that it is fashionable, does not justify its use.

Author Bio: Enrique Estelles Arolas is a professor and researcher in new technologies applied to education and collective intelligence at the Catholic University of Valencia.
