From Wikipedia to AI, how can we help students understand how digital tools work?


Like Wikipedia before them, generative artificial intelligence tools are subject to misunderstanding and prejudice among students, and even among teachers. How can we help them better understand how these digital tools work and avoid anthropomorphism? Could the use of metaphors be a fruitful avenue?


Much of the current discourse on the use of generative artificial intelligence (GenAI) in education resembles the discourse that has long accompanied the use of Wikipedia.

First comes distrust, backed by lists of errors spotted here and there. Then comes an accusation of laziness against users who take the output as-is, without rewriting or verification. Finally comes a more or less well-founded denigration of their quality and an initial urge to ban them, rooted in a problem that is not always clearly stated: their use calls into question the work and assessments required of students.

These similarities, however, hide important differences between Wikipedia and GenAI tools, particularly in how their functioning is understood or misunderstood, and in the implications of that misunderstanding. Yet the circulation of metaphors, for Wikipedia as for GenAI, can encourage more diverse and interesting uses.

Wikipedia: students know very little about how it works

For twenty years, I have been surveying master's students about their knowledge and use of Wikipedia. Most of them are completely unaware of the project's founding principles, even though there are only five: encyclopedism, neutrality of point of view, free licensing, community etiquette, and flexibility of rules. Worse, they have never even identified the features visible in the standard interface, such as language links, page history, and discussion pages.

Although they are regular users, they have never had the curiosity to explore what Wikipedia has to offer. They believe they have a general understanding of how it works, but that understanding does not allow them to develop in-depth uses of a multilingual encyclopedic project whose collective writing process is open to inspection.
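That collective writing process is not hidden: every article's complete revision history is exposed through the public MediaWiki API. As a minimal sketch (assuming Python with the requests library; the article title is just an example), one can list who made the most recent edits to a page and why:

```python
import requests

# Public MediaWiki Action API endpoint for the English-language Wikipedia.
API_URL = "https://en.wikipedia.org/w/api.php"

def recent_revisions(title, limit=5):
    """Fetch the most recent revisions (timestamp, author, edit summary)
    of a Wikipedia article."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user|comment",
        "rvlimit": limit,
        "format": "json",
    }
    data = requests.get(API_URL, params=params, timeout=10).json()
    # The API keys pages by internal page id; take the single page returned.
    page = next(iter(data["query"]["pages"].values()))
    return page.get("revisions", [])

# Example: who last edited the article on Wikipedia itself, and why?
for rev in recent_revisions("Wikipedia"):
    print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```

Running this on any article makes the "anyone can edit" principle concrete: each revision is signed, timestamped, and open to scrutiny.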

They subscribe to a simplistic maxim: "Anyone can contribute to an article (meaning anyone at all, including uneducated or malicious people), THEREFORE it cannot be of good quality."

How can such a statement be refuted? From a theoretical point of view, it is true: content (well, almost all of it) can be modified at any time. But in practice, few people engage in destructive behavior, and Wikipedia's organization allows "inappropriate" interventions to be detected quickly (through automatic alert systems, bots, and so on). Most articles are stable and of good quality. Moreover, as early as 2004, studies showed that the error rate in Wikipedia articles was no higher than in major encyclopedias.

This misunderstanding has consequences that Wikipedia itself guards against. Its general disclaimer states that it "does not guarantee the validity, accuracy, completeness, or relevance of the information contained on its site." This leads to a commonplace piece of advice: "Wikipedia should be a starting point for research, not a destination, and it is advisable to independently confirm any facts presented." Wise advice. But when students are asked whether they follow it, the answer is… no.

Generative AI: opaque operation and hallucinations

With artificial intelligence, we have moved from AI as a scientific field to "AIs": entities with ill-defined contours, capable of rapidly producing text, images, video, and more. As personal or collective productivity tools, their limits and harms are well known and widely shared:

  • Constitutive biases and hallucinations: biases inherited from past documents, discrimination, and hallucinations (see the famous six-fingered hands or invented references), all characteristics inherent to how they function…
  • A lack of understanding: well captured by the metaphor of the stochastic parrot, an illusion of comprehension reinforced by discourse in an anthropomorphic register, no overall control, and a lack of explainability (although this is improving)…
  • Highly variable response quality, which can depend on how much the user pays, raising questions about the future, given the energy these systems require (an ecological problem).

Learning how to use them is a goal that schools must take on. But how? For some, the answer is to show their inner workings. This has rarely been done for Wikipedia, since users believe they already know it. It is now proposed for GenAI. However, internal descriptions, which attempt to explain how a large language model generates sequences of tokens, are so far removed from the implementations users actually encounter that this does not seem like a good solution.
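To see why such internal descriptions feel remote, consider what "generating a sequence of tokens" boils down to: repeatedly sampling the next token from a probability distribution conditioned on what came before. Here is a deliberately toy sketch in Python (the vocabulary and probabilities are invented for illustration; a real model computes these distributions with a neural network over tens of thousands of tokens, conditioned on the whole preceding context):

```python
import random

# Toy "language model": maps the current token to a probability
# distribution over possible next tokens.
TOY_MODEL = {
    "the": {"cat": 0.5, "dog": 0.4, "<end>": 0.1},
    "cat": {"sat": 0.6, "ran": 0.3, "<end>": 0.1},
    "dog": {"ran": 0.7, "sat": 0.2, "<end>": 0.1},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start, max_tokens=10):
    """Generate text one token at a time by sampling from the model's
    next-token distribution, stopping at the <end> token."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = TOY_MODEL[tokens[-1]]
        next_token = random.choices(list(dist), weights=list(dist.values()))[0]
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat"
```

The distance between this bare sampling loop and the polished chat interfaces students actually use is precisely the gap described above.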

Metaphors to question digital uses

Proposing a series of metaphors helps us better understand both certain operating principles of generative artificial intelligence and the ways it is used. It can also help us resist potentially dangerous anthropomorphization. Talking about "Swiss army knives," "stochastic parrots," "drunken interns," or "supreme lords" leads to different understandings of our interactions with GenAI tools.

Regarding Wikipedia, widespread metaphors are lacking, apart from rather general ones like "a cathedral under constant construction." Asking ChatGPT what images it would suggest yields others, such as "digital palimpsest," "the world's largest encyclopedia… written in pencil," or "a canary in the coal mine for digital trust!" Some come with explanations of how Wikipedia works:

“The Living Genome of World Culture: An Encyclopedia as a Living, Evolving, and Environmentally Sensitive Organism.
The Shadow Theatre of Neutrality: A Subtle Critique of Editorial Neutrality, Never Fully Achieved.”

All these metaphors help to construct a more interesting and critical vision of Wikipedia. The same is true for GenAI. When asked to illustrate cases of generative AI hallucination with metaphors, ChatGPT demonstrates a certain poetry. It evokes:

“an ornithologist who mistakes rare birds for mythological creatures”;

“a storyteller who invents details to make his story more captivating”;

“a surgeon who operates with poorly calibrated instruments, producing unexpected results.”

Note, however, that ChatGPT still describes itself as a human being with intentions. It is hard to escape anthropomorphization! Given the same prompt, Claude.ai generated a presentation on the Internet, a non-anthropomorphic and more interesting synthesis. Shifting how we question GenAI tools can allow us to better understand how they work.

Author Bio: Eric Bruillard is a Lecturer-Researcher at Paris Cité University
