ChatGPT: facing the artifices of AI, how media education can help students


Who hasn't heard of ChatGPT, the generative artificial intelligence capable of responding to Internet users' queries with complex texts? The December 2022 release of this software, designed by the company OpenAI, sparked a multitude of articles, caught between visions of catastrophe and utopia, producing a media panic, as illustrated by the March 2023 open letter calling for a moratorium on the development of this type of system, signed by a thousand researchers.

As a study by the Columbia Journalism Review shows, the panic did not start in December 2022 with OpenAI's launch but in February 2023, with announcements from Microsoft and Google, each unveiling a chatbot integrated into its search engine (Bing Chat and Bard, respectively). Media coverage muddies the picture, focusing more on the potential replacement of humans than on the very real concentration of AI ownership in the hands of a few companies.

Like any media panic (the most recent being those over virtual reality and the metaverse), its purpose and effect is to create a public debate that allows actors other than those in media and tech to take hold of the issue. For media and information literacy (MIL), the stakes are high in terms of social and school interactions, even if it is still too early to measure the consequences for teaching of these language models, which automatically generate texts and images and make them available to the general public.

In parallel with regulatory political action, MIL allows citizens to protect themselves from the risks associated with the use of these tools, by developing their critical thinking and adopting appropriate, responsible strategies of use. Algo-literacy, the subfield of MIL that considers what data does to the media, makes it possible to apply these critical reading skills to AI. Here are four directions in which MIL can help us navigate these chains of algorithmic interactions, from their productions to their audiences.

Taking into account the geopolitics of AI

It is the companies controlling search engines, and therefore access to information, Google and Microsoft, that have the most to gain from the development of generative AI. They are organized, American-style, as a duopoly, with a (false) challenger, OpenAI LP, which is actually the commercial arm of the initially non-profit OpenAI lab (largely funded by Microsoft).

Another story can be told, especially by the media: that of the incredible concentration of power and money in the hands of a very small number of Silicon Valley companies. They are granting themselves a monopoly on access to information and on all the productions derived from it. They fuel head-on competition between the United States and China on the subject. The strategy of Google and Microsoft is indeed intended to pull the rug out from under the Chinese government, which does not hide its ambitions in AI development.

The option of a pause or a moratorium is a pipe dream in the face of what amounts to an arms race. The inventors themselves, like repentant sorcerer's apprentices, including Sam Altman, the CEO of OpenAI, proposed "AI governance" in May 2023. But might this not be in the hope of avoiding the full brunt of government regulation beyond their control, which would put a damper on their commercial ambitions? The European Union has anticipated this by preparing an AI Act to regulate the uses of this new digital development.

Questioning the quality of the texts and images provided

Not everything that is plausible is necessarily meaningful. The AI that drives the ChatGPT software produces answers to queries, and they appear quickly, in rather polished, well-groomed language! But it can also generate errors, as a New York lawyer learned to his chagrin after submitting a brief riddled with false legal opinions and fabricated citations.

So be wary of AI-generated pseudo-science. The content offered may carry biases, because it comes from the exploitation of huge databases. These include datasets drawing on sources of all kinds, including social media! And the latest free version of ChatGPT relies on data that stops in early 2022, so it is not really up to date on current events.

Many of these databases come from English-speaking countries, with the attendant algorithmic biases. As a result, ChatGPT risks creating misinformation, lending itself to malicious uses, or amplifying the beliefs of those who use it.

It should therefore be used like any other instrument, like a dictionary with which to do research or work out a draft, without entrusting it with secrets or personal data. Asking it to produce its sources is good advice, but even that does not guarantee the absence of flaws: the chatbot tends to produce a list of sources that look like citations but are not all real references.

In addition, we must not forget the copyright problems, which will not be long in making themselves felt.

Being wary of the imaginaries around AI

The term "artificial intelligence" is not really appropriate for what is, in fact, pre-trained data processing (the meaning of the acronym GPT: generative pre-trained transformer).

This anthropomorphism, which leads us to attribute thought, creativity and feelings to a non-human agent, is damaging on two counts. It rekindles all the anxiety-provoking myths warning of the non-viability of any porosity between the living and the non-living, from the Golem to Frankenstein, with their fears of the extinction of the human race. And it undermines a serene understanding of the real usefulness of these large-scale transformers. Science fiction does not help us understand science, nor, therefore, to formulate ethical, economic and political benchmarks.

These imaginaries, however active they may be, must be demystified. The so-called "black box" of generative AI is rather simple in principle. Large-scale language models are algorithms trained to reproduce the codes of written (or visual) language. They are trained on thousands of texts crawled from the Internet and convert an input (a sequence of letters, for example) into an output (their prediction for the next letter).

What the algorithm generates, at very high speed, is a series of probabilities, which you can verify by running the same query again and seeing that the results are not the same. There is no magic there, and no sentience either, even if the user has the feeling of having a "conversation", another word borrowed from the human vocabulary.
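To make this principle concrete, here is a minimal, purely illustrative sketch in Python, not OpenAI's actual model: a toy character-level model that counts which character follows which in a tiny corpus, converts those counts into probabilities, and then samples from them. A real transformer learns these probabilities with a neural network over tokens rather than simple counts, but the sampling step is why two identical queries can yield different answers.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus (a real model is trained on vastly more text).
corpus = "the cat sat on the mat. the cat ate the rat."

# Count, for each character, which character follows it in the corpus.
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def next_char_distribution(char):
    """Turn raw follow-up counts into a probability distribution."""
    counts = follow_counts[char]
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def generate(seed, length=30):
    """Extend the seed one character at a time by sampling from the model."""
    text = seed
    for _ in range(length):
        dist = next_char_distribution(text[-1])
        if not dist:  # a character never seen in training: stop
            break
        chars, probs = zip(*dist.items())
        text += random.choices(chars, weights=probs)[0]
    return text

print(next_char_distribution("t"))  # probabilities, not certainties
print(generate("th"))  # run it twice: the outputs will usually differ
```

Running generate twice with the same seed will usually produce different strings, the toy equivalent of asking a chatbot the same question twice and getting two different answers.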


And it can be fun, as shown by BabyGPT, an AI created by the New York Times that works on small closed corpora to show how to write in the style of Jane Austen, William Shakespeare or J.K. Rowling. Even ChatGPT is not fooled: when asked how it feels, it replies, quite bluntly, that it is not programmed for that.

Varying the tools

The audiences of AI, especially at school, must therefore develop knowledge and skills around the risks and opportunities of this kind of so-called conversational robot. Beyond understanding the mechanisms of automated processing of information and disinformation, several other precautions lend themselves to education:

  • beware of a monopoly on online queries, which Bing Chat and Google Bard are each vying for, by regularly using several competing search engines;
  • require labels, color codes and other markers indicating that a document has been produced by an AI or with its help; this is common sense, and some media outlets have already anticipated it;
  • ask producers to reverse-engineer their systems so as to build AIs that monitor AI, as is already the case with GPTZero;
  • start legal proceedings in cases of ChatGPT "hallucination", another anthropomorphized term used to describe a system error!
  • and remember that the more you use ChatGPT, in its free or paid version, the more you help it improve.

In the educational field, EdTech marketing touts the benefits of AI for personalizing learning, facilitating data analysis, increasing administrative efficiency, and so on. But these metrics and statistics can in no way substitute for the validation of acquired skills or for the productions of young people themselves.

However intelligent it claims to be, AI cannot replace the need for students to develop their critical thinking and their own creativity, and to train and inform themselves by mastering their sources and resources. As EdTech, particularly in the United States, rushes to introduce AI into classrooms, from primary school to higher education, the vigilance of teachers and decision-makers remains essential to preserve the central missions of schools and universities. Collective intelligence can thus keep the upper hand over artificial intelligence.

Author Bio: Divina Frau-Meigs is Professor of Information and Communication Sciences
