These images that inform or disinform: cultivating critical thinking on digital networks


The recent context of the elections in the United States and the controversy surrounding the image of Elon Musk making a Nazi salute have shown the need for education about the photos and videos that circulate massively online. Regulating uses and technologies remains an issue, even as Europe measures the limits of the means at its disposal to subject foreign platforms operating on its territory to its own rules.

The evolution of digital technologies makes it easy to lend support to all sorts of theories, and manipulations are multiplying as artificial intelligence (AI) becomes more sophisticated.

Under these conditions, it is essential to give citizens the means, beyond regulation, to combat image-related information disorders, both by supporting fact-checking tools and by providing training in schools.

At a time when young people have a far more familiar relationship with images than previous generations, what are the challenges of media and information literacy (MIL)? This question will be at the center of the event “Living in a world of image(s): what uses, risks and education for young people?”, organized by France Universités in collaboration with CLEMI on March 6, 2025, at the Université Paris-Cité.

Images, the foundation of a generational intimacy

Young people’s relationship with images is overwhelmingly digital, with television and cinema screens now shared mainly within the family. According to the Born Social study, the platforms most used by young people aged 11 to 19 are YouTube and WhatsApp, followed by Snapchat, TikTok and, finally, Instagram – all platforms where images are dominant.

The advantage of the image is that it is effective, accessible, understandable by all and impactful. Young people are not only heavy consumers of images but also producers. These new types of images (memes, GIFs, etc.) circulate massively and are often built from extracts of films or series, news images or videos, and photographs of famous people (historical, political, legendary, or from the entertainment world).

This taste for repurposed images creates a generational intimacy built around shared references. It also offers a way to express emotions and ideas tied to daily life, to mock the adult world and anxiety-inducing current events, or even to build a shared humor.

Dealing with streams of images disconnected from their context

The ephemeral nature of image content on platforms like YouTube, Instagram, Snapchat and TikTok is changing the way young people perceive images. This affects their relationship with information because, unlike the written press and television, these images are neither ranked by editorial importance, nor contextualized, nor explained. This fundamental difference redefines the notions of memory and archive, but also of source and veracity.

Sensationalist images are the ones that circulate the most. Negative and virulent emotions tend to generate a much higher rate of engagement and views than positive sentiments. Because algorithms promote the most-viewed images regardless of their content, platforms have become “echo chambers” that favor shocking images, which spread at high speed without the traditional gatekeeping of information.

This fosters an environment where like-minded individuals come together, creating information bubbles in which users are exposed to images that mostly match their views, unlike the diversity found in traditional media. The result is a polarization of opinions. Radicalization trends benefit from this system, which brings together large communities that can organize conspiracy, intimidation or harassment operations.

Another observation is that fake news, visual propaganda and everything related to post-truth are favored on social networks, and such content spreads quickly and without sanction. The result is a society in which post-truth gains ground over the veracity of facts, as shown by the presidential elections in Romania or the Covid-19 pandemic.

Misinform, disinform, malinform?

Post-truth raises important societal issues concerning public trust in official institutions, in the figures of the politician, the journalist or the scientist, and in the reality of facts, which is dangerously relegated to the background in favor of distrust, conspiracy theories and sensationalism.

How can a young person, alone in front of a screen, sort what is true in an image from what is false?

Young people’s “attentive perception” must be developed and supported so they understand how images can misinform or disinform in several ways, by playing on emotions, perceptions and contexts. The Arcom barometer provides alarming indicators about the current situation. We must help them distinguish between misinformation, disinformation, conspiracy theories and fake news, terms that are all used in social and media discourse.

Although similar, these concepts refer to different realities and practices. The difference between misinformation, malinformation and disinformation rests mainly on intentionality. Disinformation and malinformation are always the result of an intention. Malinformation is information based on reality, used to inflict harm on a person, a social group, an organization or a country. Misinformation, on the other hand, is unintentional. Disinformation encompasses fake news, malinformation and the deliberate fabrication of false news.

What underlies disinformation is the intention to mass-produce fake news for profit or political gain, in order to influence, destabilize or even harm an organization, a state, a community or democracy itself. In some countries, fake news can indeed be deadly, especially when it is created to stir up hatred of one community against another. The field of disinformation is all the more problematic because it benefits from increasingly effective dissemination strategies, which AI technologies are reinforcing.

Exercising vigilance with media and information literacy (MIL)

While young people know that images can be altered or retouched using software (which can distort reality), they are not always aware that this can be done with malicious or even propagandistic intent. This is why they need to be given the tools and critical skills to put images into context, assess their relative importance, and compare sources.

An image can indeed be presented outside its original context, which changes its meaning. For example, a photo of an event can be used to illustrate a completely different subject, giving a false impression of what is really happening, or be chosen to make people believe the situation is more serious than it is, playing on negative emotions and dynamics of escalation.

It is increasingly difficult to find reliable reference points and verified information, since even the media can be deceived. Everything related to generative AI has therefore become an educational issue.

Media and information literacy, combined with knowledge of the fact-checking tools provided by institutions such as CLEMI (the French center for media and information literacy), Arcom, associations, the European Commission and the media themselves, helps train young people to take a critical distance. This is all the more important since AI is part of their daily lives and new image-based disinformation challenges emerge with this technology every day.

Author Bio: Pauline Escande-Gauquié is a professor of information and communication sciences and a semiologist at Sorbonne University.
