Generative AI, a major player in a society of disinformation?


Recent advances in generative artificial intelligence (the tools that can automatically produce text, sound, images or video) have raised fears of a resurgence of false information. These fears are heightened by the many elections scheduled in the coming months, starting with the European elections. What is really going on?

Generative AI, a false problem?

It should first be noted that, although the idea that generative AI is a source of danger in terms of disinformation is widespread, the opposite view also exists. For researchers Simon, Altay and Mercier, the arrival of generative systems does not fundamentally change the situation, either qualitatively or quantitatively.

They note that traditional sources of information continue to dominate: the vast majority of people get their news through traditional media, which retain the greater power of influence. The audience that turns to alternative media and “consumes” false information is, according to them, already immersed in such sources and is not so much looking for accurate information as for information that confirms its views (rooted in a generalized distrust of politicians and the media).

Their study contradicts the common view that sees AI as a major danger for democracy. It is based on surveys that clearly show the weight of ideology in news consumption: our ideology guides how we seek out information, and a classic bias consists in wanting to confirm what we already believe when several interpretations of an event are possible.

Very easy to use tools

It seems that increased text-production capacity is not the essential element: what plays the major role is the capacity to disseminate information. The same holds for images and videos, though here generative AI does seem to create a real breakthrough. Mastering a tool like Photoshop is long and complex; by contrast, AI tools such as DALL-E and Midjourney for images, or Sora for video, make it possible to generate realistic content from a few keywords, and we know the weight images carry in the news. The possibility of automatically creating fake videos with a person's voice, and even hyper-realistic lip movements, also creates a new state of affairs that was unimaginable only a few months ago.

Finally, note that tools for detecting AI-generated documents are highly imperfect, and no current solution can determine with 100% certainty whether a document is of human origin. Automatic marking (watermarking: a code undetectable to the naked eye indicating that a document was generated by an AI) could help, but there will obviously always be groups capable of producing unmarked files alongside the major public-facing platforms (these processes are not yet implemented at scale, but could be as legislation evolves).
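To make the watermarking idea concrete, here is a minimal toy sketch of the “green-list” scheme proposed in the research literature (Kirchenbauer et al., 2023): generation is biased toward a pseudo-random subset of the vocabulary derived from the previous token, and detection measures how often that bias appears. The vocabulary, function names and parameters below are illustrative assumptions, not a description of any deployed system.

```python
import hashlib
import random

# Hypothetical toy vocabulary; real systems use a model's full token vocabulary.
VOCAB = [f"w{i}" for i in range(1000)]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Derive a pseudo-random 'green' subset of the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(length: int, seed: int = 0) -> list:
    """A 'watermarked generator': always picks the next token from the green list."""
    rng = random.Random(seed)
    tokens = ["w0"]
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def green_fraction(tokens: list) -> float:
    """Detection: fraction of tokens that fall in the green list seeded by their predecessor."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / (len(tokens) - 1)

watermarked = generate(200)
rng = random.Random(42)
unwatermarked = ["w0"] + [rng.choice(VOCAB) for _ in range(200)]

print(green_fraction(watermarked))    # 1.0: every token was drawn from its green list
print(green_fraction(unwatermarked))  # close to 0.5 for unmarked text
```

The sketch also makes the article's caveat visible: detection is statistical (a score near 0.5 versus near 1.0), and anyone generating text without the green-list bias simply produces an unmarked document that the detector cannot flag with certainty.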

A fractured society

Beyond that, the argument shows above all that AI is not the heart of the problem, which is first and foremost a human and social question. The consumption of false information is often motivated by opposition to established institutions and social bodies, perceived as having failed in their mission. The Covid crisis provided a recent illustration, with the rapid emergence of high-profile figures in direct and systematic opposition to the proposed measures, strongly supported by their followers on social media.

For many individuals, spreading and consuming misinformation is a way to question authority and oppose the status quo. By rallying those who share similar views, the spread of false information can also create a sense of belonging and solidarity within groups that oppose those in power. In this context, disinformation becomes a tool for building communities united by common values or goals, strengthening their cohesion and resilience in the face of established power structures. This dynamic leads to increased polarization and division within society; indeed, some purveyors of false information openly claim this as an objective.

The spread of disinformation is therefore favored in fractured societies, where social, political and economic divisions are pronounced (a phenomenon widely studied by Jérôme Fourquet; Ipsos also regularly conducts surveys on this theme).

In these contexts, individuals may be more likely to believe and spread conspiracy theories, rumors, and misinformation that align with their biases, fears, or frustrations. A fragmented society is characterized by a lack of mutual trust and increasing polarization, which creates fertile ground for the spread of disinformation. Social cohesion and mutual trust play a crucial role in preventing the spread of disinformation and maintaining the democratic health of a society.

Finally, the human factor is important in the production of false information. Automated “bots” mass-producing text have almost zero influence on their own (except to drown real information in a mass of text). We often underestimate the human factor, which remains essential to produce content that has an impact, even for false information. The recent discovery of networks that are effective despite relatively rudimentary methods is proof of this.

Geopolitical issues

The problem of disinformation therefore goes far beyond the framework of generative AI or even that of a few isolated individuals. It is largely fueled by powerful organizations, often with quasi-state resources, that deploy significant means to propagate false information on a large scale (e.g. the Internet Research Agency based in St. Petersburg).

These organizations set up networks comprising websites, a strong presence on social networks and automated bots, but they also involve real individuals, paid or not, responsible for relaying this misleading information (the propagation network is thus as important as, if not more important than, the production of content itself). This disinformation strategy aims to influence public opinion, sow confusion and manipulate democratic processes, endangering trust in institutions and the credibility of elections.

To counter this phenomenon effectively, it is crucial to take technical, political and social measures to identify and counter large-scale orchestrated disinformation, and to raise public awareness of it. Online platforms in particular are being called upon to act.

The strategy of spreading false news pursues a double objective, which represents a double pitfall for established institutions. By disseminating erroneous information, one not only pollutes public debate by sowing confusion and blurring the lines of truth, but also feeds a general climate of distrust toward any form of authority and “official” information. The authorities in place, already heavily discredited and perceived as being in a position of weakness, struggle to react effectively to this proliferation of disinformation. Widespread doubt about their ability to act with transparency and impartiality reinforces the impression that their actions could be motivated by hidden interests. Existing institutions thus find themselves trapped in a vicious circle where their credibility is constantly called into question, making them all the more vulnerable to attacks orchestrated by those who seek to destabilize the established order.

The challenge is therefore to protect freedom of opinion and freedom of information while fighting the spread of false information that can harm democratic functioning. The line between these fundamental principles is often difficult to draw, and authorities must juggle these complex issues. In cases considered egregious, measures have been taken to counter attempts to manipulate public opinion and destabilize democratic processes. Television channels such as RT, suspected of being under Russian influence, have been shut down. Political figures have been questioned over suspicions of corruption and foreign influence. Likewise, social media is closely monitored, and accounts or networks linked to foreign powers have been closed. These measures aim to protect the integrity of democratic processes and preserve public trust in institutions, while upholding the fundamental principles of freedom and pluralism. Finding the right balance between protection against disinformation and respect for individual freedoms nonetheless remains a constant challenge in democratic societies.

Author Bio: Thierry Poibeau is a CNRS Research Director (DR CNRS) at the École Normale Supérieure (ENS) – PSL