Will ChatGPT make us less gullible?

A few weeks ago, on November 30, 2022, the company OpenAI delivered a spectacular new artificial intelligence, ChatGPT, to the world. After DALL·E, which generates images from plain-language instructions, ChatGPT can almost perfectly mimic entire conversations, or answer complex questions by producing texts that seem to come straight from a human brain.

This new advance inevitably raises concerns: economic ones (notably the possible destruction of certain jobs), ethical ones (for example, the risk that language models like ChatGPT will reproduce racist discourse), and "epistemic" ones, since this type of AI does not, to date, distinguish reliable information from dubious information (the term "epistemic" refers to the production or acquisition of knowledge and reliable information).

However, there are reasons to think that the democratization of ChatGPT and its kind could be good news, at least for our relationship to information.

Epistemic threats

“Artificial intelligence can be an epistemic danger because it can generate compelling but false information. It could challenge our understanding of the world or even endanger the validity of our knowledge. This has raised concerns about the possibility of using AI to spread misinformation or manipulate people’s beliefs.”
It isn’t me saying it, it’s… ChatGPT itself! The preceding paragraph was generated by this AI in response to the question: “In what way is artificial intelligence an epistemic danger?” As this example shows, the answers can be very convincing. And yet perfectly inane. Sometimes the inanity is obvious; sometimes it is harder to flush out.

In this case, while there isn’t much to say about the first sentence, the second is an empty cliché: what exactly does it mean to “challenge our understanding of the world” or “endanger the validity of our knowledge”? The third sentence is simply nonsense: these AIs do not spread anything themselves, and are perhaps not the best suited to “manipulate” people (because we have little control over what they produce).

But that is the problem: you have to stop and think to see through the deception.

Bullshit generator

What you have to understand is that ChatGPT is not programmed to answer questions, but to produce believable texts.

Technically, ChatGPT is what is called a “language model”. A language model is an algorithm, based on technologies developed over recent decades (neural networks, deep learning, etc.), capable of computing the probability of a sequence of words from the analysis of a corpus of pre-existing texts. The larger the quantity of text it has been able to “read”, the better it performs; in ChatGPT’s case, that quantity is absolutely phenomenal.

Thus, given a sequence of words, ChatGPT can determine the most probable sequence of words to continue it. It can therefore “answer” a question in a necessarily credible way, since it computes the most probable answer. But there is no logic or thought behind that answer; there is nothing more than a calculation of probabilities. ChatGPT does not care in the least whether its answers are true. In other words, it is a “bullshit” generator.
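To make this concrete, here is a minimal sketch of the idea in Python. It is a toy bigram model, nothing like the enormous neural network behind ChatGPT, and the corpus and function names are invented for illustration: it counts which word most often follows which in a small corpus, then completes a prompt with the statistically likeliest continuation, whether or not that continuation is true.

```python
from collections import Counter, defaultdict

# Toy corpus: the "pre-existing texts" the model has "read".
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_probable_continuation(prompt, length=4):
    """Extend the prompt word by word, always picking the likeliest next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = following[words[-1]]
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The "answer" is whatever is statistically most frequent in the corpus,
# regardless of whether it is true.
print(most_probable_continuation("the moon is"))
# -> "the moon is made of cheese ."
```

Fed a corpus in which “cheese” follows “made of” more often than “rock”, the sketch confidently completes “the moon is” with “made of cheese”: probability, not truth, is the only thing driving the output.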

For some years now, “bullshit” has no longer been only an Anglo-American interjection, translatable into French as “foutaise” or “fumisterie”, but also a philosophical concept, since the philosopher Harry Frankfurt made it the subject of an article and then a book in the 2000s.

Today, very serious researchers in psychology, philosophy, neuroscience and management science take an interest in bullshit. The concept has grown more complex, but we can keep to its original definition here: bullshit is indifference to truth. It is not lying: the liar is preoccupied with the truth, the better to disguise it. The bullshitter, by contrast, ignores it and seeks only to captivate; what he says may sometimes be right, sometimes not, and it does not matter.

This is exactly the case with the very talented ChatGPT: when it gets things wrong, it does not show, or not immediately. A super bullshit generator, accessible to everyone and very easy to use? There is plenty to worry about. One can easily imagine how unscrupulous content publishers could use this tool to churn out “information”, especially since ChatGPT seems able to deceive even academic experts on their own subjects.

Epistemic vices and virtues

What is at stake is a certain intellectual ethics. Contrary to widespread opinion, the production or acquisition of knowledge (scientific or not) is not just a matter of method; it is also a moral matter. Philosophers speak of “intellectual” (or “epistemic”) vices and virtues, which can be defined as character traits that hinder or, on the contrary, facilitate the acquisition and production of reliable information.

Open-mindedness is an example of an epistemic virtue; dogmatism, an example of a vice. These notions have been the subject of an increasingly abundant philosophical literature since the early 1990s: virtue epistemology. Initially rather technical, since the aim was to define knowledge correctly, this work today also addresses the epistemic problems of our time: disinformation, fake news, bullshit in particular, and of course the dangers raised by artificial intelligence.

Until recently, virtue epistemologists discussing the epistemic consequences of AI focused mainly on “deepfakes”, those videos generated entirely by AIs of the DALL·E type, which can depict very real individuals in scandalous situations that are perfectly imaginary but strikingly realistic. The lessons drawn from these reflections on deepfakes are useful for thinking about the possible effects of ChatGPT, and perhaps for tempering an undoubtedly excessive pessimism.

The production of deepfakes is obviously a problem, but it is possible that their spread could give rise among the public to a form of generalized skepticism toward images, a kind of “intellectual cynicism”. The author who formulated this proposal (in 2022) sees it as an epistemic flaw, because it would lead to doubting dubious information and well-founded information alike. I am not sure that such cynicism would be so vicious: it would amount to returning to a time, not so long ago, when images did not occupy such a large place in the acquisition of information. It does not seem to me that this era (before the 1930s) was particularly vicious epistemically.

Be that as it may, this cynicism could in turn encourage the development of an epistemic virtue: a certain “digital sensitivity” that would make it possible to separate the wheat from the chaff in the mass of images and videos circulating on the Internet.

Such digital sensitivity could also be stimulated by ChatGPT. Readers of this AI’s output, scalded by the torrent of “bullshit” it risks unleashing, could redouble their attention when reading a text online, just as they might redouble their attention when faced with an image (for fear of being deceived by a deepfake), without falling into a form of generalized skepticism.

A good could thus be born from an evil. More generally still, the rise of these AIs could bring to the fore the need to cultivate epistemic virtues and to combat vices, such as the all-too-common disposition not to question conspiracy theories circulating on social networks. Ultimately, these disturbing technologies could be good news for intellectual ethics.

Author Bio: Erwan Lamy is Associate Professor at ESCP Business School
