
When you ask ChatGPT, Claude, or any other “chat” a question on any subject, it responds as if it were an omniscient interlocutor. Yet this language is produced statistically: by aggregating immense amounts of data, large language models (LLMs) integrate a multiplicity of contexts, which allows them to respond appropriately, and differently each time, depending on the context of the utterance. This specificity introduces a new dimension: if the machine speaks like a human, even though language was perceived by a number of philosophers, foremost among them René Descartes, as the index of thought, and therefore of the recognition of “humanity” in the other, how can we distinguish the human from the machine?
This question, at the heart of the Turing test, might seem rhetorical; yet numerous practices attest to this confusion, as generative artificial intelligence (AI) is sometimes used as an assistant, as a friend, and can even function as a psychologist. Even in the way we ask it questions, we establish a dialogue with it and are thus subject to a very natural anthropomorphic projection as soon as the other, machine or human, responds to us. The way we address our “chat” bears witness to this: we sometimes speak to it politely, often using the familiar second-person singular.
How then can we rethink language if it is no longer an indicator of conscious thought? And how can we distinguish human language from machine language? In its structure, its syntax, its coherence, machine language is identical to ours.
However, the fact that the texts produced by generative AI will soon be mostly derived not from texts written by humans, but from other texts generated by AI, poses an initial problem of referentiality.
A statistical production disconnected from the truth
As we have known since the work of the linguist Roman Jakobson, language has several functions (to inform, to connect, to create bonds, beauty, etc.). The referential function is what links language to reality and makes it the locus of truth, in the sense of a correspondence between a statement and the reality it describes. This is the famous definition of Thomas Aquinas (c. 1225-1274): “Veritas est adaequatio rei et intellectus” (“Truth is the correspondence of the thing and the intellect”). Thus, “only statements can be true or false. Things, on the other hand, even if, through a misuse of language, they are sometimes called ‘true’ or ‘false,’ are real or unreal, authentic or artificial. But they cannot be ‘true,’” as we can read in the Encyclopædia Universalis article on truth in the general sense.
Thus, the statement “it’s a nice day” makes sense if it is a nice day, and it is supposed to give information about the weather, for multiple purposes (organizing one’s day, choosing whether or not to take one’s bike, etc.). What is the point of saying “it’s a nice day” if not to communicate this information, or to create a bond with another simply by the fact that I am addressing them (this is called the phatic function of language)?
Certainly, writing will mediate the very idea of communication, but it remains the vehicle of knowledge, information, and a relationship between the one who reads, the one who writes, and that which the writing is about.
Yet here language is freeing itself from its referential and phatic functions.
The production of language becomes autonomous from reality
The utterance produced by generative artificial intelligence no longer points to an external source, and this is structurally true, since it operates statistically on digital corpora, taking context into account. Mediation risks becoming exponential if AI-generated texts end up replacing human-generated ones. Generative AI algorithmically produces an utterance from itself that, by definition, has no communicative intention. It is the product of a calculation.
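To make concrete what “the product of a calculation” means here, consider a toy sketch in Python, entirely invented for illustration: a minimal bigram model that generates a fluent-looking sentence purely from word-frequency statistics over its tiny corpus, without ever consulting the world it seems to describe. Real LLMs work on the same principle, at vastly greater scale and with far richer context.

```python
import random

# Invented mini-corpus standing in for the model's "training data."
corpus = "it is a nice day it is a cold day it is a nice morning".split()

# Count which word follows which (a bigram model, the simplest case).
follow_counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts.setdefault(prev, {}).setdefault(nxt, 0)
    follow_counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follow_counts[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

random.seed(0)
sentence = ["it"]
for _ in range(4):
    sentence.append(next_word(sentence[-1]))

# A grammatical utterance such as "it is a nice day", produced without
# the model ever looking up at the sky.
print(" ".join(sentence))
```

The point of the sketch is that every word is chosen by arithmetic over counts; nothing in the procedure refers to the weather, only to previous words.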
What lesson can we draw from this? That the very structure of language production becomes autonomous from reality: we cannot blame the machine for not looking up at the sky to confirm that the weather is nice.
Thus, the very condition of truth is eliminated. In “Truth and Politics,” the philosopher Hannah Arendt distinguishes between “factual truth” and “rational truth,” referring the latter to scientific truth and the former to “what has taken place,” in other words, a minimal reality, the condition of the common good. It is precisely this truth that totalitarian ideologies have called into question, substituting for reality a more or less coherent system of ideas or beliefs. But mass democracies are not exempt: for Arendt, advertising also offers a substitute for reality.
Today, ideology is no longer necessary to replace our relationship with the world with a discourse detached from it. It is the very condition of enunciation that renders the category of “factual truth” obsolete, since generative artificial intelligence, in its very functioning, does not refer to reality to produce language, even if a secondary link remains, as the statistical production of LLMs originates from statements produced outside of them. The separation is complete between producing language (which is nevertheless supposed to be the locus of truth) and the reality to which language refers.
Thus, the post-truth in which we now live is structurally consolidated: it is not simply a matter of indifference to truth, but rather a production of content detached from, or independent of, the very possibility of truth or falsehood, even if a large number of the texts that feed LLMs still originate from humans. Ideology does not reside in what is said, produced, or written; it lies in the emancipation of language production from reality and from the very idea of referentiality. Generative artificial intelligence did not invent post-truth, but through its operation, it consolidates its structure.
A language formatted in advance by private companies?
Added to this is the fact that this production is the monopoly of private companies. We live in a capitalist world, as everyone knows, whose principle is that the means of production are concentrated in the hands of a few. This is what Marx called the infrastructure, with the superstructure designating all other spheres: politics, culture. Now, today, the infrastructure produces language. And language underlies all superstructures: as the linguist Victor Klemperer says, it is at once the most public and the most secret means of propaganda. Public, because we cannot live in society without language; secret, because we are unaware of the extent to which language is permeated by norms that shape us more than we shape them, and which we in turn perpetuate through our speech.
“Each era corresponds to well-defined techniques of reproduction,” wrote Walter Benjamin. Technique influences the use of language: in the 19th century, the mass press transformed ways of writing, giving rise to a new literary genre, the novel, which Benjamin, in The Storyteller, contrasts with traditional narrative, but also to the proliferation of a sensationalist press, interested in miscellaneous news items and offering an attractive “narrative.”
Language would therefore be increasingly subservient to its technical means of production. Certainly, one can consider that it was already so in its public use, but this is now also the case for intimate, professional, and friendly uses, in other words, for almost all uses, including when we have no need for technology to communicate: we sometimes correspond by email when we share the same office, attend meetings by videoconference when a few meters separate us, communicate via Instagram sitting side by side… This has decisive consequences, particularly on politics, and more specifically on democracy, whose primary material is precisely language and the various rights associated with it.
“To be political,” wrote Hannah Arendt, “to live in a polis, meant that all things were decided by words and persuasion, not by force or violence.” She added that it was by appearing before everyone that words became political. A space was therefore necessary for them to be heard, a “public” space for “political speech.”
But what is a public space and what is political speech, when language emancipates itself in its production, both from reality and from the subject of enunciation? By being an autonomous producer of language without referentiality, generative AI technically fulfills the fantasy of an enunciation without a subject.
The fact that language is no longer what distinguishes machines from humans has both political and metaphysical consequences. Our relationship to reality is being transformed by invisible mediations that privatize language: it no longer allows us to recognize an “other” in the sender or receiver. Yet, language only has meaning when addressed to another human being. For the machine, it is asemantic. Saving the meaning of language means saving the very idea of the subject. Only then will it retain its emancipatory power.
Author Bio: Mazarine Pingeot is Professor of Philosophy at Sciences Po Bordeaux
Mazarine M. Pingeot is the author of Inappropriable. What AI Does to Humans (Flammarion, February 2026)