
Artificial intelligence (AI) is made up of data, chips, and code, but also of the stories and metaphors we use to represent it. Stories matter. The imagery surrounding a technology shapes how the public understands it and, therefore, guides its use, design, and social impact.
That’s why it’s worrying that, according to most studies, the dominant representation of AI has little to do with its reality. The ubiquitous images of humanoid robots and the anthropomorphic framing of chatbots as “assistants” and artificial brains are appealing from a commercial or journalistic standpoint, but they rest on myths that distort the essence, capabilities, and limitations of current AI models.
If the way we portray AI is misleading, how will we truly understand this technology? And if we don’t understand it, how can we use it, regulate it, or align it with our interests?
The myth of autonomous technology
The distorted representation of AI is part of a widespread confusion that the theorist Langdon Winner dubbed “autonomous technology” back in 1977: the idea that machines have taken on a kind of life of their own and act upon society in a deterministic and frequently destructive way.
AI now offers the perfect embodiment of that vision, because it flirts with the myth of the creation of an intelligent and autonomous being… and the punishment that comes with usurping that divine function, an age-old narrative pattern that runs from Frankenstein to Terminator, from Prometheus to Ex Machina.
The myth of autonomous technology is already hinted at in the ambitious term “artificial intelligence,” coined by computer scientist John McCarthy in 1955. The term proved a success despite causing numerous misunderstandings, or perhaps because of them.
As Kate Crawford points out in her book Atlas of AI: “AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications.”
Most of the problems with the dominant AI narrative stem from this tendency to represent it as an independent, almost alien, incomprehensible entity, already beyond our control and our decisions.
Metaphors that confuse us
The language used by many media outlets, institutions, and even experts to talk about AI is riddled with anthropomorphism and animism: images of robots and brains, invariably false stories about machines rebelling or acting inexplicably, debates about their supposed consciousness, and a pervasive sense of urgency and inevitability.
This vision culminates in the narrative that has driven the development of AI since its inception: the promise of artificial general intelligence (AGI), a supposed human-level or superhuman intelligence that will change the world or even the species. Companies like Microsoft and OpenAI, and technology leaders like Elon Musk, have consistently predicted AGI as an imminent milestone.
However, the truth is that the path to that technology is unclear, and there is not even a consensus on whether it will ever be possible to develop it.
Narrative, power, and bubble
The problem isn’t merely theoretical. The deterministic and animistic view of AI constructs a predetermined future. The myth of autonomous technology serves to inflate expectations about AI and divert attention from the real challenges it poses, thus hindering a more informed and pluralistic public debate about the technology. In a landmark report, the AI Now Institute refers to the AGI promise as “the argument to end all arguments,” a way of avoiding any questioning of the technology.
In addition to a mix of exaggerated expectations and fears, these narratives are also responsible for inflating the potential AI economic bubble that various reports and technology leaders have warned about. If this bubble exists and eventually bursts, it’s worth remembering that it was fueled not only by technical achievements but also by a portrayal that was as impactful as it was misleading.
A narrative shift
Fixing the broken narrative of AI requires foregrounding its cultural, social, and political dimensions. That is, leaving behind the dualistic myth of autonomous technology and adopting a relational perspective that understands AI as the product of an encounter between technology and people.
In practice, this narrative shift involves moving the focus of representation in several ways: from the technology to the humans who guide it, from a techno-utopian future to a present under construction, from apocalyptic visions to present-day risks, and from an AI presented as unique and inevitable to an emphasis on people’s autonomy, choice, and diversity.
Various strategies can drive these shifts. In my book Technohumanism: Towards a Narrative and Aesthetic Design of Artificial Intelligence, I propose a series of stylistic recommendations for escaping the narrative of autonomous AI: for example, avoid using AI as the subject of a sentence when its role is that of a tool, and refrain from attributing anthropomorphic verbs to it.
Playing with the term “AI” also helps us see the extent to which words can change our perception of technology. What happens when we replace it in a sentence, for example, with “complex task processing,” one of the less ambitious but more accurate names considered to describe the discipline in its early days?
Key debates about AI, from its regulation to its impact on education and employment, will continue to navigate murky waters until we correct the way we represent it. Crafting a narrative that makes the socio-technical reality of AI visible is an urgent ethical challenge, one that will benefit both technology and society.
Author Bio: Pablo Sanguinetti is Professor of AI and Critical Thinking at IE University