Artificial Intelligence: Beyond the Hype, a Technological Bubble Ready to Burst?


Artificial intelligence (AI) is often presented as the next revolution that will change our lives. Since the launch of ChatGPT in 2022, generative AI has sparked real excitement worldwide. In 2023, Nvidia, a key player in the manufacture of the chips used to train AI models, exceeded $1 trillion in market valuation. In France, a €900 million investment plan was launched, accompanied by significant fundraising by companies such as Mistral AI (€105 million) and Hugging Face ($235 million).

Yet this enthusiasm is accompanied by doubts. While AI is in the media spotlight, its concrete economic impact remains modest and its adoption by companies limited. A recent study estimates that barely 5% of companies actively use AI technologies in their processes, whether generative AI, predictive analytics, or automation systems. In some cases, AI is even criticized for distracting executives from more pressing operational issues.

This gap between expectations and concrete results raises the question: is AI simply going through a “hype cycle,” where excessive enthusiasm is quickly followed by disillusionment, as we have seen with other technologies since the 1990s? Or are we witnessing a real decline in interest in this technology?

From the origins of AI to ChatGPT: waves of optimism and questions

The history of AI is marked by cycles of optimism and skepticism. As early as the 1950s, researchers imagined a future populated by machines capable of thinking and solving problems as efficiently as humans. This enthusiasm led to ambitious promises, such as the creation of systems capable of automatically translating any language or perfectly understanding human speech. However, these expectations proved unrealistic given the limitations of the technologies of the time. These first disappointments led to the “AI winters” of the late 1970s and then again of the late 1980s, periods when funding dried up as the technology failed to live up to its stated promises.

However, the 1990s marked a major turning point thanks to three key elements: the explosion of big data, the increase in computing power, and the emergence of more powerful algorithms. The Internet facilitated the massive collection of data, essential for training machine learning models. These vast datasets are crucial because they provide the examples an AI needs to “learn” and perform complex tasks. At the same time, advances in processors made it possible to run advanced algorithms, such as the deep neural networks at the heart of deep learning. These networks made it possible to develop AIs capable of previously inaccessible tasks, such as image recognition and automatic text generation.
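To make the idea of “learning from examples” concrete, here is a minimal, hypothetical sketch using the scikit-learn library: a small neural network is fitted to synthetic data, then evaluated on examples it has never seen. The dataset and model settings are illustrative assumptions, not taken from any of the systems mentioned above.

```python
# Minimal sketch of "learning from examples" with a small neural network.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic dataset standing in for the "big data" collected online.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small deep-learning-style model: two hidden layers of neurons.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # "learning" = fitting weights to examples

print(f"Accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```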

These increased capabilities rekindled hopes of the revolution anticipated by the field’s pioneers: AI that is ubiquitous and effective across a multitude of tasks. Yet they come with major challenges and risks that are beginning to temper the enthusiasm surrounding AI.

A gradual realization of the technical limits that today weigh on the future of AI

Recently, stakeholders attentive to the development of AI have become aware of the limits of current systems, which can slow down their adoption and limit the expected results.

First, deep learning models are often referred to as “black boxes” due to their complexity, making their decisions difficult to explain. This opacity can decrease user trust, limiting adoption due to fear of ethical and legal risks.
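A hypothetical sketch can make this contrast concrete: a simple linear model exposes one readable coefficient per input, while even a modest neural network spreads its decision across thousands of interacting weights. The models and data below are illustrative assumptions only.

```python
# Why deep models are "black boxes" compared with a simple linear model
# whose decision can be read directly off its coefficients.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

linear = LogisticRegression().fit(X, y)
# One coefficient per input feature: the decision is directly explainable.
print("Linear model coefficients:", linear.coef_.round(2))

deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)
# Thousands of interacting weights: no single number explains a decision.
n_weights = sum(w.size for w in deep.coefs_)
print("Neural network weight count:", n_weights)
```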

Algorithmic bias is another major issue. Current AIs are trained on huge volumes of data that are rarely free of bias, and they reproduce these biases in their results, as was the case with Amazon’s recruitment algorithm, which systematically discriminated against women. Several companies have had to backtrack because of bias detected in their systems. For example, Microsoft removed its chatbot Tay after it generated hateful remarks, while Google suspended its facial recognition tool that was less effective for people of color.
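One way organizations try to catch such problems before deployment is a simple fairness audit. The sketch below, built on entirely fictitious decisions, applies the “four-fifths” (80%) rule used in some US hiring guidelines: if one group’s selection rate falls below 80% of another’s, the model deserves scrutiny.

```python
# Illustrative bias audit on hypothetical model decisions.
hypothetical_decisions = [
    # (group, selected_by_model)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", False),
]

def selection_rate(group):
    outcomes = [sel for g, sel in hypothetical_decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection rates: A={rate_a:.0%}, B={rate_b:.0%}, ratio={ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of disparate impact
    print("Potential disparate impact: investigate the model and its data.")
```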

These risks make some companies reluctant to adopt these systems, for fear of damaging their reputation.

The ecological footprint of AI is also a concern. Advanced models require a lot of computing power and generate massive energy consumption. For example, training large models like GPT-3 would emit as much CO₂ as five round trips between New York and San Francisco. In the context of the fight against climate change, this calls into question the relevance of a large-scale deployment of these technologies.
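Such figures typically come from back-of-envelope arithmetic rather than direct measurement. The sketch below shows the usual structure of these estimates; every number in it is a hypothetical placeholder, not a measurement of GPT-3 or any real model.

```python
# Back-of-envelope estimate of training emissions (all figures are
# hypothetical placeholders, not measurements of any real model).
n_gpus = 1000              # accelerators used for training
gpu_power_kw = 0.4         # average draw per GPU, in kilowatts
training_hours = 24 * 30   # one month of continuous training
pue = 1.5                  # datacenter overhead (cooling, networking, ...)
grid_kg_co2_per_kwh = 0.4  # carbon intensity of the electricity mix

energy_kwh = n_gpus * gpu_power_kw * training_hours * pue
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000
print(f"Energy: {energy_kwh:,.0f} kWh -> ~{emissions_tonnes:,.0f} t CO2")
```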

Overall, these limitations explain why some initial expectations, such as the promise of widespread and reliable automation, have not been fully realized, and why real-world obstacles may cool the enthusiasm for AI.

Towards a measured and regulated adoption of AI?

AI, already well integrated into our daily lives, seems too entrenched to disappear, making an “AI winter” like those of the 1970s and 1980s unlikely. Rather than a lasting decline of the technology, some observers instead speak of the emergence of a bubble. Announcement effects, amplified by the repeated use of the term “revolution,” have contributed to an often disproportionate excitement and to the formation of a certain bubble. Ten years ago, it was machine learning; today, it is generative AI. Different concepts have been popularized in turn, each promising a new technological revolution.

[Figure: Google Trends.]

Yet modern AI is far from a “revolution”: it sits in the continuity of decades of past research, which has progressively produced more sophisticated, efficient, and useful models.

However, this sophistication comes at a practical cost, far removed from the flashy announcements. The complexity of AI models partly explains why so many companies find AI adoption difficult. Often large and hard to master, AI models require dedicated infrastructure and rare expertise, both of which are particularly expensive. Deploying AI systems can therefore cost more than it returns, both financially and in energy terms. For example, it is estimated that a system like ChatGPT costs up to $700,000 per day to operate, due to the immense computing and energy resources required.
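Published figures of this kind are usually reconstructed from assumptions about the fleet of hardware kept running to serve users. The sketch below shows the shape of such an estimate; all numbers are hypothetical placeholders, not OpenAI data.

```python
# Back-of-envelope sketch of how such operating-cost figures are built
# (every number below is a hypothetical placeholder, not OpenAI data).
serving_gpus = 29_000      # accelerators kept running to answer queries
gpu_hour_cost_usd = 1.00   # amortized hardware + energy per GPU-hour
queries_per_day = 200_000_000

daily_cost = serving_gpus * 24 * gpu_hour_cost_usd
print(f"Daily cost: ~${daily_cost:,.0f}")
print(f"Cost per query: ~${daily_cost / queries_per_day:.4f}")
```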

Added to this is the regulatory issue. Principles such as minimizing the collection of personal data, required by the GDPR, run counter to the very essence of current AIs. The AI Act, in force since August 2024, could also call into question the development of these sophisticated systems. It has been shown that models such as OpenAI’s GPT-4 or Google’s PaLM 2 do not meet 12 key requirements of the act. This non-compliance could call current methods of developing AIs into question, affecting their deployment.

All of these factors could lead to the bursting of this AI bubble, prompting us to reconsider the exaggerated portrayal of its potential in the media. A more nuanced approach is therefore needed, reorienting the discourse towards more realistic and concrete perspectives that acknowledge the limits of these technologies.

This awareness should also guide us towards a more measured development of AI, with systems better adapted to our needs and less risky for society.

Author Bio: Kathleen Desveaud holds a PhD in Management Science and is Professor of Marketing at Kedge Business School.
