ChatGPT, an AI that speaks very well… but to what end?

ChatGPT has taken center stage since its release on November 30, 2022, thanks to its impressive capabilities, in particular its ability to chat and answer questions, even complex ones, in a natural and convincing way.

Now that we have a little more perspective on the tool, questions arise: what are the current and future limits of ChatGPT, and what are the potential markets for this type of system?

ChatGPT, a “Google killer”? Not necessarily…

ChatGPT is often described as a future competitor of Google, even as a “Google killer” for the search engine part of its business: even though the tool sometimes produces bizarre or downright false answers, it responds directly rather than simply returning an ordered list of documents, as Google’s search engine does.

There is certainly a potentially serious threat to Google, which could see its virtual monopoly on search engines challenged. Microsoft in particular (OpenAI’s main investor, with privileged access to the technology as a result) is working to integrate ChatGPT into its Bing search engine, in the hope of gaining an edge over Google.

However, several uncertainties hang over such a prospect. Search engine queries are usually made up of a few words, or even a single word, such as the name of an event or a public figure. ChatGPT is currently arousing the curiosity of a technophile population, but that is very different from the traditional, mainstream use of a search engine.

We can also imagine ChatGPT being accessible through a voice interface, which would avoid having to type the query. But systems like Amazon’s Alexa have struggled to establish themselves and remain confined to specific, limited uses (asking for movie times, the weather, and so on). Ten years ago, Alexa was seen as the future of the American retail giant, but today it has been somewhat abandoned, because Amazon never managed to monetize the tool, that is, to make it economically profitable.

Can ChatGPT succeed where Alexa partly failed?

Other use cases?

Of course, the future of ChatGPT should not be reduced to information retrieval. There are a host of other situations in which text needs to be produced: standard letters, summaries, advertising copy, and so on.

ChatGPT is also a good writing aid. Various uses are already emerging: asking ChatGPT for a few opening paragraphs to spark inspiration and ward off the fear of the blank page; seeing which points the tool highlights on a particular question (to check whether they match what we would have said ourselves); asking it to suggest an outline on a given topic. ChatGPT is not a magic tool and cannot know what the user has in mind, so when it comes to writing a complex document, it can only be an aid.

More problematic uses can obviously be imagined, and many articles have already appeared in the press concerning, for example, the use of ChatGPT in education, with fears that may or may not be justified. We can thus imagine students producing their homework with ChatGPT, but also teachers using the tool to write their assessments, or researchers producing scientific articles semi-automatically. There are plenty of stories about students in the press, but they will not be the only ones making potentially problematic use of this kind of technology.

Of course, there are questions to be asked, but the technology is here and it is not going away. It therefore seems essential to talk about these issues, to train pupils and students in these tools, to explain their value and their limits, and to discuss the place they should have in education.

Finally, at the extreme end of the spectrum of problematic uses, there is obviously the production of fake news: false information that can then be disseminated on an industrial scale.

These dangers should not be exaggerated, but they are real. Even though detectors of text produced by ChatGPT are starting to appear, they will necessarily be imperfect, because the texts generated are too diverse and too realistic to be recognized with 100% reliability by any system… except, of course, by OpenAI itself!

The limits of ChatGPT: when ChatGPT “hallucinates”

The mass of interactions with ChatGPT since it was opened to the general public on November 30 has already revealed some of its limitations.

ChatGPT generally provides correct, often impressive answers… but if you ask it about areas it does not master, or invent a question that appears serious but is in fact absurd (for example, about facts or people that do not exist), the system produces an answer that seems just as serious but is in fact completely absurd or made up.

Author Bio: Thierry Poibeau is a CNRS Research Director (DR) at the École Normale Supérieure (ENS) – PSL