Recent months have seen rapid and unexpected advances in artificial intelligence (AI). We can create images at will with tools like Midjourney or DALL-E, or ask questions and chat with ChatGPT. These advances also pose unprecedented ethical, social and legal challenges.
Technical advances that radically change our way of life always bring unknowns and confusion. When the train began to replace the horse, well-founded concerns (such as the problems of breathing smoke) mixed with others that turned out to be unfounded (such as the fear that travelers would suffocate in tunnels). With time and perspective, the picture became clearer.
With AI, we are still in the confusion phase: we must weigh its risks against its benefits to craft regulation that guarantees responsible use without slowing down progress. Let's look at some of the most relevant points in this debate.
A delicate raw material: data
Behind tools like Midjourney or ChatGPT are algorithms that learn to perform tasks from large amounts of data. For Midjourney to create images from text, for example, billions of images with their descriptions had to be collected by downloading them from the Internet. This raises an intellectual property conflict: is it legal to use copyrighted content to train these systems?
Many artists think not: their works are being used to create other works, which jeopardizes their market. That is why some have taken legal action against those responsible for systems of this type.
But there is a technical argument to the contrary: when learning, these systems do not copy or store the works in memory. They only use them to improve their knowledge of how to perform the task. This is not so different from what a human artist does when allowing themselves to be influenced and inspired by the art they have seen.
It will be the courts of the United States that decide whether this constitutes "fair use" of the data. Meanwhile, Adobe is working on an alternative that does not use copyrighted images without the consent of their creators.
Europe, more rigorous
Another conflict, this time centered on Europe, is data protection. EU law generally does not allow anyone's personal information to be processed without their consent. This applies even to data that is publicly available on the Internet.
To learn how to converse, ChatGPT needed hundreds of billions of words obtained from the Net. These texts can include mentions of people, and no one has removed them or asked those people for their consent. The problem is that, in this case, doing so seems impossible given the sheer volume of data: an "Adobe solution" is not feasible.
Therefore, a strict interpretation of the European regulations seems totally incompatible with systems like ChatGPT. Hence, Italy has banned it.
The downside is that such a measure seriously damages a country's competitiveness. For example, tools of this type multiply the productivity of programmers. If a tech company wants to recruit, will it do so in a country where such tools are permitted or one where they are prohibited? The question answers itself.
European legislators thus face an uncomfortable situation: reconciling the protection of personal data with not missing the AI train relative to countries with laxer regulations, such as the Anglo-Saxon ones.
How do we use it?
Another key aspect of AI regulation is what it is used for. It must be remembered that an algorithm is not ethical or unethical in itself: it is a tool that someone uses for a purpose. For example, imagine a system that analyzes patient data and suggests a diagnosis. It can be very valuable as an aid to a doctor, who makes the final decision at their own discretion. The same technology, however, would be a danger if it made the final decision itself, replacing the doctor.
The EU is aware of this, and is preparing a regulation under the principle of “putting the person at the center”: AI, yes, but always under human supervision.
The problem is how to do it. Europe had a head start of a few years preparing certifications for the responsible use of AI, but the process was delayed when ChatGPT burst onto the scene: it must now account for a tool so versatile that anyone can use it for a multitude of purposes, ethical or not.
Could it turn against us?
We have discussed some of the legal and ethical challenges that AI poses today. But what will happen in the longer term? From a technical point of view, it is still unclear whether it will be possible to keep advancing at such a frenetic pace as in recent years. But if it were, the regulatory issues we have seen would be only the beginning.
Hence the open letter calling for a six-month pause on the development of new systems, signed at the end of March by hundreds of experts and public figures.
One much-discussed challenge in this regard is the automation of many jobs. But looking back at history, humanity has always created technologies to lighten the workload, and today we would not give any of them up. The key is how work and wealth are distributed: ideally, we would avoid both unnecessary jobs (as denounced by the anthropologist David Graeber) and the inequality that prevents part of the population from accessing a source of income.
Another concern for the future is what will happen if we develop conscious AI systems. Google recently fired an engineer for claiming that one of its conversational systems was already conscious. According to the philosopher of mind David Chalmers, this does not appear to be the case; among other things, because systems like ChatGPT lack memory and a stable personality.
But it could happen one day. If so, we would have to weigh the ethical implications of causing harm to a sentient being, facing dilemmas similar to those posed by cloning. We would also have to prevent AI from turning against us, one of the motivations behind the pause request.
In short, the latest advances in AI call for a broad debate on how to regulate their use. We must pay attention to the risks, but without forgetting that technological revolutions have always ended up improving our quality of life.
Author Bio: Carlos Gomez Rodriguez is Professor of Computer Science and Artificial Intelligence at the University of A Coruña