In Hyperion, Dan Simmons’ 1989 science fiction novel, the characters are connected to a network of artificial intelligences called the “datasphere”. It gives them instant access to any information, but at a cost: the ability to think for oneself is lost.
More than thirty years after the publication of Dan Simmons’ work, the ever-increasing impact of artificial intelligence (AI) on our cognitive abilities can be considered in similar terms. To reduce these risks, I propose here a solution that defends both the progress enabled by AI and the need to protect our cognitive abilities.
AI offers many benefits. These include opportunities to advance social justice, combat racism, improve the detection of certain cancers, mitigate the consequences of the climate crisis and boost productivity.
But the darker aspects of AI are also much discussed and taken into account in its development, notably its racial biases, its tendency to reinforce socio-economic inequalities and its capacity to manipulate our emotions and behaviors.
Towards the first Western regulations of AI
Despite these ever-increasing risks, there are still no national or international rules regulating AI. This is why the European Commission’s proposal to establish regulation of the uses of AI is so important.
The Commission’s proposal, the latest version of which was amended and approved by two ad hoc committees of the European Parliament in early June 2023, classifies the risks inherent in any use of this technology into three categories: “unacceptable”, “high” and “other”.
Uses of AI that fall into the first category are prohibited. They include:
- Cognitive-behavioral manipulation of vulnerable people or groups that causes, or could cause, bodily or cognitive harm.
- Exploiting the vulnerabilities of a specific group of people in order to modify their behavior and cause harm.
- Social scoring: classifying individuals according to their conduct and socio-economic status.
- Real-time, remote biometric identification systems, except in special cases (for example, in the event of a terrorist attack).
In this European legislation on AI, the notions of “unacceptable” risk and harm are closely linked. This is an important step, and it suggests the need to protect certain activities and demarcated physical spaces from AI interference. My colleague Caitlin Mulholland and I have shown how our fundamental rights, and in particular our right to privacy, depend on stronger regulation of AI and facial recognition applications.
This makes particular sense with regard to the use of AI in automated judicial decisions and border control. Debates around ChatGPT have also raised concerns about its impact on our intellectual abilities.
Sanctuaries without AI
All of these cases raise questions about the deployment of AI in areas where our fundamental rights, our privacy and our cognitive abilities are at stake. They also point out the need to create spaces where strong regulation of activities linked to AI applies.
These spaces can be defined by borrowing an ancient term: sanctuaries. In her book “The Age of Surveillance Capitalism”, Shoshana Zuboff defines the right to sanctuary as a remedy for the excesses of all power. Sacred places, such as temples, churches and monasteries, allowed persecuted communities to find refuge. Today, in order to resist digital surveillance, Zuboff updates and reinterprets this right to sanctuary through strong regulation of digital activities, so that we can still benefit from the “space of an inviolable refuge”.
The notion of “sanctuaries without AI” does not imply an outright ban on AI but real regulation of the applications derived from this technology. In the case of European Union legislation on AI, this would mean putting in place a more precise definition of the notion of harm. For the moment, there is no clear and unambiguous definition of harm, either in the European AI legislation or shared among Member States. As Suzanne Vergnolle suggests, one solution would be to establish criteria common to all Member States in order to identify the types of harm resulting from manipulative practices linked to certain AI applications. Harms based on racial profiling and socio-economic status should also be considered.
The establishment of AI-free sanctuaries also means much firmer regulation aimed at protecting us from cognitive and mental damage resulting from potential uses of AI. A starting point would be to establish a new generation of rights – “neuro-rights” – which would protect our cognitive freedom with regard to the development of neurotechnologies. Roberto Andorno and Marcello Ienca thus argue that the right to mental integrity, which is already protected by the European Court of Human Rights, should apply beyond cases of mental illness and protect us against intrusions by AI.
A manifesto for sanctuaries without AI
I would like to defend the right to “sanctuaries without AI”. It would include the following articles (which are of course provisional):
- The right to withdraw. In areas deemed sensitive, everyone has the right to withdraw from AI-based support for a period they are free to decide. This right implies no, or only moderate, interference from AI-based devices.
- No sanction. Withdrawing from an AI device will never result in economic or social disadvantage.
- The right to human decision. Every individual has the right to a final decision made by a human being.
- Vulnerable people and sensitive areas. Public authorities will establish, in collaboration with actors from civil society and industry, particularly sensitive areas (health, education) and groups of people, such as children, who should not be exposed, or only moderately exposed, to intrusive AI systems.
Sanctuaries without AI in the physical world
Until now, AI-free spaces have been implemented very unevenly in spatial terms. Some educational establishments in Europe and the United States have decided to exclude all screens from classrooms, following the principles of the low-tech/no-tech movement in education. Research indicates that the use of digital media in education is often unproductive and fosters dependency among the youngest students. At the same time, more and more under-resourced public schools tend to rely on screens and digital tools, which may worsen social inequalities.
Even outside protected spaces like classrooms, AI continues to expand. In the United States, between 2019 and 2021, a dozen municipalities approved laws banning the use of facial recognition in policing. Since 2022, however, many cities have reversed course in order to counter a rise in crime.
Facial recognition systems are also used during certain job interviews, even though they reinforce inequalities. Because these systems are trained on the data of candidates who previously passed the selection process, the AI tends to select candidates from privileged backgrounds and to exclude those from more diverse ones. Such applications should be banned.
And despite the upcoming EU legislation on AI, AI-based video systems will monitor spectators and crowds at the 2024 Paris Olympics. This automated video surveillance will first be tested during the Rugby World Cup.
AI-driven internet search engines should also be banned, since the technology is not yet mature. As Melissa Heikkilä points out in a 2023 MIT Technology Review article, “AI-generated text appears trustworthy and credentialed, discouraging users from verifying the information they receive.” There is also an element of exploitation, because “users are now testing this technology for free”.
Supporting progress while preserving our rights
The right to AI-free sanctuaries guarantees the technological development of AI while protecting our emotional and cognitive capacities. The possibility of choosing to withdraw from AI (to opt out) is crucial if we wish to preserve our ability to learn, to have experiences autonomously and to protect our moral judgment.
In Dan Simmons’ novel, one of the protagonists, a “cybrid” replica of the poet John Keats, is not connected to the datasphere and can therefore resist the threats of artificial intelligence. This detail is illustrative because it highlights the importance of debates surrounding the intrusion of AI into art, music, literature and culture. Indeed, beyond questions of intellectual property, these activities are closely linked to our imagination and creativity, capacities that also underpin our ability to resist and to think for ourselves.
Author Bio: Antonio Pele is Associate Professor, Law School at PUC-Rio University & Marie Curie Fellow at IRIS/EHESS Paris & MSCA Fellow at the Columbia Center for Contemporary Critical Thought (CCCCT) w/ the HuDig19 Project at Université Paris Nanterre – Université Paris Lumières