The influence and manipulation of citizens – or consumers, in a more economic register – is a practice as commonplace as it is ancient: advertising has existed for a very long time, public meetings and electoral rallies are designed to win over audiences, and the more traditional word-of-mouth within families and circles of friends leads people to consult the opinions of others before making a decision. Such influence is sometimes exerted unintentionally, by inducing behaviors below users’ threshold of awareness, and sometimes, of course, deliberately.
Digital technology, however, has brought about a genuine revolution. Artificial intelligence (AI) tools in particular make it possible to personalize manipulation practices, drawing on better knowledge of the person being manipulated and adapting faster to their behavioral profile. These digital techniques are also often deployed by powerful economic operators that use multiple channels to collect consumer data and wield unprecedented firepower (websites, social networks, search engines, chatbots, generative AI services that respond to prompts, etc.). The result is an asymmetry of means that undermines free will and exploits users’ low awareness of the risk of manipulation.
This is why the European Commission is drawing up a taxonomy of the systemic risks generated by general-purpose generative AI (systems that can produce content such as text, code, images, sound and video, for example ChatGPT, Copilot, Gemini, Grok and Sora). The latest version of this taxonomy (2024) draws on the analyses of world experts, notably Yoshua Bengio, winner of the 2018 Turing Award in computer science, but also John Hopfield, 2024 Nobel Prize laureate in physics, and Daron Acemoglu, 2024 Nobel Prize laureate in economics.
Systemic risks include, among others, the risks of large-scale malicious manipulation of humans by AI (electoral manipulation, attacks on fundamental rights, etc.), the risks of large-scale illegal discrimination by AI systems making high-stakes automated decisions, and the risks of loss of control of AI by humans.
Initiatives to regulate AI are multiplying
Many recent texts, both French and European, accordingly address this risk of manipulation in digital law. In this respect, consumer law is a particularly relevant observation point, since rules aimed at regulating the influence and manipulation of consumers online have multiplied, especially since 2019.
Examples include the prohibition of advertising targeting minors and the ban on “dark patterns” that steer Internet users toward choices that are not always in their interest (amplified recommendation of certain offers of products, services and content; the correlative invisibility of other offers; addictive techniques designed to capture consumers’ attention; goods or services placed in the shopping cart by default; obstacles to unsubscribing, etc.) in the Digital Services Act; the prohibition of unfair techniques for collecting and processing personal data in the European General Data Protection Regulation (GDPR); national rules governing influencers operating on online platforms; and the prohibition of certain subliminal and manipulative practices by artificial intelligence systems, as well as the regulation of deepfakes, in the European AI Act.
Based on this observation, and in view of the proliferation of texts reflecting what is at stake in preserving human free will, we conducted a collective research project entitled “Towards a neuro-ethical law?”, which aimed first to establish a synthesis, then to conduct a prospective analysis, of the digital techniques used to influence and manipulate consumers. It brought together specialists, both academics and practitioners, from different disciplines: law, computer science, neuroscience, sociology, psychology, economics and management.
Is this proliferation of rules a sign of an inability to grasp these techniques effectively?
It emerges from these discussions that the proliferation of texts appears to be a sign of an inability to grasp these techniques effectively, rather than a fair and proportionate response by the legal system to techniques that can seriously harm consumers.
The proliferation of regulations has been accompanied by a proliferation of national and European regulatory and supervisory authorities (Arcom, CNIL, DGCCRF, European Commission), which can undermine the effectiveness of the rules. In addition, the texts contain internal contradictions that further blur the framework they put in place.
For example, the AI Act prohibits outright subliminal practices (those operating below the threshold of consciousness) and deliberately manipulative or deceptive techniques aimed at altering the behavior of a person or group of people. Yet this principled prohibition contradicts the authorization, in principle, of deepfakes provided the consumer has been informed, set out in Article 50 of the same regulation. In practice, if a deepfake leads a consumer to make a decision without their being aware of having been influenced by it, it should be prohibited. But what happens when the deepfake is accompanied by a banner indicating that the image was generated by an AI? Should it then be considered authorized? Does the deepfake thereby lose all of its potential to deceive?
The fight against manipulative AI practices is currently based on uncertain notions
Moreover, each legal provision on combating manipulative AI practices, taken individually, contains its own share of uncertainties, which further weakens that fight.
Among these uncertainties, the first concerns the very definition of AI. An AI system is defined in the European AI Act as:
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (Article 3.1)
This definition, although aligned with the OECD definition, raises many questions, particularly as to the criterion that distinguishes AI from ordinary, conventional computer software.
At this stage, the uncertainties are such that the European Commission’s AI Office has launched a public consultation with a view to clarifying not only the definition of AI but also the conditions for characterising prohibited AI practices, including, precisely, the practice of AI manipulating humans below the threshold of consciousness, consisting of:
“the placing on the market, putting into service or use of an AI system that uses subliminal techniques, below the threshold of a person’s consciousness, or deliberately manipulative or deceptive techniques, with the aim or effect of substantially altering the behaviour of a person or group of people by significantly impairing their ability to make an informed decision, thereby leading the person to make a decision that they would not have made otherwise, in a way that causes or is reasonably likely to cause significant harm to that person, another person or group of people”.
These uncertainties and weaknesses in the text can be explained in particular by the fact that one of the criteria for the ban, “manipulation below the threshold of consciousness”, rests on a concept, “consciousness”, on which there is no scientific consensus, either among neuroscientists or among philosophers, and which is in fact at the heart of current debates in both disciplines.
Basing a ban backed by a heavy administrative fine on a conceptual criterion whose contours command no consensus raises numerous difficulties: not only with regard to a fundamental principle of law, the principle of the legality of offences and penalties, but also because such a criterion offers no real guarantee that the rule will effectively protect humans in general, and consumers in particular, against manipulative AI practices.
Towards a new European text regulating AI practices in the same way as other digital practices
It is for this reason that the European Commission announced, on October 3, 2024, that it intends to propose a new European regulation, the Digital Fairness Act, aimed at protecting consumers more effectively against unfair digital practices, whether or not those practices are based on AI.
The announcement of this future regulation “on digital fairness to combat unethical commercial techniques and practices related to dark patterns, marketing by influencers on social media, the addictive design of digital products and online profiling, in particular when consumers’ vulnerabilities are exploited for commercial purposes” thus highlights that the legal protection of consumers against manipulative AI practices is still in the early stages of its construction.
Author Bios: Sabine Bernheim-Desvaux is Professor of Law at the University of Angers and Juliette Senechal is Professor of Private Law at the University of Lille