In the winter of 2016, Google's Nest home automation division pushed a software update to its thermostats that drained their batteries. A large number of users were left with disconnected devices, though many were able to change the batteries, buy a new thermostat, or wait for Google to fix the problem. The company indicated that the failure had been caused by the artificial intelligence (AI) system that managed these updates.
What would have happened if the majority of the population had used one of those thermostats and the failure had left half the country exposed to the cold for days? A technical problem would have become a social emergency requiring state intervention. All because of a faulty artificial intelligence system.
No jurisdiction in the world has developed comprehensive, specific regulation for the problems generated by artificial intelligence. This does not mean there is a complete legislative vacuum: many of the harms that artificial intelligence can cause can already be addressed through existing legal avenues.
For example:
- For accidents caused by autonomous cars, insurers will remain the first recipients of claims.
- Companies that use artificial intelligence systems in their recruitment processes can be sued if those systems engage in discriminatory practices.
- Insurers whose artificial intelligence models, used to set prices and decide whom to insure, lead them into anti-consumer practices will still have to answer for those practices as companies.
In general, regulations that already exist (such as contract law, transport law, tort law, consumer law, even human rights protections) will adequately cover many of the regulatory needs of artificial intelligence.
Even so, this does not seem enough. There is a certain consensus that the use of these systems will generate problems that our legal systems cannot easily solve. From the diffusion of liability between developers and professional users to the scalability of damages, AI systems defy our legal logic.
For example, if an artificial intelligence finds illegal information on the deep web and makes investment decisions based on it, should the bank that manages the pension funds or the company that created the automated investment system be held accountable for those illegal investment practices?
If a Spanish regional government (an autonomous community) decides to introduce a co-payment for medical prescriptions managed by an artificial intelligence system, and that system makes small errors (a few cents on each prescription, say) that affect almost the entire population, who is responsible for the lack of initial control? The public administration? The contractor that installed the system?
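A minimal back-of-the-envelope sketch (all figures below are purely hypothetical, chosen only for illustration) shows why this scalability of damages defies ordinary litigation logic: an error too small for any one person to pursue becomes a large aggregate harm.

```python
# Purely hypothetical figures: a tiny per-prescription billing error,
# applied at population scale, adds up to a substantial aggregate harm.

error_per_prescription = 0.04   # euros overcharged per prescription (assumed)
prescriptions_per_person = 18   # prescriptions per person per year (assumed)
population = 40_000_000         # people affected (assumed)

individual_harm = error_per_prescription * prescriptions_per_person
aggregate_harm = individual_harm * population

print(f"Harm per person per year: {individual_harm:.2f} EUR")  # 0.72 EUR
print(f"Aggregate harm per year: {aggregate_harm:,.0f} EUR")   # 28,800,000 EUR
```

No individual would sue over 72 cents a year, yet the system as a whole would have misallocated tens of millions of euros, with no obvious answer as to who must detect, prove, and repair the damage.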
Towards a European (and global) regulatory system
Since the presentation in April 2021 of the European Union's proposed regulation on artificial intelligence, the so-called AI Act, the slow legislative process has been under way that should give the entire European Economic Area, and perhaps Switzerland, a regulatory system by 2025. The first steps are already visible, with state agencies that will exercise part of the control over these systems.
But what about outside the European Union? Who else wants to regulate artificial intelligence?
On these issues we tend to look to the United States, China and Japan, and we often assume that legislation is a matter of degree: more or less environmental protection, more or less consumer protection. In the context of artificial intelligence, however, it is striking how much legislators' visions differ.
USA
In the United States, the fundamental legislation on AI is a norm of limited substantive content, more concerned with cybersecurity, which instead relies on indirect regulatory techniques, such as the creation of standards. The underlying idea is that the standards developed to control the risks of artificial intelligence systems will be voluntarily adopted by companies and become de facto standards.
In order to maintain some control over those standards, instead of leaving them to the discretion of the organizations that usually develop technical standards (and are controlled by the companies themselves), the risk-control standards for AI systems are being developed by a federal agency, NIST.
The United States is thus immersed in a standards-creation process open to industry, consumers and users. This is now accompanied by a White House draft for an AI Bill of Rights, also voluntary. At the same time, many states are trying to develop legislation for specific contexts, such as the use of artificial intelligence in recruitment processes.
China
China has developed a complex plan not only to lead the development of artificial intelligence, but also its regulation.
To do this, they combine:
- Regulatory experimentation (certain provinces may develop their own standards to, for example, facilitate the development of autonomous driving).
- Development of standards (with a complex plan that covers more than thirty subsectors).
- Hard regulation (for example, of online recommendation systems, to prevent recommendations that could alter the social order).
For all these reasons, China is committed to regulatory control of artificial intelligence that does not impede its development.
Japan
In Japan, on the other hand, they do not seem particularly concerned about the need to regulate artificial intelligence.
Instead, they trust that their tradition of partnership between the state, companies, workers and users will prevent the worst problems that artificial intelligence can cause. At the moment, they focus their policies on the development of Society 5.0.
Canada
Perhaps the most advanced country from a regulatory point of view is Canada. There, for the past two years, every artificial intelligence system used in the public sector has had to undergo an impact assessment that anticipates its risks.
For the private sector, the Canadian legislature is now discussing a norm similar to, though much simpler than, the European one. A similar process began last year in Brazil; although it seemed to have lost momentum, it may now be revived after the elections.
From Australia to India
Other countries, from Mexico to Australia, passing through Singapore and India, are in wait-and-see mode.
These countries seem confident that their current rules can be adapted to prevent the worst harms artificial intelligence can cause, and they allow themselves to wait and see how other initiatives unfold.
Two games with different visions
Within this legislative diversity, two games are being played.
The first pits those who maintain that it is too soon to regulate a disruptive, and still poorly understood, technology such as artificial intelligence against those who prefer a clear regulatory framework that addresses the main problems while creating legal certainty for developers and users.
The second game, perhaps the more interesting one, is the competition to become the de facto global regulator of artificial intelligence.
The European Union's bet is clear: be the first to create rules that bind anyone who wants to sell their products in its territory. The success of the General Data Protection Regulation, today the global reference for technology companies, encourages the European institutions to follow this model.
China and the United States, by contrast, have chosen to avoid detailed regulation, hoping that their companies can develop without excessive restrictions and that their standards, even voluntary ones, become the reference for other countries and companies.
Here, time plays against Europe. The United States will publish the first version of its standards in the coming months; the European Union will not have applicable legislation for another two years. Perhaps Europe's excess of ambition will come at a cost, inside and outside the continent: rules that, by the time they come into force, have already been overtaken by other regulations.
Author Bio: José-Miguel Bello y Villarino is a Research Fellow at the ARC Centre of Excellence for Automated Decision-Making and Society at the University of Sydney, and a diplomat (on leave).