Do we need an agency to oversee artificial intelligence?


We often come across advertisements for products labeled “with artificial intelligence”. But the user is rarely, if ever, given any further details. Who guarantees that the product really works on the basis of artificial intelligence (AI)? Who certifies that the answers it offers are correct? What code of ethics does it obey? Knowing nothing about any of this, when for some reason the product does not work as expected, it is natural to conclude that the AI is not reliable, and neither are the algorithms it uses.

But do we even know whether the product in question works on the basis of an algorithm? Are the solutions it provides always bad, or only bad for certain users? Does the ethical code of the person who designed the product coincide with ours? And, by the way: who defines what ethical behavior is? How is ethics measured?

What exactly is an algorithm?

An algorithm is an ordered sequence of steps, free of ambiguity, which, when carried out faithfully in a finite time, solves the problem posed and thereby performs the task for which it was designed. For an algorithm to be correct, it must fulfill the following:

  • it must always terminate after a finite number of steps;
  • the actions to be carried out at each step must be rigorously specified, without ambiguity;
  • the values with which it starts working must be taken from pre-specified sets;
  • the result it provides will always depend on the input data;
  • all the operations to be performed in the algorithm must be basic enough to be done exactly and in a finite period of time.

If any of these properties does not hold, we do not have an algorithm.
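The classic example satisfying all five properties is Euclid's algorithm for the greatest common divisor. A minimal sketch in Python (the choice of language, function name and input check is ours, for illustration):

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two positive integers (Euclid's algorithm)."""
    # Inputs come from a pre-specified set: positive integers only.
    if a <= 0 or b <= 0:
        raise ValueError("inputs must be positive integers")
    # Each step is rigorously specified and uses only basic operations
    # (comparison and remainder), each performable exactly in finite time.
    while b != 0:
        # The remainder strictly decreases at every step,
        # so the loop always terminates after finitely many steps.
        a, b = b, a % b
    # The result depends only on the input data.
    return a
```

Run with the same inputs, it always produces the same output in a finite number of steps, which is exactly what distinguishes it from an imprecise recipe.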

Algorithms, therefore, are not like cooking recipes, which can contain imprecise instructions and consequently produce results as varied as they are unpredictable. They are iterative processes that generate a sequence of points according to a given set of instructions and a stopping criterion. As such, they are not subject to technological constraints of any kind; they are entirely independent of the equipment available to solve the problem at hand. It is the program in which the algorithm is written, the software, that executes it on a computer.

When this program and the algorithm on which it is based are designed with AI techniques and methodologies, and are therefore modeled on human behavior, problems can sometimes arise in understanding and accepting the decisions that the program makes.

We can feel threatened by the belief that these machines do our jobs better than we do, or because nobody can explain their behavior when they act unexpectedly, producing undesirable biases or unsustainable solutions, or, in short, because we consider that they do not behave ethically.

But that is the view from the user’s side, that is, from the perspective of someone observing how the system behaves. From the designer’s side, the results sometimes conform exactly to what was consciously foreseen in the source algorithm and in the corresponding software that reaches the users.

Ethical contradictions

The range of possibilities is very wide. It runs from improper behavior due to fortuitous errors, undesirable but unavoidable and in any case to be reckoned with, to the purely self-interested marketing of AI-based systems whose proper use is not guaranteed in any sense or which, even worse, are not based on AI at all.

In any case, and from a very general point of view, it is obvious that the ethical level of a given piece of AI-based software is context-dependent. While in one field an action may be branded as unethical, in others it may be considered appropriate. This gives rise to contradictory situations that are difficult to compare, because we have neither the tools to assess the ethical behavior of an AI-based system nor the legislation to regulate it.

Therefore, and even if only for the reasons briefly described here, it was essential to include the creation of a Spanish Agency for the Supervision of Artificial Intelligence (AESIA) in the one hundred and thirtieth additional provision of Law 22/2021, of December 28, on the General State Budget for 2022.

As proposed, AESIA will act with full organic and functional independence from public administrations, in an objective, transparent and impartial manner. It will take measures aimed at minimizing significant risks to people’s safety, health and fundamental rights that may arise from the use of AI systems.

Likewise, the agency will be in charge of developing, supervising and monitoring the projects framed within the National Artificial Intelligence Strategy, in addition to those promoted by the European Union, in particular those related to regulatory developments on AI and its possible uses.

Where will it be located?

To select the agency’s location, Royal Decree 209/2022, of March 22, establishes the procedure by which the municipality that will host its physical headquarters will be determined.

The agency will function as an independent public authority charged with guaranteeing to society, from a public service perspective, the quality, safety, efficacy and correct information of AI-based systems, from their research through to their use. To do so, it will have objective evaluation, certification and accreditation mechanisms for AI-based systems, whose current absence we discussed at the beginning of this article.

The choice of AESIA’s physical headquarters will fall to an advisory Commission which, incidentally, has already been created.

Regardless of the municipality in which it is installed, its operation will have to adhere strictly to each and every one of the principles above. It therefore does not seem appropriate for the candidacies to host its headquarters to be endorsed by companies in the sector, whether national or international. The direct or indirect intervention of these companies could call into question the agency’s necessary independence.

It is quite another matter for the business sector, which plays a key role in everything related to the development of AI, to endorse and support the government’s initiative to create AESIA, without backing a specific location.

But AESIA cannot remain just a national agency acting in isolation. It should be aligned with the organic structure that the European Union designs, particularly with the future European Artificial Intelligence Board.

The agency will also have to comply with European legislation. There is already a draft regulation of the European Parliament and of the Council establishing harmonized rules on AI. This so-called Artificial Intelligence Act seeks to define a metric for assessing the social impact of algorithms in industry, to demand algorithmic transparency and explainability, and to accredit their ethical quality.

In conclusion, and in answer to the question posed at the beginning: AESIA is not only essential, but we need it to start operating as soon as possible, independently, with credibility and with sufficient means. None of this will be effective, however, without legislation that regulates the production, use and operation of AI-based systems, that protects people, and that is as widely agreed at the European level as possible.

Author Bio: Jose Luis Verdegay Galdeano is Professor of Computer Science and Artificial Intelligence at the University of Granada