Artificial intelligences should be able to explain their reasoning


About a year ago, Geisinger, an American company in the medical field, published surprising results on the application of artificial intelligence (AI) to estimating the short-term risk of death in patients with certain heart conditions, based on their electrocardiograms.

To do this, they trained a neural network with almost two million electrocardiograms from nearly 400,000 people. In the end this AI, based on a technique known as “deep learning”, obtained better results than the cardiologists. Moreover, even afterwards the doctors were unable to find any pattern or sign of risk in the electrocardiograms that the machine had correctly classified and they had not.

The AI had found something in the electrocardiographic signal that human experts were not able to detect.

The problem is that the AI was designed only to give the best possible answer, not to justify it. This is what is known in the field as a “black box”, because of its lack of transparency and, therefore, the impossibility of knowing and explaining how it reaches its results.

Explainable AIs are endowed with the ability to explain their operation. They are able to communicate their results and the reasoning process they have followed to obtain them, in a way that is easy for people to understand. There are two main approaches to designing machines like this.

On the one hand, we can design them as white boxes (transparent or semi-transparent): for example, systems based on computationally tractable representations of human knowledge, known as “expert systems”. However, this approach is often not possible or desirable, whether because of design difficulties or insufficient performance.
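To make the white-box idea concrete, here is a minimal sketch of an expert-system-style rule base whose output can be traced back to the exact rules that fired. The rules, thresholds and names are purely illustrative assumptions, not real clinical criteria.

```python
# A toy "white box": a rule-based classifier whose decision is its own explanation.
# The rules and thresholds below are illustrative only, not real clinical criteria.

def assess_risk(heart_rate_bpm: float, qt_interval_ms: float) -> tuple[str, list[str]]:
    """Return a risk label together with the rules that fired (the explanation)."""
    fired_rules = []
    if heart_rate_bpm > 120:
        fired_rules.append("Rule 1: resting heart rate above 120 bpm suggests elevated risk")
    if qt_interval_ms > 480:
        fired_rules.append("Rule 2: QT interval above 480 ms suggests elevated risk")

    label = "elevated risk" if fired_rules else "no rule-based risk detected"
    return label, fired_rules


label, explanation = assess_risk(heart_rate_bpm=130, qt_interval_ms=500)
print(label)
for rule in explanation:
    print(" -", rule)
```

The point is not the quality of the rules but that every answer comes with the human-readable reasons behind it.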

Let’s open the black boxes

The other approach is to open those black boxes and look inside. This is the case for AI based on deep learning or, more generally, on artificial neural networks. These use learning architectures built from very simple mathematical models of neurons. Thousands or tens of thousands of them, densely interconnected and usually arranged in layers, like a lasagna, add up to hundreds of thousands or even millions of connections. Each connection carries a value, called a “weight”, and these weights are essential to how the network produces a response to each input it is given.
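As a rough illustration (a toy sketch, not the Geisinger model), the following Python snippet builds a small layered network and counts its weights; the layer sizes are arbitrary assumptions, chosen only to show how quickly the connections add up.

```python
import numpy as np

# A toy feed-forward ("lasagna"-style layered) network, just to make the idea of
# weights concrete. Layer sizes are arbitrary; a real clinical model would differ.
layer_sizes = [5000, 256, 128, 1]   # e.g. ECG samples in, one risk score out (illustrative)

rng = np.random.default_rng(0)
weights = [rng.standard_normal((n_in, n_out)) * 0.01
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input through every layer; each connection multiplies by its weight."""
    for w in weights:
        x = np.tanh(x @ w)          # simple neuron model: weighted sum plus a nonlinearity
    return x

risk_score = forward(rng.standard_normal(5000))
n_connections = sum(w.size for w in weights)
print(f"{n_connections:,} weights")  # already well over a million connections
```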

Depending on the case, that response may be a diagnosis, the identification of an object in an image, or the translation of an English phrase into Spanish. The appropriate values of the weights are found during training, on sets of training data. The process is usually very demanding, both in the number and representativeness of the examples to learn from and in the computational resources required.
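Again as an illustrative sketch (plain logistic regression on synthetic data rather than a real deep network), this is roughly what “selecting the weights during training” amounts to: repeatedly nudging the weights so that the model’s answers on labelled examples improve.

```python
import numpy as np

# Toy training loop: synthetic data, a single linear layer, plain gradient descent.
# Real training uses millions of examples and far more compute.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))        # 1000 fake feature vectors (illustrative)
true_w = rng.standard_normal(20)
y = (X @ true_w > 0).astype(float)         # fake labels: 1 = event, 0 = no event

w = np.zeros(20)                           # the weights we want to learn
lr = 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))         # predicted probability for each example
    grad = X.T @ (p - y) / len(y)          # gradient of the cross-entropy loss
    w -= lr * grad                         # adjust the weights a little

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

After training, the “knowledge” lives entirely in the numbers stored in `w`, which is precisely why it is so hard to read off an explanation from them.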

The learning capacity of this type of network is enormous, but what they learn ends up distributed, with no apparent structure, across a vast number of parameters or weights. That is why we may have no idea what one of these AIs relies on to detect incipient pneumonia or a potentially cancerous nodule on a chest X-ray. To exaggerate a little, it would be like trying to explain how a radiologist reaches the same conclusion by looking at a functional MRI of their brain.

No matter how capable machines are of giving us answers, if those answers affect us significantly we need them to be not only good but also understandable. Otherwise we will not trust them. This issue is increasingly present in national legislation. In fact, the European General Data Protection Regulation grants a right to an explanation of decisions that affect people, regardless of who (or what machine) makes them. Beyond the technical and legal issues, there are also ethical questions to consider, as highlighted in the guidelines for trustworthy AI published by the European Commission.

CiTIUS (University of Santiago de Compostela) is committed to training in explainable AI, from primary school to university, and coordinates the first European network for training researchers in this field. This is the NL4XAI network, an acronym for the project entitled “Natural Language Technologies for Explainable Artificial Intelligence”.

The network’s main objective is to train expert researchers in explainable AI. It involves 18 partners, universities and companies, from 6 countries, who collaborate in the training of 11 doctoral researchers. A common goal will be to use natural language technologies to build conversational agents capable of explaining themselves while interacting with people. Furthermore, these agents will be able to handle verbal and non-verbal information, providing their users with multimodal explanations (that is, combining visual with textual or narrative explanations).

We would like an AI to be able to explain, as a judge does, the grounds for a ruling. Also how it manages to recognize the accused despite the beard he has grown since the last hearing, something that we people can do but cannot explain how we do it. But we will talk about that another day.

Author Bios: Senén Barro Ameneiro is Director of the CiTIUS-Singular Center for Research in Intelligent Technologies of the University of Santiago de Compostela, and Jose Maria Alonso Moral is a Ramón y Cajal Researcher in the Intelligent Systems group, both at the University of Santiago de Compostela.
