
What is … Explainable AI

Have you heard of Explainable AI? It's a topic that AI researcher Prof. Ute Schmid can shed some light on. AI stands for Artificial Intelligence, but what exactly does Explainable AI entail? Let's find out.


Artificial intelligence, AI for short, is a mystery to many: as a study published by bidt this summer shows, very few people can explain what AI is. Explaining AI, however, is not what Explainable AI is about.

To approach the term, it makes sense to start with the definition of artificial intelligence. “AI deals with algorithmic solutions to problems that humans can currently solve better,” says Ute Schmid. The AI professor and director at bidt often has to clear up misconceptions about the technology. Those who are not computer scientists find it difficult to recognise what makes a problem algorithmically challenging. Precisely the things people find particularly easy are often difficult to replicate with algorithms. For example, it is easy for a human to see whether there is a cat in a picture, and no one would consider themselves particularly intelligent because of it. For an AI system, on the other hand, it is a great challenge to reliably recognise a cat in a picture, for example in different lighting and against different backgrounds. It is also anything but easy for an AI algorithm to tell whether the image shows a real cat or a stuffed toy cat.

Currently, deep learning approaches are used for such object recognition tasks. For this, the programme is “shown” a great many different cat pictures, on which it is trained to recognise a cat.

The strength of AI algorithms lies in the fact that they can be used in various application areas. For example, a deep neural network can be trained with animal images to recognise cats. If, on the other hand, it is trained with medical data, it can learn to identify tumours.
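To make this concrete, here is a minimal sketch in PyTorch (a generic illustration, not code from bidt or from any study mentioned here) of how one and the same small network architecture can be trained for either task simply by feeding it different data. The images and labels are random placeholders standing in for real cat photos or medical scans.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A tiny convolutional network; the architecture is task-agnostic."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(model, images, labels, epochs: int = 3):
    # Standard supervised training loop: the network does not care whether
    # the pixels show cats or tumours, only the data and labels differ.
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
    return model

# "Cat vs. no cat" classifier (placeholder data: 64x64 RGB images).
cat_images, cat_labels = torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8,))
cat_model = train(SmallCNN(), cat_images, cat_labels)

# The very same architecture, retrained as "tumour vs. no tumour".
scan_images, scan_labels = torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8,))
tumour_model = train(SmallCNN(), scan_images, scan_labels)
```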

The success of deep neural networks has sparked great interest in many application areas. However, the models learned with such neural networks are black boxes: it is not possible to tell on the basis of which information such an AI system makes its decisions.

It is precisely this problem that the new research field of Explainable AI addresses. For AI systems to be usable in practice, especially in safety-critical and sensitive areas such as medical diagnostics, it is crucial that experts, in medicine for instance, can understand how an artificial system arrives at its decision, say, whether a specific tumour is present in an image. Or, to return to the example of the cat: does the system recognise the cat because it “sees” whiskers, fur and ears of a particular shape, or on the basis of entirely different information?

The problem is that models learned with deep neural networks are black boxes, and not even the developers can understand how such an AI system arrives at decisions.

Prof. Dr. Ute Schmid

This entails the risk that an AI system will make the wrong decisions.

“Or it decides correctly, but for the wrong reasons, for example because it has learned from the training data: where there is a meadow, there is a horse.” In such a case, a system would base its conclusions on irrelevant information and could therefore go wrong.

This is precisely where Explainable AI comes in:

Explainable AI is about making the decisions of the black box comprehensible.

Prof. Dr. Ute Schmid

Explainable AI is relevant for various target groups: for the developers themselves, of course, so that they can spot errors in the programme, but also for subject-matter experts who use AI, for example in medicine. The term Explainable AI is therefore somewhat misleading; it would be better to speak of Explanatory AI.
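What such an explanation can look like is easiest to see with a small example. The sketch below (again a generic illustration, not a system from bidt) uses occlusion analysis, one common Explainable AI technique: a grey patch is slid across the picture, and the drop in the model's confidence is recorded for each position. The patches whose removal hurts the prediction most mark the image regions the model actually relies on, and thus reveal whether it looks at the whiskers, fur and ears or at the meadow in the background. The model and image here are random placeholders.

```python
import torch
import torch.nn as nn

def occlusion_map(model, image, target_class, patch=16):
    """Return a grid of confidence drops: one value per occluded patch."""
    model.eval()
    with torch.no_grad():
        baseline = model(image.unsqueeze(0)).softmax(dim=1)[0, target_class]
        _, height, width = image.shape
        heat = torch.zeros(height // patch, width // patch)
        for i in range(0, height, patch):
            for j in range(0, width, patch):
                occluded = image.clone()
                occluded[:, i:i + patch, j:j + patch] = 0.5   # grey square over this patch
                score = model(occluded.unsqueeze(0)).softmax(dim=1)[0, target_class]
                heat[i // patch, j // patch] = baseline - score  # how much confidence drops
    return heat

# Placeholder "cat detector" and image; in practice this would be a trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
image = torch.rand(3, 64, 64)
print(occlusion_map(model, image, target_class=1))
```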

bidt dictionary: Explainable AI

Many artificial intelligence (AI) systems, in particular deep neural networks, are so complex that it is impossible to understand what information they use to make decisions. This is known as a black box. It is precisely this problem that the new research field of Explainable AI addresses. The goal is to make the decisions of the black box comprehensible. This is important so that not only the developers but also the users, for example in medicine, can understand how the technology they are working with comes to its conclusions.

A research project at bidt

In this project, Professor Ute Schmid, in cooperation with software engineering professor Alexander Pretschner and legal philosopher Professor Eric Hilgendorf, is developing models that enable learning in the interaction between humans and artificial intelligence.

The goal is to develop an AI system combining a model-based approach and machine learning that can be used to analyse the causes of errors and accidents in complex areas – be it traffic accidents or misdiagnoses. In the process, the AI system is to learn continuously in interaction with field experts to build more accurate causal models. The combination of explainable learning with interactive learning is an innovative approach to making machine learning usable to support experts in complex and safety-critical socio-technical areas.
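As a rough illustration of the interactive part of such an approach (a simplified, generic sketch, not the project's actual system): the model proposes a cause for each case, a field expert reviews and corrects the proposals, and the corrections flow back into training, so that the learned model becomes more accurate over time.

```python
import torch
import torch.nn as nn

def interactive_learning(model, cases, expert_label, rounds=3):
    """Generic expert-in-the-loop loop: propose, get corrections, retrain."""
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(rounds):
        proposals = model(cases).argmax(dim=1)        # model's current guesses
        corrections = expert_label(cases, proposals)  # expert reviews and corrects them
        loss = loss_fn(model(cases), corrections)     # learn from the corrections
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    return model

# Placeholder stand-ins: random "cases" and a simulated expert who knows the true cause.
cases = torch.randn(16, 8)                    # 16 cases, 8 features each
true_causes = torch.randint(0, 3, (16,))      # 3 possible causes
expert = lambda cases, proposals: true_causes # simulated expert feedback
model = interactive_learning(nn.Linear(8, 3), cases, expert)
```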