
What is explainable artificial intelligence? How explanations influence trust in human-AI interaction

AI-supported assistance and decision-making systems are now used in many different ways. In decision-critical fields of application such as medicine or finance, explanations are particularly important to ensure that trust in the system and its output is well calibrated to the situation. This raises the question: what makes an explanation meaningful, and how do explanations influence the trust and behaviour of users?


AI systems are increasingly supporting decisions in important areas of life such as finance, justice and medicine. As such decisions can have far-reaching consequences for those affected, it is particularly important that they are comprehensible and verifiable. Modern AI systems are usually based on statistical models that make decisions based on probabilities. The internal model structure can be extremely complex. Although this enables AI systems to process large amounts of data and recognise patterns, it is generally not possible to understand how they arrive at their results. This is why such systems are often referred to as black boxes: Users only see inputs and outputs, but not the path in between. As the underlying data and decision-making processes are not transparent, it can be difficult to detect errors or unintentional bias. Especially in safety-critical applications, it is therefore important to understand the reasons behind a decision.

Explainable AI can increase transparency and contribute to a better understanding

The research field of Explainable AI (XAI for short) deals with the question of how complex and opaque systems can be explained. It includes methods that attempt to make the decisions of AI systems understandable, comprehensible and verifiable. This should enable users to decide whether or not to trust the outputs of an AI system and to adapt their behaviour accordingly.

Explanations fulfil several functions:

  • They help developers to recognise errors and biases and to improve models and make them fairer.
  • They allow users to better understand and evaluate results, leading to informed decision-making.
  • They are a basis for regulation and accountability, for example in the sense of the EU AI Act.

But not every explanation is helpful. Explanations can be incomprehensible or confusing, or they can lead to misunderstandings. It is therefore important to consider when an explanation is really useful and which kind of explanation helps which target group.

Types of explanation at a glance

There are a variety of methods for making the decisions of AI systems comprehensible. Depending on the data the system is working with, explanations can be provided in the form of text, images or graphics.

Three types of explanation are particularly common:

1. Example-based explanations

The system shows concrete comparative cases:

For example, if a loan application is rejected without justification, this can be very frustrating for applicants. In order to justify such a decision and make it comprehensible, the decisive criteria can be named or comparable cases can be described:
“Your application was rejected because it is similar to another case from the past that was also rejected: The applicant had a similar annual income and a similar amount of debt. Your application would have been accepted if your income had been 5,000 euros higher.”

Such explanations are intuitively understandable as they are based on human patterns of explanation – for example, the way people compare and contrast case studies in everyday life.
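
To make this concrete, the following sketch shows how such an example-based explanation with a counterfactual could be generated using scikit-learn's nearest-neighbour search. The loan figures, the income/debt rule and the threshold are purely illustrative assumptions, not the method behind any real credit decision.

```python
# Minimal sketch of an example-based explanation for a (hypothetical) loan decision.
# All figures and the acceptance rule are illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Past applications: [annual income in euros, debt in euros] with known outcomes.
past_cases = np.array([
    [32_000, 15_000],   # rejected
    [55_000,  5_000],   # accepted
    [30_000, 18_000],   # rejected
    [70_000, 10_000],   # accepted
])
past_outcomes = ["rejected", "accepted", "rejected", "accepted"]

new_case = np.array([[31_000, 16_000]])  # the current applicant

# Example-based explanation: retrieve the most similar past case.
nn = NearestNeighbors(n_neighbors=1).fit(past_cases)
_, idx = nn.kneighbors(new_case)
neighbour = past_cases[idx[0, 0]]
print(f"Most similar past case: income {neighbour[0]} euros, "
      f"debt {neighbour[1]} euros -> {past_outcomes[idx[0, 0]]}")

# Counterfactual part: how much more income would have flipped the decision,
# assuming a purely hypothetical rule 'income must exceed 1.2 * debt + 15,000 euros'.
required_income = 1.2 * new_case[0, 1] + 15_000
missing = max(0, required_income - new_case[0, 0])
print(f"The application would have been accepted with about {missing:,.0f} euros more income.")
```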

2. Relevance-based explanations

Relevance-based explanations show which features or input parts contributed most to the decision.

This type of explanation is particularly suitable when visual or structured data is involved and users want to know what the model has taken into account.
In Figure 1, for example, a saliency map shows which parts of the image (pixels) the model used in image-based quality control to recognise an industrial defect, i.e. a faulty product. Saliency maps are also used in medical image diagnostics, among other areas.

Figure 1: AI image recognition of industrial defects
Source: Gramelt, D./Höfer, T./Schmid, U. (2024). Interactive explainable anomaly detection for industrial settings. In: European Conference on Computer Vision (pp. 133-147). Cham: Springer Nature Switzerland. https://arxiv.org/abs/2410.12817
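
A gradient-based saliency map of this kind can be sketched in a few lines. The snippet below uses a torchvision ResNet-18 and a random tensor as stand-ins for a real inspection model and image, so it only illustrates the mechanism, not the system shown in Figure 1.

```python
# Minimal sketch of a gradient-based saliency map (vanilla gradients).
# ResNet-18 and the random input are stand-ins, not the model from Figure 1.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder for a product image

scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()  # gradient of the top score w.r.t. the input pixels

# Per-pixel gradient magnitude: large values mark regions that influenced the decision.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
print(saliency.shape)
```

In practice, more robust attribution methods such as Grad-CAM or integrated gradients are often preferred, but the underlying idea of tracing the output back to relevant input regions is the same.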

In the case of tabular data, decision-relevant features can be illustrated in a bar chart, for example. Positive feature values speak in favour of the class, negative values against it. Figure 2 shows an example in which the AI system has assigned a penguin to the chinstrap penguin species, with the length and depth of the bill being particularly decisive.

Figure 2: Explanation when classifying tabular data: Bill length and depth are in favour of a chinstrap penguin. Source: Figure from the experiment of the Ethyde project.
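
The values behind such a bar chart can be approximated with a simple local attribution: for a linear model, the product of coefficient and standardised feature value indicates how strongly each feature speaks for or against the predicted class. The sketch below uses made-up penguin measurements and a logistic regression as a stand-in for the model used in the experiment.

```python
# Minimal sketch of feature relevance for tabular data.
# The measurements are made up and only mimic the penguin example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g"]
X = np.array([
    [39.1, 18.7, 181, 3750],   # Adelie-like
    [38.8, 17.2, 180, 3800],   # Adelie-like
    [49.5, 19.0, 198, 3775],   # Chinstrap-like
    [50.9, 17.9, 196, 3675],   # Chinstrap-like
])
y = np.array([0, 0, 1, 1])     # 0 = Adelie, 1 = Chinstrap

scaler = StandardScaler().fit(X)
clf = LogisticRegression().fit(scaler.transform(X), y)

# Local relevance for one penguin: coefficient * standardised feature value.
# Positive contributions speak for "Chinstrap", negative ones against it.
sample = scaler.transform([[50.0, 18.5, 195, 3700]])[0]
for name, contribution in zip(features, clf.coef_[0] * sample):
    print(f"{name:18s} {contribution:+.2f}")
```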

3. Concept-based explanations

Here, the decision is explained by concepts that are understandable to humans, such as categories or abstract properties that the system has “learnt”. For traffic signs, important concepts are shape, colour and symbols: For example, general danger signs consist of a red triangle on a white background and the content of the danger, while speed signs consist of a red circle with the number of the maximum permitted speed.

Figure 3: Binary concept annotations (present/not present) for image recognition of traffic signs. Source: From Heidemann, L. (2023). Concept-based models – how visual concepts help to understand the decision of an AI. Fraunhofer Institute for Cognitive Systems IKS.
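
The idea can be illustrated with a small, hand-written concept check. In a real concept-bottleneck model, the concept values would be predicted from the image by a neural network rather than hard-coded as they are here.

```python
# Minimal sketch of a concept-based explanation for traffic signs.
# The concept values are hard-coded; in a concept-bottleneck model they
# would be predicted from the image by a neural network.
concepts = {
    "red_triangle": True,
    "red_circle": False,
    "white_background": True,
    "contains_number": False,
}

def classify_from_concepts(c: dict[str, bool]) -> str:
    """Map human-understandable concepts to a sign category."""
    if c["red_triangle"] and c["white_background"]:
        return "general danger sign"
    if c["red_circle"] and c["contains_number"]:
        return "speed limit sign"
    return "unknown sign"

prediction = classify_from_concepts(concepts)
detected = [name for name, present in concepts.items() if present]
print(f"Prediction: {prediction}")
print(f"Explanation: the concepts {', '.join(detected)} were detected.")
```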

Do explanations guarantee trust?

Explanations can make the way AI systems work more transparent and help users to better judge their results. They can therefore strengthen trust in AI systems, but also prevent blind trust. The effect of an explanation depends heavily on how it is designed, who it is aimed at and the context in which it is provided. If explanations are too complex or too long, or if they are presented in an unsolicited or intrusive manner, they can be cognitively overwhelming or even arouse mistrust – especially if they appear contradictory. Trust is therefore not automatic, but a process that needs to be encouraged through targeted communication.

Trust in AI is multi-layered and context-dependent. It depends on the expertise of the users, the type of information and the importance of the task. Other situational and cognitive factors can also play a role, such as whether users feel that they can make their own decisions instead of merely being “convinced”.

Target group-appropriate explanations

For explanations to be meaningful and appropriate, two fundamental questions should be answered:

  1. Who is the explanation intended for?
  2. What objective should the explanation fulfil?

Which target group will ultimately deal with the explanation and the output of the AI system is the most important factor here.

  • Laypersons (end users without domain expertise or an AI background):
    Need a clear, comprehensible explanation, for example in the form of a comparison with similar cases or simple causes.
  • Domain experts (technical experts without an AI background):
    Need explanations that allow them to assess the reliability and fairness of the AI system and to know whether, for example, discriminatory factors have been used improperly.
  • AI experts / developers:
    Need technical explanations of model structure, debugging and model adaptation.

Depending on the target group, the type and complexity of the explanation must be adapted. Explanations do not have to reveal every detail; it is much more important to explain the right aspect at the right time. Context-related explanations in simple language or with examples are often far more helpful than abstract technical system descriptions, especially for laypeople.

Conclusion: Meaningful explanations promote reflective trust

What constitutes a good explanation depends heavily on the context, so there is no single method that is the best solution in all cases. An explanation is part of the interface between humans and machines. Explanations can help people not to trust AI systems blindly, but also not to distrust them across the board. To achieve this, explanations should be understandable, contextualised and embedded in a meaningful way.

Further reading

Gramelt, D., Höfer, T., & Schmid, U. (2024, September). Interactive explainable anomaly detection for industrial settings. In European Conference on Computer Vision (pp. 133-147). Cham: Springer Nature Switzerland.

Schmid, U., & Wrede, B. (2022). What is missing in XAI so far? An interdisciplinary perspective. KI - Künstliche Intelligenz, 36(3), 303-315.

Samek, W., Schmid, U., et al. (2025). Comprehensible AI: Explaining for whom, what and for what. https://doi.org/10.48669/pls_2025-2

The blog posts published by bidt reflect the views of the authors; they do not reflect the position of the Institute as a whole.