Transparency about the reasons behind AI-supported algorithmic decisions is a necessary prerequisite for the responsible use of artificial intelligence (AI). Against the backdrop of upcoming regulatory measures such as the European Union's AI Act, a study by the Mozilla Foundation analyses various current approaches to AI transparency and their limitations.
To this end, interviews were conducted with 59 transparency experts from various organisations and sectors. The results show that both developers and managers currently have too few incentives to invest in AI transparency. When developing AI systems, the focus lies primarily on accuracy and debugging, not on the transparency and traceability of their decisions.
According to the study, there are several reasons for this. For one, trust in existing approaches to explainable AI (XAI) is weak. Although XAI is a growing and active field of research, most of that work is classified as basic research, so XAI approaches are rarely deployed effectively in products. From a corporate perspective, a focus on explainability and transparency therefore rarely pays off.
In addition to the fundamental difficulty of algorithmic explainability and the continued dominance of non-explainable black-box models, there are further potential reasons for the lack of transparency in AI applications. These include bias in the training data and in the metrics the algorithm optimises. As the study found, however, existing biases and a system's internal optimisation objective are almost always hidden from users.
These problems are exacerbated by the fact that the standards legislators demand for the traceability of AI-supported decisions usually remain relatively imprecise. As a result, AI transparency in all its facets is rarely a priority in the development of AI systems.
In summary, the report highlights the need for greater awareness of and emphasis on AI transparency and offers practical guidance for effective transparency design. In the absence of adequate explanatory tooling, developers are encouraged to use interpretable models rather than black-box solutions for applications where traceability is a design requirement. Meaningful transparency means that each stakeholder group receives a sufficient, understandable explanation: useful, actionable information that enables informed decisions.
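The recommendation to prefer interpretable models can be made concrete with a minimal sketch. The example below uses a hand-rolled linear scoring model whose feature names, weights, and decision threshold are invented for illustration (none of them come from the study); its value is that every prediction decomposes into per-feature contributions a stakeholder can inspect, which a black-box model cannot easily offer:

```python
# Illustrative sketch of an interpretable (linear) decision model.
# Feature names, weights, bias, and threshold are invented for this
# example; they are not taken from the Mozilla study.

def explain_decision(weights, bias, features):
    """Return the score plus each feature's additive contribution."""
    contributions = {
        name: weights[name] * value for name, value in features.items()
    }
    score = bias + sum(contributions.values())
    return score, contributions

weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
applicant = {"income": 3.0, "debt_ratio": 1.5, "years_employed": 4.0}

score, why = explain_decision(weights, bias=-0.5, features=applicant)
decision = "approve" if score > 0 else "decline"

# Print the decision and each feature's contribution, largest first,
# as a per-stakeholder explanation of this single prediction.
print(f"decision: {decision} (score {score:.2f})")
for name, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>15}: {contribution:+.2f}")
```

Because the model is additive, the explanation is exact rather than an approximation produced after the fact, which is precisely the traceability property the report argues for in high-stakes applications.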