AI is rapidly changing everyday life. But how widely is it accepted, and what is needed to strengthen people's trust in it? Bärbel Brockman addresses these questions in the cover story "A question of trust" in the SPIEGEL supplement "StarkesLandBayern". In her article, she examines the extent to which people can trust AI and how research can help to foster that trust.

Numerous research projects in Bavaria are dedicated to the central question of how trust in these systems can be increased – among them those of the new bidt research focus "Humans and Generative AI: Trust in Co-Creation", which the article presents in detail. The aim of the focus area, launched at the beginning of 2025, is to find out when and under what conditions people can trust generative AI, whether in handling, creating or evaluating AI-generated content. Twelve interdisciplinary research projects – funded by the Bavarian Ministry of Science – will investigate ethical, legal, technical and social questions surrounding trust and AI until 2029.
We are convinced that digitalisation research cannot be conducted from just one perspective, but that many perspectives are needed in order to achieve relevant and meaningful results.
Prof. Dr. Hannah Schmid-Petri
The bidt research project "For the Greater Good? Deepfakes in Law Enforcement (FoGG)" demonstrates how crucial practical relevance is. The potential of deepfakes is huge: police authorities could, for example, use voice generation to make criminals believe they are in contact with their partners and thus find out meeting points and other details. Before such technology can be used, however, a number of questions need to be clarified, such as what is legally and ethically permissible. A research team at the University of Bayreuth has set itself the task of answering these questions, because "whenever you interfere with fundamental rights in the course of criminal prosecution, you need a specific legal regulation. In the case of deepfakes, however, it is not just about the fundamental rights of the deceived person; we are also encroaching on the general personality rights of the person whose voice or image we are using, because we are copying that person's identifying characteristics", explains Christian Rückert, a criminal law expert on the research team.
The use of generative AI in politics is also often viewed with concern, especially when it comes to issues such as electoral fraud or attacks by hostile states. In the project "Generative Artificial Intelligence in Election Campaigns: Applications, Preferences and Trust", Andreas Jungherr, bidt director and Professor of Political Science, in particular Digital Transformation, at the University of Bamberg, is researching the positive effects that AI can have on political processes such as elections.
For political parties, the use of AI in the background is often also promising – for example, a chatbot on their website could provide simple, understandable and personalised access to their election programme.
Prof. Dr. Andreas Jungherr
Under certain circumstances, campaigners could use AI to design appealing election campaigns without spending a lot of time and money. A second focus of the project is public acceptance: the aim is to analyse the impact of AI use in politics and how people perceive it.
The bidt is funding numerous other research projects in order to close this trust gap and find ways in which AI can be used sensibly and in a regulated manner in a wide range of application areas.
The business magazine “StarkesLandBayern” was published as a supplement in the May issue of SPIEGEL. The May 2025 issue is dedicated to the question “Can we trust AI? Research projects caught between acceptance and reservations”.