
Trust and AI – bidt launches new research focus and funds twelve innovative projects

Record funding at the bidt: in 2025, a new research focus and twelve innovative research projects will be launched, more than ever before. The Bavarian Research Institute for Digital Transformation of the Bavarian Academy of Sciences and Humanities is sharpening its profile with the new research focus “Humans and Generative AI: Trust in Co-Creation”. Three internal and seven external research projects examine questions relating to trust and AI. In addition, two new consortium projects are starting their work.


Generative AI is revolutionising areas of life such as education, work and science, but it also brings challenges such as guaranteeing quality and truthfulness. With the new research focus, the institute is addressing the question of when and how much trust in generative AI is appropriate, whether in the creation of, interaction with, or evaluation of AI-generated products. From 2025, the bidt will specifically fund projects that examine this topic from different angles.

New bidt research focus under the leadership of Prof. Dr. Hannah Schmid-Petri

The new research focus “Humans and Generative AI: Trust in Co-Creation” will be headed by bidt Director Professor Hannah Schmid-Petri, who holds the Chair of Science Communication at the University of Passau.

Trust is the linchpin for the successful use of generative AI. With our new bidt focus area, we want to find out how we can shape the interaction between humans and AI in such a way that it benefits people and society in the long term.

Prof. Dr. Hannah Schmid-Petri

Schmid-Petri is also leading her own research project on trust and AI. The three internal projects started in 2024. The research projects at a glance:

  • “AI in journalism: The influence of generative AI on objectivity and willingness to engage in dialogue in the debate on climate protection” | Headed by Prof. Dr. Hannah Schmid-Petri (bidt / University of Passau) | The project investigates the extent to which AI in journalistic reporting can help increase the willingness to accept messages and promote factual engagement with counter-arguments.
  • “Human-AI co-creation of programme code with different prior knowledge – effects on performance and trust” | Headed by Prof. Dr. Ute Schmid (bidt / University of Bamberg) | This project investigates the co-creation process between humans and AI in the context of creating programme code, combining AI methods with approaches from experimental cognitive research.
  • “Legal uncertainty through generative AI? Reform considerations for the promotion of system trust at universities” | Headed by Prof. Dr. Dirk Heckmann (bidt / TU Munich) | In addition to the legal analysis of higher education and examination law, the project investigates how universities could respond to the use of generative AI in examination practice and where they should adapt.

From AI in elections to AI-aided design – funding for seven research projects from across Bavaria

The seven external research projects were selected as part of the bidt annual call for proposals and will start in the first quarter of 2025. They are all based at Bavarian universities and research institutions and prevailed in a competitive selection process thanks to their excellent quality, topicality and relevance.

We were overwhelmed by the huge response to our annual call for proposals. It shows that our research topic of trust and AI is in tune with the times. We would like to congratulate the projects that stood out from a pool of 147 applications and hope that they will provide valuable impetus and recommendations for action for a digital future that is orientated towards the common good.

Dr. Christoph Egle

These are the external research projects:

  • “Generative Artificial Intelligence in Elections: Uses, Preferences, and Trust” | Prof. Dr. Andreas Jungherr (University of Bamberg) | The researchers are investigating how German political parties use generative AI and what influence this has on election campaigns and public trust. The project is thus located at the interface of AI, democracy and political competition.
  • “Self-regulated and competent interaction with generative AI: Diagnostics and support” | Prof. Dr. Marion Händel (Ansbach University of Applied Sciences) | The project deals with students’ interaction with AI and sheds light on self-regulated learning. It aims to develop diagnostic procedures and support measures that contribute to a reflective approach to generative AI.
  • “Algorithmic representation distortions from the user’s perspective: evaluation, effects, interventions” | Prof. Dr. Markus Appel (University of Würzburg) | Texts and images created with generative AI often present women and minorities in an unbalanced way. The aim of the project is to better understand this phenomenon, its effects and possible countermeasures.
  • “How ‘human’ must justice be? Psychological determinants of trust in co-creation in the context of the legal system” | Prof. Dr. Friederike Funk (LMU Munich) | The project analyses psychological processes that promote or inhibit trust in cooperation with generative AI. The focus is on judicial judgement, the reliability of witness statements and the identification of suspects.
  • “Trustworthy Generative AI Copilots for Data Analytics in Business Decision-Making” | Prof. Dr. Ulrich Gnewuch (University of Passau) | How do users make decisions when interacting with an AI copilot? The project investigates the design of trustworthy generative AI copilots and their influence on the business decisions of users without a technical background.
  • “AI-Aided Design: Generative AI as a Co-Creation Tool” | Prof. Dr.-Ing. Klaus Diepold (TU Munich) | This project investigates how generative AI can be used to support designers in the design of 3D objects. The aim is a co-creative process between software and humans.
  • “Human-Centered Specification-Driven Software Engineering with Generative AI” | Prof. Dr. Albrecht Schmidt (LMU Munich) | The project aims to strengthen the digital sovereignty of individuals and society. It is based on a specification-driven approach to software development without programming skills and is intended to enable broader access to the design of digital technologies.

Two further consortium projects on deepfakes in law enforcement and authoritarian AI

In addition to the projects of the research focus area, two further consortium projects from Bavaria, starting in 2025, were selected as part of the annual call for proposals. The projects will strengthen existing fields of research and expand the bidt’s portfolio of topics.

  • “For the Greater Good? Deepfakes in law enforcement” | Prof. Dr. Lena Kästner (University of Bayreuth), Prof. Dr. Niklas Kühl (University of Bayreuth), Prof. Dr. Christian Rückert (University of Bayreuth), OStA Miriam Margerie (ZAC NRW) | The use of modern AI also holds enormous potential for law enforcement. But under what circumstances and to what extent is the use of deepfakes socially acceptable? The project brings together criminal law and law enforcement, business informatics and philosophy.
  • “Authoritarian AI: How large language models (LLMs) are adapted to Russia’s propaganda” | Prof. Dr. Florian Toepfl (University of Passau), Prof. Dr. Andreas Jungherr (University of Bamberg), Prof. Dr. Florian Lemmerich (University of Passau) | The project raises two questions: How and with what consequences are LLMs developed under strict supervision and censorship in today’s Russia? And what impact does authoritarian data have when it is fed into LLM-supported systems in democracies?

Contact

Press contact

Leonie Liebich

Science Communication Manager, bidt

Scientific contact

Dr. Christoph Egle

Managing Director, bidt