
Research focus: Humans and Generative Artificial Intelligence: Trust in Co-Creation
The research focus “Humans and Generative Artificial Intelligence: Trust in Co-Creation” examines the conditions under which people trust, or could trust, interactions with generative AI and the products that result from them – and when they do not.
Research focus: Generative AI
Generative artificial intelligence (AI) refers to technical systems that can autonomously create texts and multimedia content in response to specific inputs. Generative AI has the potential to transform our lives, from art and software engineering to work processes, job profiles, medicine, education and science. At the same time, generative AI presents us with a variety of challenges, for example with regard to the quality or truthfulness of the products created.
When generative AI is used as an assistance system, humans and technology have three potential points of contact. They can produce something together, creating images and texts, for example. They can interact with AI and create value through the interaction itself, e.g. in therapy or teaching. And when people consume or use products created by AI, the focus is on reception. In all three cases, we regard trust as essential to the success of these endeavours.
Announcements
Prof. Dr. Ute Schmid and Sonja Niemann at the Dresden Symposium “The Answering Machine” (25-26 March 2026)
Prof. Dr. Ute Schmid will deliver a keynote at the Dresden Symposium “The Answering Machine”, focusing on the requirements for human–AI alignment in joint decision‑making and problem‑solving processes. Her talk will highlight approaches to more human‑centered explainability as well as methods designed to strengthen human agency and oversight in collaborative AI systems.
At the interactive AI Playground, Sonja Niemann will present a prototype of a tutoring system with an LLM interface that supports students in learning programming. The prototype is being developed within the joint project pAIrProg, currently with a focus on recursion and code quality. Further information.
Prof. Dr. Marion Händel on young people’s media usage – interview with XPLR: MEDIA Magazine
In an interview with XPLR: MEDIA, Marion Händel (Professor of Media Psychology at Ansbach University of Applied Sciences) explains how the media landscape is changing and how these developments affect the “Generation Digital.” She outlines why young people increasingly consume content via social media and what they expect in terms of credibility and relevance. The discussion also highlights the importance of media literacy when dealing with digital and AI‑generated information. Read interview (in German).
bidt workshop on the topic of “trust” on 3 February 2026
As part of the bidt research focus “Humans and Generative AI: Trust in Co‑Creation”, we explored in an internal workshop what “trust” actually means in the context of AI – and why interpersonal trust serves more as a heuristic starting point than a blueprint. While we can attribute intentionality, responsibility and benevolence to human actors, “trust” in AI is often reduced to the reliability of outputs and the practical act of relying on them in decision‑making. A key takeaway remains: trust should be proportionate to a system’s (perceived) trustworthiness.
More details can be found in the full LinkedIn post.
Dr. Nick Naujoks-Schober wins the first Ansbach Science Slam
At the first Science Slam at the Ansbach Kammerspiele, where researchers from Ansbach University of Applied Sciences presented their work in creative formats, Dr. Nick Naujoks‑Schober impressed with a humorous performance on learning styles with ukulele and vocals.
Read article (in German).
Contact
Prof. Dr. Hannah Schmid-Petri
Member of bidt's Board of Directors | Chair of Science Communication, University of Passau
Internal Research Projects
Human-AI co-creation of code with different prior knowledge

The interdisciplinary project “Human-AI co-creation of code with different prior knowledge: Effects on performance and trust” explores how humans and AI co-create program code. The focus is on the design of trustworthy interfaces for the use of code generators in programming education and professional software development.
Project team
Prof. Dr. Ute Schmid
Member of bidt's Board of Directors and the Executive Committee | Member of the Bavarian AI Council | Head of Cognitive Systems Group, University of Bamberg
AI in Journalism: The Impact of Generative AI on Objectivity and Dialogic Openness in Climate Debates

The project “AI in Journalism: The Impact of Generative AI on Objectivity and Dialogic Openness in Climate Debates” investigates the extent to which AI can increase the willingness to accept messages about climate protection and promote factual engagement with counter-arguments.
Project team
Prof. Dr. Hannah Schmid-Petri
Member of bidt's Board of Directors | Chair of Science Communication, University of Passau
Legal uncertainty through generative AI? Reform considerations for the promotion of system trust at universities

In addition to the legal analysis of higher education and examination law, the project “Legal uncertainty through generative AI? Reform considerations for the promotion of system trust at universities” also addresses the question of how universities could or should react and adapt in practice.
Project team
Prof. Dr. Dirk Heckmann
Member of bidt's Board of Directors | Chair of Law and Security in Digital Transformation, Technical University of Munich

