
Building trust, bridging perspectives: Retreat of the research focus group “Generative AI: Trust in Co-Creation”

How can trust in generative AI be explored through interdisciplinary research? At the closed meeting of the bidt research focus “Generative AI: Trust in Co-Creation”, all ten projects discussed their latest interim findings as well as further joint projects and activities.


How can trust in generative AI be defined in a way that remains accessible to different disciplines? And what common research questions arise when projects from communication studies, computer science, legal science, psychology and business informatics come together? This interdisciplinary exchange was at the heart of the closed meeting of the bidt research focus “Generative AI: Trust in Co-Creation” on 16th and 17th March 2026. All ten projects within the research focus came together to refine their perspectives on trust in human-AI interaction, discuss their latest findings and future plans, and jointly develop the next steps. The research focus examines, from various disciplinary perspectives, under what conditions people can trust generative AI and the resulting products – and when they cannot.

Refining concepts, connecting perspectives

Following the welcome address by the research coordinators Dr. Maria Staudte and Dr. Niina Zuber, the discussion initially focused on the concept of trust, which is relevant across all projects. The starting point was the question of what is actually meant when researchers speak of “trust”. During the discussion, it became clear that trust is understood in different ways: as a mental state, as a relational and normative practice, as an existential-ontological orientation, or as trust in institutions. This conceptual clarification forms the basis for a shared understanding of trust in human-AI co-creation. It builds on the work already carried out within the research focus on a common, interdisciplinary trust model.

Striking the right balance between trust and trustworthiness

The discussion then turned to the issue of “trust calibration”: when and to what extent is it appropriate to place trust in generative AI? The “right” level of trust in specific contexts is key here: excessive trust can be risky, as it may lead people to rely too hastily on systems and their outputs. Too little trust, on the other hand, can hinder or prevent the meaningful use of generative AI. The consensus was that trust should therefore be proportionate to a system’s actual reliability and the potential risks associated with its use. The retreat provided a space not only to explore these considerations in depth from a theoretical perspective, but also to reflect them in the projects’ very diverse research topics.

bidt research coordinators Dr. Maria Staudte and Dr. Niina Zuber (from left to right).

A range of perspectives on trust in generative AI

The bulk of the retreat was therefore devoted to project presentations. All ten projects presented their current findings, outlined their next milestones and discussed open questions. It was precisely this overview that highlighted the breadth of the research focus: the projects range from AI in journalism and political competition, through issues of higher education law, programme code co-creation and specification-driven software development, to self-regulated learning, algorithmic biases, questions of justice in the legal system, trustworthy AI co-pilots for data-driven decisions, and generative AI as a design tool. It thus became clear that trust in generative AI is not an abstract, isolated topic, but takes different concrete forms depending on the application, usage context and institutional framework.


Shared perspectives for the bidt Digital Transformation Research Conference

At the same time, attention turned to the inaugural bidt Digital Transformation Research Conference, due to take place on 19th and 20th November. Initial ideas for potential joint sub-sessions highlighted how topics from the research focus area could also be integrated into the broader research debate within the bidt.

The retreat highlighted what sets this research focus apart: the close integration of different disciplinary perspectives in addressing a shared research question of great societal relevance. It became particularly clear during discussions on concepts, methods and contexts of application that trust in generative AI can only be properly investigated if technical, social, legal and normative dimensions are considered together. The meeting provided an important forum for this and, at the same time, gave fresh impetus to further collaboration within the research focus.

Gallery

Day 1 of the closed meeting in the north wing of the bidt.
Research Coordinator Dr. Niina Zuber.
Interdisciplinary Game.
Day 2 of the meeting at the BAdW.