In her opening speech at the “bidt Conference” 2025, bidt Director Professor Hannah Schmid-Petri highlighted how the use of AI systems can produce both dependency and trust: if you ask ChatGPT for the latest news, for example, you are dependent on information about an event that you usually have not experienced yourself. Moreover, it is usually difficult to understand or control what is happening inside the AI system – yet at the same time there is a desire to be able to trust the result.
From a normative or societal perspective, it would of course be highly desirable for us to achieve all of this: to design AI systems and the framework conditions in such a way that people are empowered to develop an appropriate level of trust – trust that is neither naive nor distrustful, but appropriate.
Prof. Dr. Hannah Schmid-Petri
says Schmid-Petri, head of the research focus “Humans and Generative Artificial Intelligence: Trust in Co-Creation” at bidt. Blind trust in AI is, of course, dangerous. “But groundless mistrust is just as dangerous, as it can hinder technical progress or slow down innovation,” she emphasises.
Federal Research Minister Bär: Promoting understanding of digital change
We need trust in new technologies, trust in secure communication, and trust that information is based on facts and is not fake.
Dorothee Bär, MdB
said Dorothee Bär, Federal Minister of Research, Technology and Space, in her video greeting to the conference. Bär emphasised how important it is for the bidt to address the major issues surrounding digital change and to promote understanding of them. This strengthens the prospects for innovation and points in the same direction as Germany’s high-tech agenda. Trust is also created when technical progress has “clear guard rails” and ethical dimensions are taken into account. “Progress is never just technical,” said the Minister.

Bavaria’s Science Minister Blume: Adopting new technologies through action
In his opening speech, Bavarian Science Minister Markus Blume made it clear that, in his view, the term “digital change” did not adequately describe the dynamics of development; instead, he spoke of the “brutality of change in our digital age”. Trust is necessary – it is fundamental for a stable democracy. He is convinced that trust in new technologies arises through our own actions and endeavours.
It's like driving a car: when you're behind the wheel yourself, you have a different sense of control and safety than when you're just sitting passively in the passenger seat. Only when we understand and master technologies can we develop a fundamental sense of trust.
Markus Blume, MdL
Blume emphasised that the state, science and companies must pull together and look beyond their own sphere of influence – a claim that Bavaria is pursuing with its high-tech agenda and which he also sees in bidt: “It is our digital mastermind, it thinks in an interdisciplinary and networked way – this is where philosopher meets programmer, ethicist meets developer. Thank you very much for that”.

bidt Director Pretschner: Shaping the digital transformation in a socially relevant way
Professor Alexander Pretschner, spokesperson for the bidt Board of Directors, emphasised the importance of design in the digital transformation:
Digitalisation is only successful when new processes are defined based on new data in new business models and new areas of application.
Prof. Dr. Alexander Pretschner
Instead of merely digitalising existing processes, the point is transformation. Trust must therefore be created in contexts “that we do not yet know”. The bidt, a research institute that builds understanding of the changes currently taking place and seeks to make the debate more objective, also helps to shape digitalisation in practice. One example is the AI assistance system OneTutor, which students can use to anonymously ask questions about the content of a lecture and to quiz themselves. It also serves as a feedback channel that helps lecturers recognise and respond to students’ needs. More than 16,000 students now use the AI platform, whose effectiveness is being analysed at bidt.
The bidt Digital Barometer 2025
A comprehensive study on digital change is the “Digital Barometer” regularly conducted by the bidt. In the presentation of the new results, study director Dr Roland A. Stürz began by emphasising the pronounced openness to technology among the population in Germany. At the same time, the study reveals clear differences in skills – depending on age, education and income. The report thus paints a differentiated picture of the digital transformation and provides starting points for customised support and further education programmes.
Nida-Rümelin on AI: humans remain responsible
bidt Director Professor Julian Nida-Rümelin clarified the concept of trust from a philosophical perspective, worked out the difference between humans and AI systems, and concluded that as long as AI lacks personal characteristics, genuine trust in it is not justified. Seeing AI as a cooperation partner would, in his view, also be problematic – because what would follow from that? Equal rights, or even a right to life? Ultimately, humans remain responsible and should use AI as a tool for human purposes.
What remains? The technological, economic, political and cultural responsibility of humans to develop and use artificial intelligence in such a way that its behaviour does not jeopardise certain fundamental human values, such as informational self-determination, personal autonomy, the ability to cooperate with others, equal treatment, consideration and, perhaps most importantly, the ability to shape human living conditions using these highly complex, dynamic and fascinating tools – digital technology in general and artificial intelligence in particular.
Prof. Dr. Dr. h.c. Julian Nida-Rümelin
The philosopher places demands on the “artificial intelligence” tool such as predictability, transparency, privacy, interpretability, reliability, security and safety – without mysticism, without personalising technical products, while recognising that humans alone ultimately bear responsibility. “Without this conviction, I am sceptical that this technological dynamic will end well,” said Nida-Rümelin. “We must not disempower ourselves.”
From the judiciary to the advertising industry: effects on society and work
Other presentations and panel discussions addressed the use of AI, and the challenges it poses, in areas of society such as justice, medicine, media, education and working life, including the creative industries. The justice system proved a particularly complex and sensitive field. “AI in the judiciary should be used with caution,” said Dr Anke Morsch, President of the Saarland Fiscal Court and Chair of the German IT Court Conference, in the panel on the first evening of the conference. In the judiciary, the use of AI is being examined and already trialled in some cases – for example, to structure files or to generate text modules. Overall, however, this is done with great caution, as reliability and accuracy have the highest priority in this field. Ultimately, it is a human being who decides.
Participants from the publishing and creative industries discussed in very concrete and practical terms how AI is changing their work. Verlag Oberfranken, which prioritises content accuracy and customer trust, uses AI tools “primarily to support production”, as CEO Eva-Maria Bauch said. This frees up more time for creativity.
Götz Ulmer from the advertising agency David Martin noted that AI-generated images have a similar look and that creativity involves being different. The task is therefore: “How do I tell a story in such an exciting and new way that it is relevant again?” Ulmer conceded: “Most of the time, people just want the average – and the machine will be able to deliver that.”
Burchardt: Social change towards more humanity?
“Do we want to trust AI? Or do we actually want to control AI?” asked Dr Aljoscha Burchardt from the German Research Center for Artificial Intelligence (DFKI). For Burchardt, the word “trust” doesn’t really apply to interaction with AI. Instead of humans versus AI, he focusses on humans and AI together as a “socio-technical system”, for example a doctor and a machine.
For me, the socio-technical system is the bigger question than trustworthy AI – trustworthy ecosystem, if you will.
Dr. Aljoscha Burchardt
Thinking ahead to the crossroads of how we want to live with – or without – AI, he sketched the “socially romantic” idea of leaving the hamster wheel of everyday life behind, reflecting on values and finding more humanity in society.
Saving the best for last: the bidt conference in memes
Journalist Dirk von Gehlen (SZ Institute) and lawyer Fay Carathanassis (bidt) concluded the two days of the conference with a “meme collage”. Memes are shared images, GIFs or short videos with brief text that humorously summarise moods and create points of reference – von Gehlen described them as “the catchy tunes of the internet”. Using motifs familiar from meme culture, the two speakers summarised the organisation and content of the “bidt Conference” 2025 in line with the leitmotif “Really? Trust in digital change”. An entertaining conclusion that showed how digital culture makes complex topics accessible and can create trust through shared codes.
Conclusion: “Really? Trust in digital change”
The “bidt Conference” 2025 presented a broad spectrum of formats and perspectives – from scientific keynotes and panels to practical impulses and an interactive supporting programme. Across all contributions, it became clear that artificial intelligence is not an end in itself, but a tool for co-creation. It expands the scope for action, but does not replace the professional judgement, responsibility and creative power of humans. Trust in AI arises where transparency, governance and competences come together. The conference thus pointed to concrete guidelines: digital transformation succeeds when people and AI work and create together – with clear goals, reliable framework conditions and a culture of reflective use.
Picture gallery
© bidt/Klaus D. Wolf