
Digital Humanism: “We do not create human beings – machines are our tools”

Since the beginning of 2023, a new networking project of the bidt on Digital Humanism has gained momentum. We had the opportunity to interview two key figures, Professor Julian Nida-Rümelin and Klaus Staudacher, to delve into the concept of digital humanism and its relevance to ongoing discussions about human-machine interaction in Bavaria and beyond. Discover more about this important research in our interview.

What is meant by the term humanism?

Julian Nida-Rümelin: Humanism has existed as an intellectual movement since antiquity – think of Plato, Aristotle or Stoicism. Humanistic currents also exist beyond Europe, for example Confucianism and Buddhism in East and South Asia. The central question is what constitutes humanity – what is human? In this context, the historical epochs described as humanistic – such as Italian Renaissance humanism or the New Humanism of the 19th century – must be distinguished from humanism as a philosophical, ethical and political conception. In my view, the idea of human authorship stands at the centre of humanist philosophy and practice. People are the authors of their lives; as such, they bear responsibility and are free. In my research, I have tried to systematically reconstruct and renew the philosophical substance of humanism (Nida-Rümelin 2016; 2021; 2023).

To what extent does the principle of human authorship find its way into our legal system?

Julian Nida-Rümelin: The legal systems of the liberal Western states have a humanistic foundation. For example, the Universal Declaration of Human Rights of 1948 enshrines norms and criteria that are supposed to guarantee human authorship. The German Basic Law presents a closely related version of these norms in its first 19 articles. People should be able to shape their lives according to their own ideas, provided they do not hinder others. However, political framework conditions are also needed to give everyone authorship of their own life. This includes an advanced education system and a welfare state.

And how are humanism and digitalisation connected?

Klaus Staudacher: Digital Humanism (Nida-Rümelin/Weidenfeld 2022) sees itself as an ethic for the digital age that interprets and shapes the process of digital transformation according to the core ideas of humanist philosophy and practice. One message is central to this: nothing fundamental changes in the “human condition”. We are not creating a new human being through the use of technology. We use technological tools for human purposes. We remain the same in fundamental patterns, including taking responsibility for our actions.

Can’t machines also take responsibility?

Julian Nida-Rümelin: Indeed, humans bear the responsibility – and they alone. We do not create persons; machines are our tools. Machines do not understand the meaning of concepts. Instead, they reproduce them based on the data they are given. We are all currently extremely impressed with ChatGPT. But the system doesn’t “understand” anything; it recombines information from its sources – a plagiarism machine, as it were.

Does that mean no humanisation of machines?

Julian Nida-Rümelin: Exactly! Historically, the dominant worldview was animistic: the inanimate was regarded as animate. An example: in ancient times, when lightning struck somewhere, it was understood as punishment from Zeus. Today, people react in a similar way to human-like robots. On the one hand, the resemblance to humans creates trust; on the other hand, it is perhaps accompanied by something sinister and by the question: does the machine become a person with all the associated rights? Digital Humanism takes a clear position against this view – no animism, no mystification of things.

But don’t machines sometimes decide, after all?

Klaus Staudacher: Machines don’t decide anything. In the case of “real” decisions, the result is not known in advance – otherwise, there would be nothing to decide. With algorithms, on the other hand, the rules according to which they operate have either been determined in advance by a programmer, or – as in machine learning – they have developed based on input-output specifications. Formulations such as “The spam filter programme has decided that this email is spam” are unproblematic as long as they are meant purely metaphorically.

However, we must be aware that this is only a symbolic and metaphorical use of language. Otherwise, we could be tempted to prematurely attribute abilities to algorithm-based AI that it does not (yet) have. The spam filter programme in the example sentence, for instance, has not “decided” that a particular email qualifies as spam. Rather, the criteria by which an email is assigned to the spam folder were determined beforehand by human actors – during programming and possibly additionally through training.

What are decisions based on?

Julian Nida-Rümelin: Decisions are an expression of the intentions of the acting person. They bring the weighing of reasons for or against an action to a conclusion and are later realised in a suitable action. For this weighing of reasons – each of which provides criteria for a good decision – there is no ethical meta-principle that takes the weighing away from us. The principle of proportionality in law has an analogue in morality: when faced with conflicts, we resolve them, in keeping with human authorship, so that as few moral values and norms as possible have to be restricted.

Klaus Staudacher: And no machine will take this weighing off our shoulders, at least as long as artificial intelligence is based on algorithms. Authorship, as we speak of it, is tied to human agency and personhood.

What is your motivation for implementing the networking project?

Julian Nida-Rümelin: As the originator of the term Digital Humanism, I am pleased that it has become so prominent in international discourse. It has gained tremendous momentum in the last ten years. But as the term spreads, its connotations often shift. One has to take care that its core aspects don’t get lost in the process.

Klaus Staudacher: This is precisely where we come in. We want to strengthen the position of Digital Humanism in Bavaria and Europe. We do this by conducting our own research and contributing impetus to public debates. We want to bring together individual actors across Europe and thus strengthen the discourse. One of our most important goals is to establish a European Research Training Group on Digital Humanism.

Have there already been initial successes?

Klaus Staudacher: For example, the bidt is cooperating with the Vienna University of Technology on the “DigHum Lecture Series”. The online lecture series presents interdisciplinary research positions on Digital Humanism and addresses current developments such as ChatGPT.

Furthermore, we are involved in the open-access publication of an “Introduction to Digital Humanism”, to which the bidt contributes several articles. In addition, together with the TU Vienna and other European cooperation partners, we recently applied for an EU Horizon project, likewise related to the topic area of Digital Humanism. In the future, we want to expand international networking and cooperation with other institutions – there are still many exciting opportunities for development.

Julian Nida-Rümelin: The most recent government declaration by Bavarian Science Minister Markus Blume, of April 26, 2023, states: “We need a new enlightenment: Bavaria will lead the way and take on the topic of Digital Humanism. We want to forge an alliance of competent experts and institutions in Bavaria that will jointly tackle the ethical issues of progress.” The course has thus been set.

Thank you very much for the interview!

The interview was conducted by Nadine Hildebrandt.


Nida-Rümelin, J. (2016). Humanistische Reflexionen. Berlin.

Nida-Rümelin, J. (2021). Per un nuovo umanesimo cosmopolitico. Milano.

Nida-Rümelin, J. (2022). “Über die Verwendung der Begriffe starke & schwache Intelligenz”, in: Chibanguza, K. et al. (Eds.), Künstliche Intelligenz. Recht und Praxis automatisierter und autonomer Systeme. Nomos Verlagsgesellschaft, 75–90.

Nida-Rümelin, J. (2023). A Theory of Practical Reason. Basingstoke, Hampshire.

Nida-Rümelin, J./Weidenfeld, N. (2018). Digitaler Humanismus. München. (English open-access edition 2022).

Nida-Rümelin, J./Weidenfeld, N. (2023). Was kann und darf Künstliche Intelligenz? Ein Plädoyer für Digitalen Humanismus. München. [Revised new edition of the 2018 book].

Nida-Rümelin, J./Staudacher, K. (2023). “Philosophical Foundations of Digital Humanism”, in: Werthner, H. et al. (Eds.), Introduction to Digital Humanism. A Textbook. Springer, Open Access, 17–30.