Like all technologies of the past, digital technologies are ambivalent. The digital transformation will not automatically humanise our living conditions; it depends on how we use and develop this technology. Digital humanism argues for an instrumental approach to digital technologies: What can be economically, socially and culturally useful, and where do potential dangers lurk? Digital humanism has a deep philosophical dimension (see Nida-Rümelin 2018), namely the special position of humans as authors of their lives guided by reasons, and hence its opposition to extending authorship to machines. It sees itself as an ethic for the digital age that interprets, accompanies and shapes the process of digital transformation in accordance with the core ideas of humanist philosophy.
But what are the core ideas of humanism?
The term humanism has a wide range of meanings. When we talk about humanism here, it is not in the sense of a historical epoch, such as early Italian humanism (Petrarch), German humanism in the 15th and 16th centuries (Erasmus) and finally neo-humanism in the 19th century. It is also not a specifically Western or European cultural phenomenon, as humanist thinking and behaviour also exist in other cultures. We understand humanism to mean a certain idea of what it means to be human, combined with a practice that fulfils this humanistic ideal as far as possible. No elaborate humanistic philosophy is required to realise a humanistic practice.
At the centre of humanist philosophy and humanist practice is the idea of human authorship: people are the authors of their lives; as such, they bear responsibility and are free. Freedom and responsibility are two mutually dependent aspects of human authorship. Responsibility and freedom are in turn linked to the ability to reason. Reason can be characterised as the ability to deliberate appropriately on the reasons that speak for or against certain actions, beliefs and attitudes. Freedom is then the possibility of following precisely those reasons that are deemed better in such a process of deliberation. If I am free, it is therefore my reasons, determined through deliberation, that lead me to judge and act in one way or another. Responsibility also presupposes a certain degree of autonomy: people are not mere recipients of orders and do not simply fulfil goals set for them externally; by virtue of their ability to reason, they are at least fundamentally capable of questioning the meaningfulness of such goals and of setting themselves overriding goals. This triad of reason, freedom and responsibility spans a cluster of normative concepts that defines the humanist understanding of the human condition and has, in a lengthy cultural process, shaped both everyday morality and the legal system over centuries.
The core idea of humanist philosophy and practice, human authorship, can therefore be characterised by the way in which we ascribe responsibility to each other and treat each other as rational and, at least in principle, free and autonomous beings. Humanism thus clearly rejects a mechanistic paradigm according to which the human being is nothing more than a complex machine whose behaviour is completely determined. It contrasts this paradigm with the image of the human being as a fundamentally self-determined agent acting both alone and collectively. In line with this view of humanity, humanism sees it as a goal and task to promote human judgement and decision-making skills through suitable measures in order to strengthen individual and collective autonomy.
Human authorship can also be related to the principle of human dignity, which occupies a prominent position in the German Basic Law and serves as the basis for the validity of human and fundamental rights. Where people are deliberately deprived by others of any possibility of being the author of their lives, so that they are no longer capable of any autonomous, i.e. self-determined, action, their human dignity is also violated – and interventions in human and fundamental rights are always also impairments of the possibilities for the development of human authorship. The constitutional justification requirements for encroachments on fundamental rights are therefore also relevant from a humanistic perspective. In particular, utilitarian considerations based purely on optimising consequences are not compatible with a humanistic approach.
What does this understanding of humanistic philosophy mean for digital humanism, i.e. for an interpretation and design of the process of digital transformation based on humanistic principles? In ethical and philosophical terms, the following principles and demands can be formulated:
Rejection of the animistic paradigm (“machines are (like) people”)
From a humanistic perspective, the animistic paradigm should be rejected just as much as the mechanistic one: even if it cannot be ruled out that at some point in the distant future there could be AI systems that possess reason, freedom and autonomy in a way similar to us humans, this is definitely not the case with today’s AI. Even programs such as the generative AI ChatGPT or the self-learning program AlphaZero, impressive as their performance is, merely fulfil their externally defined goals, albeit very efficiently. They cannot question the content of these goals or set themselves overriding goals and, unlike us humans, they have no intentional states (propositional attitudes such as beliefs, wishes, intentions, expectations, hopes).
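To make this point concrete, the following minimal sketch (in Python; the objective and all names are purely hypothetical and not taken from the text) shows a system that only ever optimises an externally fixed objective: the reward function is supplied from outside, and nothing in the optimisation loop allows the system to question or replace it.

```python
# Minimal, purely illustrative sketch: an "agent" that only optimises an
# externally fixed objective. The reward function is supplied by the
# programmer; nothing inside the loop can question or replace it.

def externally_defined_reward(state: int) -> float:
    # Goal set from outside: get the state as close to 42 as possible.
    return -abs(state - 42)

def optimise(start: int, steps: int = 100) -> int:
    state = start
    for _ in range(steps):
        # The system merely compares candidate moves against the given reward.
        candidates = [state - 1, state, state + 1]
        state = max(candidates, key=externally_defined_reward)
    return state

print(optimise(0))  # -> 42: the goal is reached, but it was never the system's own goal
```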
Responsibility still lies with humans and not with machines
Since even complex AI systems lack the reason, freedom and autonomy required for the attribution of responsibility, the concept of an e-person, which regards machines as accountable actors that can be held responsible for their behaviour, must be rejected. Digital humanism holds fast to the human conditions of responsible practice. It demands that attributions of responsibility be extended to the communications and interactions mediated by digital technologies, and it does not allow the actual actors (and that is us humans) to shift their responsibility onto the supposed autonomy of digital machines.
No ethically relevant decisions by AI
Ethically relevant decisions, such as those that may arise in the case of autonomous driving, must never be made (solely) by algorithmically functioning AI, because:
- algorithmically functioning AI cannot decide anything. In the case of “real” decisions, the outcome is not known from the outset, because otherwise there would be nothing left to decide. With algorithms, on the other hand, the rules according to which they operate have either been defined in advance by a programmer or have been derived from input-output specifications, as is the case with machine learning (illustrated in the sketch after this list).
- the algorithm-inherent and consequentialist optimisation function (i.e. aimed at bringing about the best consequences in accordance with the programme specifications) is neither compatible with human dignity nor with the framework conditions set by fundamental rights in liberal constitutions.
- insofar as one pursues the approach of anticipating, when programming an algorithm, all circumstances relevant to each individual case, this fails to do justice to the complexity and context-sensitivity of ethical decision-making situations.
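The distinction drawn in the first bullet point can be illustrated with a minimal, purely hypothetical sketch: whether a rule is written by hand or derived from input-output examples, the mapping from situation to output is fixed before any individual case arises, so nothing remains open for the system itself to decide.

```python
# Illustrative sketch only: in both variants the mapping from input to output
# is fixed in advance, either by the programmer or by the training data.

# Variant 1: rule defined in advance by a programmer.
def brake_rule(distance_m: float) -> str:
    return "brake" if distance_m < 10.0 else "continue"

# Variant 2: a "rule" derived from input-output specifications (a crude
# stand-in for machine learning): the threshold is estimated from examples.
examples = [(2.0, "brake"), (5.0, "brake"), (20.0, "continue"), (30.0, "continue")]

def learn_threshold(data):
    brake_max = max(d for d, label in data if label == "brake")
    cont_min = min(d for d, label in data if label == "continue")
    return (brake_max + cont_min) / 2.0  # midpoint between the two classes

threshold = learn_threshold(examples)

def learned_rule(distance_m: float) -> str:
    return "brake" if distance_m < threshold else "continue"

# For any given input, the output of both variants is already determined;
# nothing remains that the system itself could still "decide".
print(brake_rule(7.0), learned_rule(7.0))  # -> brake brake
```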
The example of autonomous driving just cited stands here only for a general problem of software-controlled behavioural programmes. It is particularly illustrative in that a large number of complex interaction situations occur under current road traffic conditions, at least in city centres. Digital humanism recommends the consistent, well-considered use of all the potential of digital technologies to improve the protection of life and health in road traffic. At the same time, however, it warns against the inhumane consequences of an optimisation calculus in which human life is pitted against human life, human life against health, the health of one against the health of the other, individual rights against individual rights.
Digital sovereignty
The concept of digital sovereignty can be applied not only to states and companies, but also to human individuals. In order to preserve and enable human authorship in the process of digital transformation, individual digital sovereignty is required. Individual digital sovereignty is protected to a certain extent by the right to informational self-determination derived by the Federal Constitutional Court from the general right of personality under Article 2 (1) of the Basic Law in conjunction with Article 1 of the Basic Law, because interventions in the right to informational self-determination are always also interventions in individual digital sovereignty. However, as this is not only about the individual’s power of disposal over their own personal data, but also generally about the self-determined use of digital applications, this also results in demands that are not covered by the right to informational self-determination.
Furthermore, individual digital sovereignty requires:
Internet access for everyone
The development of the World Wide Web has already reached such an advanced stage for large regions of global society that exclusion from Internet communication can hinder citizens in exercising their fundamental rights. These include in particular the fundamental rights to freedom of information and freedom of expression, freedom of assembly, freedom of association and the right to education, all of which are also protected online. The right to Internet access should therefore have the status of a constitutionally guaranteed fundamental right, whether as an independent fundamental right or derived from the aforementioned rights, for whose online exercise Internet access is a necessary prerequisite.
Education: Teaching of digital skills (digital literacy)
While non-digital research is often associated with time-consuming changes of location, free Internet access enables direct access to a previously unknown wealth and variety of content (stories, theories, theses, interpretations, ideologies) in image and text form. It is a core task of the state and society to provide all citizens not only with the skills required to access this content effectively, but also with the ability to distinguish trustworthy from unreliable information and thus also with the most comprehensive knowledge possible about how content is created online.
However, the use of digital tools is not limited to the area of research, but also concerns forms of communication and participation, both with regard to purely private, personal or professional or business contacts and in the relationship between the state and citizens. Here, too, the aim must be to enable every citizen, as far as possible, to master the necessary basic digital technologies.
No compulsion for digital participation
As digital participation can make communication easier, it should initially be seen as a positive phenomenon and should be possible for anyone who wants this type of participation and involvement. The question, however, is whether there should also be areas in which participation is either possible only digitally, although this is not strictly necessary (such as the use of social platforms), or in which digital participation at least involves less effort and lower costs than corresponding non-digital forms of participation. From the perspective of digital humanism, this is problematic, because it should also be part of each person’s individual digital sovereignty to decide freely on the scope and type of their digital participation and to be able to forgo it completely without being unfairly disadvantaged. In other words:
Whenever participation is also possible by non-digital means, this option of non-digital access should in principle continue to be open to every person. And in areas where non-digital participation is not technically feasible or no longer appears justifiable for cost reasons, general individual digital sovereignty can only be achieved if digital skills are taught to ensure that every person who is fundamentally able to do so can learn the skills required for the respective form of digital participation, regardless of their social status.
Transparent communication
The humanistic approach is also important for our communication behaviour. We act differently depending on whether we assume that our counterpart is a machine or a human being. As a rule, we regard only humans as fully accountable and therefore assume that they can control their behaviour intentionally and are at least fundamentally capable of understanding the content that they communicate and that is communicated to them. Virtual identities, for example in the form of chatbots, are by contrast controlled by algorithms; they have no intentions and can therefore neither make decisions nor communicate. We therefore have a right to know whether we are dealing with a human or a machine.
Letters that are generated by software without any personal control and that feign correspondence from an employee who does not even exist must therefore not be sent. Transparency also includes communicating responsibilities to the outside world and ensuring reliable personal contact between companies and customers.
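What such transparency could look like in practice may be indicated by a purely hypothetical sketch (all names and fields are invented for illustration and not taken from the text): automated messages are explicitly labelled as automated and always name a real, responsible human contact.

```python
# Illustrative sketch only (all names hypothetical): automated messages are
# labelled as such and carry a real, responsible human contact instead of
# feigning a personal letter from a non-existent employee.

from dataclasses import dataclass

@dataclass
class OutboundMessage:
    body: str
    automated: bool            # must be disclosed to the recipient
    responsible_contact: str   # an actual, reachable person

    def render(self) -> str:
        origin = ("This message was generated automatically."
                  if self.automated
                  else "This message was written personally.")
        return f"{self.body}\n\n{origin}\nResponsible contact: {self.responsible_contact}"

msg = OutboundMessage(
    body="Your contract documents are attached.",
    automated=True,
    responsible_contact="Customer service, Ms Example",
)
print(msg.render())
```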
Democracy
E-participation (electronic participation), i.e. citizen participation in political decision-making processes through internet-based procedures, is to be expressly welcomed as a further development of traditional citizen participation procedures from the perspective of digital humanism. However, the advancing process of digitalisation also makes the realisation of a completely direct democracy, in which all political decisions are made directly by the people without delegation, appear technically and organisationally possible even in large and populous territorial states. This could tempt us to see majority decisions, which are determined in general, direct, free, equal and secret elections and votes, as the only essential feature of democratic decision-making processes. And indeed, democracy as the rule of the people also stands for the form of government in which the people rule themselves.
However, democracy – at least according to the liberal understanding that prevails in Western democracies – is much more than that. For example, the principle of the separation of powers applies. Furthermore, democracy is also the form of government that is geared towards realising human rights; as fundamental rights incorporated into the constitution, they provide the framework within which majority decisions and considerations of the common good are permissible. This human rights constitution of democracy is of outstanding importance from a humanist perspective, because, as we have already seen, human rights protect and enable the development of human authorship.
Furthermore, majority decisions taken by the legislature are often preceded by expert hearings in parliamentary committees and a process of public deliberation in which not (only) different points of view are put forward, but in which (at least also) truth claims are used to argue in favour of one’s own position.
These essential characteristics of liberal democracies must also be retained in principle when digital participation options are used, as they serve the humanistic goal of strengthening the ability to make judgements and decisions and thus individual and collective autonomy. Digital information and decision-making technologies should therefore be used as a supplement to parliamentary, representative democracies based on the rule of law – as a support, not a replacement. Even if there are no experts and no definitive criteria for making the right political decisions in advance, appropriate political decisions require expertise at all levels (i.e. not only among politicians, but also among citizens who participate through elections, votes and political engagement). On the one hand, the internet can be a good source of information here. On the other hand, algorithmic processes that prioritise and promote messages on the basis of click counts and other user reactions, regardless of their truthfulness, pose major risks for democracy. This further exacerbates the already existing danger of political decisions being made on the basis of sentiment rather than facts.
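The risk described here can be made concrete with a purely illustrative sketch (the fields and weights are hypothetical and not drawn from any particular platform): a ranking rule in which only clicks and other user reactions count, while truthfulness never enters the score.

```python
# Illustrative sketch (hypothetical fields and weights): an engagement-based
# ranking in which truthfulness plays no role at all - exactly the property
# criticised above.

posts = [
    {"text": "Sober fact-check of a claim", "clicks": 120, "shares": 4,   "comments": 10},
    {"text": "Outrage-bait rumour",         "clicks": 900, "shares": 300, "comments": 450},
]

def engagement_score(post: dict) -> float:
    # Only user reactions count; whether the content is true never enters the score.
    return post["clicks"] + 5 * post["shares"] + 3 * post["comments"]

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(engagement_score(post), post["text"])
```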
Concluding remarks
Digital humanism therefore aims to utilise the potential of the digital transformation for a more humane and fairer future for humanity, in line with the humanistic concept of a fundamentally autonomous and free actor. Digital applications must not restrict the possibilities for developing human authorship, but rather expand them and relieve people of time-consuming, purely schematic calculation and listing work. The use of digital technologies must not lead to social distortions.
From a philosophical point of view, digital humanism is directed against what can be described, somewhat simplistically, as the Silicon Valley ideology. Its key concept is that of artificial intelligence, charged with implicit metaphysics and theology: a self-improving, hyper-rational, increasingly animate system whose creator is not God but software engineers, who see themselves not merely as part of an industry but of an overarching movement realising a digital paradise on earth. Digital humanism rejects such a “Homo Deus” conception of the human being as a god-like creator, as well as transhumanist utopias and neo-animist views of software agents. Instead, it wants to contribute to a specifically European path of digitalisation that respects individual autonomy and dignity and opens up new possibilities for a self-determined life.
Literature
Nida-Rümelin, J. (2018). Humanistische Reflexionen. 2nd ed. Berlin.
Nida-Rümelin, J. (2022). Über die Verwendung der Begriffe starke & schwache Intelligenz. In: Chibanguza, K. et al. (Eds.). Künstliche Intelligenz. Recht und Praxis automatisierter und autonomer Systeme. Baden-Baden.
Nida-Rümelin, J. (2022). Digital Humanism and the Limits of Artificial Intelligence. In: Werthner, H. et al. (Eds.). Perspectives on Digital Humanism. 71–75. Open Access. https://link.springer.com/book/10.1007/978-3-030-86144-5
Nida-Rümelin, J./Staudacher, K. (2020). Philosophische Überlegungen zur Verantwortung von KI. Eine Ablehnung des Konzepts der E-Person. bidt Working Paper. Open Access. https://publikationen.badw.de/de/047053353/pdf/CC%20BY
Nida-Rümelin, J./Staudacher, K. (2023). Philosophical Foundations of Digital Humanism. In: Werthner, H. et al. (Eds.). Introduction to Digital Humanism. A Textbook. 17–30. Open Access. https://link.springer.com/book/10.1007/978-3-031-45304-5
Nida-Rümelin, J./Weidenfeld, N. (2018). Digitaler Humanismus. Eine Ethik für das Zeitalter der künstlichen Intelligenz. München.
Nida-Rümelin, J./Weidenfeld, N. (2022). Digital Humanism. For a Humane Transformation of Democracy, Economy and Culture. Open Access. https://link.springer.com/book/10.1007/978-3-031-12482-2
Prem, E. (2024). Principles of digital humanism: a critical post-humanist view. Journal of Responsible Technology 17. Open Access. https://doi.org/10.1016/j.jrt.2024.100075
Werthner, H. (2023). Digital Transformation, Digital Humanism: What needs to be done. In: Werthner, H. et al. (Eds.). Introduction to Digital Humanism. A Textbook. 115–132. Open Access. https://link.springer.com/book/10.1007/978-3-031-45304-5
Werthner, H. et al. (2019). Vienna Manifesto on Digital Humanism. Open Access. https://dighum.ec.tuwien.ac.at/wp-content/uploads/2019/05/manifesto.pdf [27.05.2024]
Werthner, H. et al. (Eds.) (2022). Perspectives on Digital Humanism. Open Access. https://link.springer.com/book/10.1007/978-3-030-86144-5