
Philosophical Reflections on AI Responsibility: A Rejection of the Concept of the E-Person

Klaus Staudacher | bidt
Prof. Dr. Dr. h.c. Julian Nida-Rümelin | Professor emeritus of Philosophy and Political Theory | Ludwig-Maximilians-Universität in Munich

Who is responsible if damage occurs through the use of artificial intelligence? Can machines bear responsibility at all? This working paper aims to show that responsibility requires a degree of reason, freedom and autonomy that even complex AI systems will not have in the foreseeable future.

The behaviour of such machines cannot be comprehensively predicted and controlled. In the event of damage, it may be impossible to determine which of the human actors involved made a mistake. This raises the question of whether, in such cases, the damage should also be attributed to the machine itself, for example by introducing the legal category of an “electronic person”.

However, there are fundamental reservations about the concept of an e-person and, more generally, about this way of attributing responsibility.

The working paper develops its own conception of responsibility. In addition, it engages with the position of Andreas Matthias, who argues for ascribing responsibility to certain forms of AI in his book “Automata as Bearers of Rights”.

The main points in brief

Against the background of ever greater progress in the design and development of complex AI systems, lawyers and philosophers have for some time been asking at what degree of autonomy, or at least independence, machines can or should be held morally and, above all, legally responsible for their behaviour. While some contributions on this topic explicitly refer to science-fiction scenarios (known from literature or film), other authors discuss the legal status of AI systems that already exist or will exist in the future with regard to their civil and even criminal liability. Furthermore, with regard to machines whose behaviour cannot be comprehensively predicted and controlled, the problem of the so-called “responsibility gap” is also discussed.

Such a gap can occur in the use of complex AI systems if, in the event of damage, it cannot be clarified which of the human actors involved (researchers, designers, programmers, trainers, operators) made a mistake. The question therefore arises whether, in such cases, the damage should also be attributed to the machine itself. In this sense, the European Parliament has called for the introduction of the legal category of an “electronic person”, at least for the “most sophisticated autonomous robots”. Such an e-person would be “responsible for compensating any damage caused by it”, with the settlement of damages made possible by a liability fund endowed by “manufacturers, programmers, owners and users”. Even if a suitable insurance system were to ensure that victims could be adequately compensated in this way, there are fundamental reservations about the construct of an e-person and, more generally, about this kind of attribution of responsibility.

As this paper shows, responsibility presupposes a degree of reason, freedom and, relatedly, autonomy that even complex AI systems will not attain in the foreseeable future. At the same time, it should be made clear that gloomy ‘terminator’ forecasts about the dangers posed by AI are unfounded. This does not mean that the use of AI and, more generally, the advancing process of digitalisation carry no risks; but these risks do not consist in machines aspiring to dominate or even destroy humanity.