
Nodes in the knowledge map

Discipline

Ethics

Ethical decisions of autonomous vehicles

Reading time: 4 min.

AI systems are becoming increasingly autonomous and are able to perform specific tasks independently, without direct human intervention. One example is autonomous vehicles, which, from SAE automation level 3 onwards, can take over certain driving functions within a defined operational domain without the involvement of a human driver. Autonomous vehicles make decisions such as how much distance they keep to other road users, when they brake or, in extreme cases, with whom they collide in unavoidable accident situations. Although the introduction of autonomous vehicles can generally be expected to improve road safety, experience has shown that accidents involving autonomous vehicles still occur and can require life-or-death decisions. Yet even routine decisions about the required minimum distance or the braking behaviour of an autonomous vehicle have normative significance, as they determine how much risk is imposed on which individual. For example, if an autonomous vehicle reduces the distance to a cyclist, the probability of a collision increases and, with it, the expected harm to that cyclist. The activities and driving behaviour of autonomous vehicles therefore involve complex ethical decisions that ultimately determine, implicitly, what a fair distribution of risk in road traffic means.
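The risk relation sketched above, risk as the probability of a collision multiplied by the expected harm, can be illustrated with a minimal, purely hypothetical calculation. The exponential probability model and the severity value below are illustrative assumptions, not measured data:

```python
import math

def collision_probability(lateral_distance_m: float) -> float:
    """Hypothetical model: collision probability decays
    exponentially with the lateral passing distance."""
    return math.exp(-2.0 * lateral_distance_m)

def expected_risk(lateral_distance_m: float, severity: float) -> float:
    """Expected harm = probability of a collision x harm if it occurs."""
    return collision_probability(lateral_distance_m) * severity

# Illustrative, unitless harm score for a cyclist (assumed value)
CYCLIST_SEVERITY = 10.0

wide_pass = expected_risk(1.5, CYCLIST_SEVERITY)   # generous distance
close_pass = expected_risk(0.5, CYCLIST_SEVERITY)  # reduced distance

# Reducing the distance raises the risk imposed on the cyclist
assert close_pass > wide_pass
```

Under these assumptions, the vehicle's choice of passing distance directly sets how much expected harm is imposed on the cyclist, which is exactly why such routine parameters carry normative weight.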

The question is what exactly constitutes a fair distribution of risk in road traffic and how corresponding ethical considerations can be embedded in the trajectory planning of autonomous vehicles. Are car manufacturers obliged to programme their autonomous vehicles so that the safety of passengers is prioritised? Should vulnerable groups such as pedestrians and cyclists be given special protection first and foremost? Is it permissible to take personal characteristics such as the age of road users into account when autonomous vehicles make decisions? How can abstract values such as fairness or safety be effectively operationalised and integrated into AI systems? These are just some of the questions and challenges that the research field of machine ethics is grappling with. Companies must ultimately programme the principles and ethical considerations underlying such decisions in advance to ensure that these aspects are explicitly taken into account in the decision-making of autonomous systems. The overarching goal of machine ethics is therefore, among other things, to develop AI systems that can process moral information, weigh up alternative courses of action (based on ethical principles) and make appropriate decisions. With regard to autonomous vehicles, ethical guidelines and laws (e.g. BMJ 2023; Lütge 2017) have already been adopted that stipulate the protection of third parties and vulnerable road users and prohibit decisions based on personal characteristics. Guidelines of this kind and initiatives in the field of machine ethics can help to shape the development and programming of autonomous vehicles in such a way that they are geared towards social needs and values.
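How such a guideline can be operationalised in software can be sketched in miniature: a rule such as "no decisions based on personal characteristics" becomes a filter that removes prohibited attributes from the decision input before any trajectory is evaluated. All feature names and categories here are illustrative assumptions, not part of any real vehicle stack:

```python
# Hypothetical sketch: operationalising the guideline "no decisions
# based on personal characteristics" as a feature filter.

# Personal traits the guideline prohibits from influencing decisions
PROHIBITED_FEATURES = {"age", "gender", "estimated_income"}
# Situational features that may legitimately inform trajectory planning
PERMITTED_FEATURES = {"position", "velocity", "road_user_type"}

def filter_decision_features(perceived: dict) -> dict:
    """Keep only situational features; drop prohibited personal traits."""
    return {k: v for k, v in perceived.items() if k in PERMITTED_FEATURES}

perceived_cyclist = {
    "road_user_type": "cyclist",   # permitted: needed for special protection
    "position": (12.0, 3.5),
    "velocity": 4.2,
    "age": 67,                     # perceived, but must not affect the decision
}

decision_input = filter_decision_features(perceived_cyclist)
assert "age" not in decision_input
```

Note the distinction the sketch makes: the road-user type (cyclist, pedestrian) remains available, since guidelines require special protection for vulnerable groups, while personal traits such as age are excluded.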

Comparability with analogue phenomena

In conventional, non-automated road traffic, human drivers perform the driving functions. They make individual decisions, embedded in the provisions of the Road Traffic Act, regarding the distance to other road users, their speed and their braking behaviour. They act instinctively, adapting their driving behaviour spontaneously to the road situation at hand; many decision-making processes, especially in accident situations, happen quickly and subconsciously. In the context of autonomous vehicles, by contrast, it is both possible and necessary to define decision-making processes and principles in advance. Automated road traffic therefore does not involve intuitive, ad hoc decisions by an individual driver, but rather pre-programmed, well-considered decision-making processes that must be defined ex ante and incorporated into the vehicle. In order to select the optimal trajectory at all, autonomous vehicles must first identify and analyse their environment and all alternative courses of action. This requires collecting and processing large amounts of data, such as physical obstacles and the driving behaviour, speed and exact location of other road users. A decisive factor for the effective and safe implementation of autonomous vehicles and their decisions in road traffic is therefore connected mobility, which enables vehicles to communicate with each other and exchange data in real time.
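The ex-ante decision process described above can be sketched as a loop over candidate trajectories: each candidate is scored by the aggregate risk it imposes on the perceived road users, and the lowest-risk one is chosen. The risk model and the vulnerability weights are illustrative assumptions; real trajectory planners are far more elaborate:

```python
# Minimal sketch of ex-ante trajectory selection under assumed weights.
from dataclasses import dataclass

@dataclass
class RoadUser:
    kind: str               # e.g. "pedestrian", "cyclist", "car"
    min_distance_m: float   # closest approach along a candidate trajectory

# Assumed weights: vulnerable road users count more, in line with
# guidelines that stipulate their special protection.
VULNERABILITY = {"pedestrian": 3.0, "cyclist": 2.0, "car": 1.0}

def trajectory_risk(road_users: list[RoadUser]) -> float:
    """Aggregate risk: closer approaches to more vulnerable users cost more."""
    return sum(VULNERABILITY[u.kind] / max(u.min_distance_m, 0.1)
               for u in road_users)

def select_trajectory(candidates: dict[str, list[RoadUser]]) -> str:
    """Choose the candidate trajectory with the lowest aggregate risk."""
    return min(candidates, key=lambda name: trajectory_risk(candidates[name]))

candidates = {
    "keep_lane":   [RoadUser("cyclist", 0.6), RoadUser("car", 2.0)],
    "swerve_left": [RoadUser("cyclist", 1.8), RoadUser("car", 1.0)],
}
# Swerving widens the gap to the more vulnerable cyclist at the cost of
# a smaller gap to the car, and wins under these assumed weights.
assert select_trajectory(candidates) == "swerve_left"
```

The ethically contested questions from the previous sections reappear here as concrete design choices: which weights the vulnerability table contains, and which risk function aggregates them, implicitly encodes a stance on fair risk distribution.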

Social relevance

Compared to the decisions of individual human drivers, which affect only their immediate environment, the pre-programmed decision-making processes of autonomous vehicles have far-reaching social consequences, for example with regard to fairness and safety. Their sphere of influence is therefore considerably larger. If (similarly programmed) autonomous vehicles become increasingly widespread, this "automated morality" will scale in a way that the individual decisions of drivers in non-automated road traffic do not. In view of this, it is appropriate to reflect in detail on the specific (ethical) principles underlying the decision-making processes of autonomous vehicles.

Sources