
Autonomous systems

Definition and differentiation

Autonomous systems are complex technical systems that carry out actions independently, without human control. This alone does not set them apart: systems that act without human control have existed for a long time, such as fire alarms or airbags. Autonomous systems add a new quality: they can perform tasks or missions of far greater complexity. This complexity, however, means that it is no longer possible to explicitly program how the system should behave in every situation it might encounter on the way to fulfilling its task. Because situation-specific behaviour cannot be explicitly programmed in, even the developers cannot always predict how the system will behave in a given (unforeseen) situation. Nevertheless, the system is expected to function dependably without human supervision.

These characteristics are reflected in many definitions.

Border Gateway Protocol

A somewhat older definition comes from computer networking, where an autonomous system is a routing unit of the Internet: “Autonomous systems are connected to each other and thus form the Internet” [1]. For current debates, however, this definition is certainly too narrow.

New Machinery Regulation

The new EU Machinery Regulation defines “autonomous mobile machinery” as follows:

“‘Autonomous mobile machinery’ means mobile machinery which has an autonomous mode, in which all the essential safety functions of the mobile machinery are ensured in its travel and working operations area without permanent interaction of an operator.” [2]

The problem with this definition is that a machine would lose the “autonomous” property as soon as an essential safety function is no longer guaranteed in autonomous operation. It would be more intuitive if it only lost the “safe” property.

ISO/IEC 22989: 2022 Artificial Intelligence Concepts and Terminology

The international standard ISO/IEC 22989 defines “autonomy” and “autonomous” as “characteristic of a system that is capable of modifying its intended domain of use or goal without external intervention, control or oversight”. [3]

According to this definition, the system not only decides how it behaves in a situation in order to achieve a goal, but also which goal it pursues. This raises the question of the extent to which a system can choose its “own” goal and how this “own” goal relates to the human motivation to develop a technical (autonomous) system. The definition contradicts several other definitions that explicitly point out that the goals are predetermined.

acatech

In the final report of the acatech expert forum “Autonomous Systems” [4], “a system is only described as autonomous if it can achieve a predetermined goal independently and adapted to the situation without human control or detailed programming.”

ALFUS

The National Institute of Standards and Technology (NIST) defines autonomy in the “Autonomy Levels for Unmanned Systems” (ALFUS) framework as follows:

“An Unmanned System’s (UMS) own ability of integrated sensing, perceiving, analyzing, communicating, planning, decision-making, and acting/executing, to achieve its goals as assigned by its human operator(s) through designed Human-Robot Interface (HRI) or by another system that the UMS communicates with.” [5]

Differentiation from automation

The terms “autonomous” and “autonomicity” refer to self-governance and self-management and aim at independence from humans in terms of control and supervision [6]. “Automated”, on the other hand, focusses on replacing (human) actions [7]. Activities that were previously carried out manually are automated by having technical systems perform them. The adjective “automated” therefore refers to an activity rather than a technical system. The term automated driving is thus more appropriate than automated vehicle, as it is not the vehicle itself that is automated but the activity of driving. Nevertheless, the terms automated vehicle and automated system have been used increasingly in recent years, especially where the term autonomous system seems inappropriate because explicit programming is largely used. These cases could, however, also be described with the term automation system: a system that automates an activity such as driving. One could then speak of an automated driving system, meaning a system for automated driving, not a driving system that is automated.

Differentiation from artificial intelligence

Artificial intelligence (AI) is a research field [8]. AI offers techniques and methods for developing autonomous systems. These techniques and methods include, in particular, machine learning and generally everything that forms an alternative to explicit programming.
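The contrast between explicit programming and its machine-learning alternative can be illustrated with a deliberately tiny, hypothetical sketch (a braking decision; all names, data and thresholds are invented for illustration). In the first variant the developer writes the decision rule out by hand; in the second, a one-dimensional perceptron infers the rule from labelled examples, so the exact decision boundary is never written down explicitly.

```python
# Explicitly programmed behaviour: the developer states the rule directly.
def brake_explicit(distance_m: float) -> bool:
    return distance_m < 10.0  # hand-chosen threshold


# Learned behaviour: a 1-D perceptron derives a threshold from examples.
def train_brake_rule(samples, epochs=200, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for distance, should_brake in samples:
            pred = 1 if w * distance + b > 0 else 0
            err = should_brake - pred          # 0 when the prediction is right
            w += lr * err * distance           # standard perceptron update
            b += lr * err
    return lambda d: w * d + b > 0


# Labelled examples (distance in metres, 1 = brake) replace the explicit rule.
data = [(2.0, 1), (5.0, 1), (8.0, 1), (12.0, 0), (20.0, 0), (30.0, 0)]
brake_learned = train_brake_rule(data)
```

The learned variant behaves much like the explicit one on these examples, but the threshold is an emergent property of the training data rather than a line of code, which is exactly why developers cannot always predict the behaviour in unforeseen situations.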

Differentiation from cognitive systems

Autonomous systems are a special class of cognitive systems. Cognitive systems can act more or less independently of humans and therefore exhibit different degrees of autonomy.

Application and examples

Autonomous mobility

Autonomous systems offer great potential for mobility. There are autonomous systems that control means of transport independently, such as autonomous shuttle buses or autonomous rail vehicles. There are also autonomous systems that manage the flow of traffic by acting in the virtual world, for example by autonomously allocating current mobility demands to current mobility offers in the context of Mobility-as-a-Service. Due to the growth in electromobility, increasing consideration must also be given to how intelligent power grids can autonomously bring together energy demands and energy supplies.

Smart City

In addition to mobility and energy, there are many other relevant areas in the smart city, such as water supply and healthcare. Autonomous systems in these individual areas then form a group of (autonomous) systems. This system of systems can in turn be described as an autonomous system, as it has the essential characteristics of such a system. It acts in a situation-specific manner and exhibits emergent and therefore not explicitly programmed behaviour.

Autonomous weapon systems

Autonomous weapon systems are a field of application that is gaining in importance. These include both defence and attack systems. The difference between remote control and autonomy is not only important from a moral point of view, but also because remote control signals can be detected by the enemy.

Criticism and problems

The challenges posed by automation through autonomous systems depend on the task being automated: when driving on the road, safety is the major challenge; when a bank grants loans, fairness and transparency matter more. When autonomous systems are used for warfare with the aim of killing people, the challenge lies in international condemnation and demands for a ban.

The last application example in particular shows that the question of whether to automate a task is initially independent of the question of how the goal of automation is achieved – with or without explicit programming. When autonomous machines kill people, it is probably only of limited importance to their relatives whether explicit or implicit programming was used. However, the victims are often interested in who is responsible and can be held accountable.

This also applies to cases where automation should not be rejected in principle, such as automated driving on the road. Who is morally responsible and who is liable if autonomous vehicles are involved in an accident? This is not just about frequently discussed exceptional situations in which the vehicle can only “choose” between different accidents. Automated driving should be designed in such a way that a vehicle cannot get into such a dilemma in the first place. It is fundamentally about driving behaviour and the associated risk [9].

Moral responsibility and liability generally depend on whether an act was wilful, grossly negligent or merely negligent. In this context, there are debates about whether the type of programming has an impact on wilfulness or negligence. In the case of machine learning, even the developers do not always know exactly what the system has learnt. They may have taken the state of the art in AI science and technology into account to the best of their knowledge and belief; nevertheless, they cannot sufficiently guarantee that the system will always react as expected. Any negligence therefore lies not in the implementation itself, but in the decision to place a system on the market without being able to adequately justify that, and why, the implementation is sufficiently dependable.

Research

Research on autonomous systems can be categorised into the fields of ethics, law and technology. The large field of technology research expands and links various areas such as AI, automation technology and research into cyber-physical systems [8].

Despite immense investment in various areas, such as automated driving, the “breakthrough” is still a long time coming. There are field trials with prototypes, but the safety issue has so far remained a major stumbling block. A new field of research to address this issue is dynamic risk management [10], [11]. This involves enabling an autonomous system to assess and evaluate the risks of a possible action in a specific acute situation. This presupposes that the system correctly perceives the current situation and correctly assesses possible developments over a period of time. When perceiving the environment in particular, the system must take into account technical limitations and deficiencies – analogous to a human driver who is aware that they cannot see so well in the dark, for example, and therefore drives more slowly. In this context, it must also be taken into account that the use of data-driven models, such as deep neural networks, is subject to uncertainties.
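The core idea of dynamic risk management can be sketched in a toy example (all function names, formulas and constants below are invented for illustration, not a real risk model): the system estimates a situational risk for each candidate action, inflates that estimate when perception is degraded, and then picks the fastest speed whose risk stays acceptable, analogous to a driver slowing down in the dark.

```python
def situational_risk(speed_kmh: float, visibility: float) -> float:
    """Toy risk model: risk grows with speed and shrinks with visibility.

    visibility in (0, 1]: 1.0 = clear daylight, lower = degraded perception.
    The formula and constants are purely illustrative.
    """
    base_risk = (speed_kmh / 100.0) ** 2     # faster is disproportionately riskier
    perception_penalty = 1.0 / visibility    # poorer sensing inflates the estimate
    return base_risk * perception_penalty


def choose_speed(candidates, visibility, acceptable_risk=0.25):
    """Pick the fastest candidate speed whose estimated risk is acceptable."""
    safe = [s for s in candidates
            if situational_risk(s, visibility) <= acceptable_risk]
    return max(safe) if safe else min(candidates)  # fall back to the slowest option


speeds = [10, 30, 50, 70, 90]
day = choose_speed(speeds, visibility=1.0)    # clear conditions -> faster
night = choose_speed(speeds, visibility=0.4)  # degraded perception -> slower
```

With these made-up numbers the system selects 50 km/h in clear conditions but only 30 km/h when visibility drops, which is the behaviour the research aims at: the acceptable action set shrinks as the system's own perception limitations grow.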

Although it is possible to quantify this uncertainty [12], there is still a need for research into the management of these uncertainties. Detecting and handling errors during operation is a proven concept, but uncertainties relate to the probability of error in the current situation. This raises many technical questions.
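One widely used uncertainty-quantification technique is ensemble disagreement: several independently trained models make the same prediction, and the spread of their outputs serves as an uncertainty estimate that can gate a fallback behaviour. A minimal, stdlib-only sketch with fabricated stand-in models (in practice these would be independently trained networks):

```python
import statistics

# Toy "ensemble": three hypothetical models predicting, say, the distance
# to an obstacle in metres. The functions and their behaviour are invented.
def model_a(x): return 2.0 * x + 0.1
def model_b(x): return 2.0 * x - 0.2
def model_c(x): return 2.0 * x + 0.05 if x < 5 else 3.5 * x  # degrades off-distribution


def predict_with_uncertainty(x, ensemble=(model_a, model_b, model_c)):
    preds = [m(x) for m in ensemble]
    # Disagreement across the ensemble is used as the uncertainty estimate.
    return statistics.mean(preds), statistics.stdev(preds)


def act(x, max_uncertainty=0.5):
    estimate, uncertainty = predict_with_uncertainty(x)
    if uncertainty > max_uncertainty:
        return "fallback"  # e.g. slow down, widen safety margins, hand over
    return f"proceed (est. {estimate:.1f} m)"
```

The point of the sketch is the open research question the text describes: the uncertainty value relates to the probability of error in the current situation, so the hard part is not computing it but deciding, at runtime, what the system should do with it.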

Ethical issues often go hand in hand with technical issues. The general requirement that an autonomous system should act safely and fairly cannot be met from a purely technical perspective; the ethical perspective is needed to clarify what fairness actually means and which safety risks can be deemed acceptable. The legal perspective must harmonise with the answers from the technical and ethical perspectives, taking into account both current law and the timeframes required to change the law and enforce it. In terms of enforcement, for example, it is relevant that investigating accidents or technical failures and establishing the corresponding liability of operators, manufacturers and suppliers is a very complex process compared to a case in which a single driver has made a mistake. Against this background, there are proposals to introduce a new legal identity, the “computer driver”, and to clarify liability issues as far as possible in the same way as for manual driving [13].

Sources

[1] https://de.wikipedia.org/wiki/Autonomes_System

[2] Regulation (EU) 2023/1230 of the European Parliament and of the Council of 14 June 2023 on machinery and repealing Directive 2006/42/EC of the European Parliament and of the Council and Council Directive 73/361/EEC

[3] ISO/IEC 22989:2022, Information technology – Artificial intelligence – Artificial intelligence concepts and terminology

[4] Fachforum Autonome Systeme im Hightech-Forum: Autonome Systeme – Chancen und Risiken für Wirtschaft, Wissenschaft und Gesellschaft. Long version, final report, Berlin, April 2017

[5] Huang, H. M. (Ed.). Autonomy Levels for Unmanned Systems (ALFUS) Framework, Volume I: Terminology, Version 2.0. NIST Special Publication (NIST SP) 1011

[6] Adler, R. (2019). Autonom oder vielleicht doch nur hochautomatisiert?. Fraunhofer IESE. [23.02.2024]

[7] Jürgensohn, T. et al. (2021). Rechtliche Rahmenbedingungen für die Bereitstellung autonomer und KI-Systeme. Published by the Bundesanstalt für Arbeitsschutz und Arbeitsmedizin. DOI: 10.21934/baua:bericht20210423

[8] Saidi, S. et al. (2022). Autonomous Systems Design: Charting a New Discipline. In: IEEE Design & Test 39(1), 8–23. DOI: 10.1109/MDAT.2021.3128434

[9] Geisslinger, M. et al. (2021). Autonomous Driving Ethics: From Trolley Problem to Ethics of Risk. In: Philosophy & Technology 34, 1033–1055

[10] Adler, R./Reich, J./Hawkins, R. (2023). Structuring Research Related to Dynamic Risk Management for Autonomous Systems. To be published in: Proceedings of 42nd International Conference on Computer Safety, Reliability and Security (SAFECOMP). Toulouse/France.

[11] Schneider, D. et al (2024). Dynamic Risk Management in Cyber Physical Systems [23.02.2024]

[12] DIN SPEC 92005:2024-03 (2024). Artificial Intelligence – Uncertainty quantification in machine learning. [24.03.2024]

[13] Widen, W. H./Koopman, P. (2023). Winning the Imitation Game: Setting Safety Expectations for Automated Vehicles. In: Minnesota Journal of Law, Science & Technology 25(1). [23.02.2024]