Algorithm Aversion


The concept of algorithm aversion was first introduced in 2015 by Dietvorst et al. [1]. The authors conducted a laboratory experiment in which subjects were asked to assess the likelihood of success of applicants to an MBA programme. Before the subjects finalised their own predictions, some were given access to the predictions of a human expert, of an algorithmic agent, or of both. The experiment showed that especially when people can observe the algorithm at work, and therefore also see it make the occasional mistake, they tend to rely more on human expertise – even though the algorithmic agent delivered significantly better forecasting performance overall than the human expert. These findings laid the foundation for the first definition of the phenomenon of algorithm aversion.

While these early definitions of algorithm aversion mostly related to the (erroneous) behaviour of an algorithmic agent, further studies showed that humans do not necessarily have to witness the performance of this agent in order to be averse to it [2], which further emphasises the inherent irrationality of the phenomenon. This quickly led to a broader definition of algorithm aversion, which is generally used in research today. According to this definition, algorithm aversion describes a “biased evaluation of an algorithm that manifests itself in negative behaviour and attitudes towards the algorithm in comparison to a human agent” [3, p. 4]. Under this broader definition, algorithm aversion does not only refer to predictions; it can also be reflected in the evaluation of the algorithm or its results, as well as in the general use of an algorithmic agent. An example: if an algorithmic agent makes a mistake in a hiring or lending decision, the associated negative emotions are more pronounced than when a human colleague makes the same mistake [4].

Research has demonstrated the phenomenon of algorithm aversion in many different fields of application. An irrational aversion to algorithms can be seen, for example, in medicine, where algorithmic systems are used as decision support, or even as decision makers, in diagnostics [2]. Aversion is also evident in the delegation of business decisions to algorithmic agents, for example in personnel decisions such as invitations to job interviews [4]. Algorithm aversion can also manifest itself towards algorithmically created products, such as algorithmically generated music or art, or towards algorithmic agents in customer service, such as chatbots [5]. The creation of digital products by algorithmic systems stands out in particular for its significant efficiency potential, as these systems can take over the entire value creation and sales process.

The increasing diffusion of algorithmic systems into the digital economy and the associated efficiency potential for companies mean that algorithm aversion can become firmly anchored there. It is important to emphasise, however, that algorithm aversion also occurs outside an economic context, for example in legal decisions [3]. Nevertheless, because the use of algorithmic systems in the digital economy is extremely important and will continue to increase significantly, algorithm aversion is particularly relevant in this area.

It is also interesting to note that an aversion to algorithms cannot be observed in all cases; the effect can even be the opposite: some studies describe people preferring algorithmic agents over other people [6]. This is called algorithm appreciation. Algorithm appreciation usually occurs when the underlying task is non-judgemental in nature, since humans regard machines as particularly capable of objective tasks but less so of subjective ones [7]. Overall, current evidence indicates that the underlying circumstances (e.g. characteristics of the algorithmic system or of the user) have a significant influence on the development of algorithm aversion [3]. Research is still ongoing to better understand how algorithm aversion develops and what influences it. For example, a recent study suggests that algorithm aversion does not arise automatically, but is rather based on a (rational!) cost-benefit analysis [8].

Comparability with analogue phenomena

Information systems and related disciplines have long been concerned with the acceptance of new digital solutions, and numerous studies have been conducted in the field of technology acceptance. These studies, however, consider a solution singularly, i.e. in isolation. Research on algorithm aversion deliberately takes a different approach by focussing on comparative scenarios (i.e. human vs. algorithm) [3]. This comparative approach emerged against the backdrop of the increasing diffusion of artificial intelligence (AI), which allows systems to take over human tasks to a far greater extent than before. Studies on algorithm aversion therefore always focus on the trade-off between a human and an algorithmic agent, comparing how a person's interaction with a human differs from their interaction with an algorithm.

Since a digital solution is a mandatory part of the phenomenon and no comparable analogue phenomenon can be observed, the digital specificity of the phenomenon must be classified as very high. Moreover, only their digital, intelligent capabilities allow these systems to take over tasks previously performed by humans in the first place. The need for intelligence in algorithmic systems thus fundamentally distinguishes this phenomenon from analogue phenomena. The enabler of algorithm aversion is therefore the automation and autonomisation of processes, as the aversion only occurs when humans witness algorithmic systems taking over tasks (whether advisory or executive in nature).

Social relevance

Society has been confronted with algorithmic systems many times in recent decades, which is why the phenomenon of algorithm aversion has been known for some time.

Current technological progress makes algorithm aversion particularly relevant for society. In the past, society mostly encountered algorithmic systems that made passive recommendations; research into algorithm aversion has accordingly often focussed on fields such as medicine (e.g. decision support in diagnostics) or finance (e.g. decision support in lending). The relevance of algorithm aversion for society as a whole, however, will continue to grow as technological progress continues apace: especially against the backdrop of current developments in non-linear machine learning models, more, and increasingly capable, systems will enter our everyday lives. These advanced systems can perform tasks independently thanks to their higher degree of autonomy, which means that people not only use them but also delegate tasks to them. Examples include voice assistants in customer service or algorithmic agents that carry out medical triage without human intervention. In the future, we will therefore be increasingly confronted with algorithmic agents, with many further fields of application opening up as technology advances (e.g. the writing of journalistic articles or the design of fashion).

Higher degrees of autonomy, and thus the ability of algorithms to act, can deepen the aversion to these algorithmic agents [2], [3]. The extent and frequency of algorithm aversion are therefore also likely to increase. Whether algorithm aversion as a social phenomenon can be reduced over time through increasing exposure to these systems is an important field for future research.