With the growing use of robotic technologies in many areas of human life, psychological questions are increasingly moving into the focus of robotics research – i.e. questions about how humans think, feel and behave when interacting with machines [9]. For interaction design to succeed, and for robots to be accepted in social environments, it is important to understand how people experience intelligent machines: How trustworthy do we perceive a robot to be? Do we experience its use as positive or negative? What worries and hopes are associated with these technologies? Such perceptions can vary greatly depending on personal factors, the situation and the characteristics of the technology itself – i.e. how it is designed and how it behaves [9].
The perception of robots also depends largely on their area of application. A humanoid robot with a human-like shape designed for social interaction differs significantly from an industrial robot that performs warehousing or assembly tasks with a driverless transport platform and a gripper arm. Not only the tasks differ, but also the shape of the robots and, ultimately, their communication options – leaving a very wide range of design choices for robots.
For example, the phenomenon of anthropomorphism can be observed with humanoid, i.e. visually human-like, robots [1]. Research has documented that we tend to attribute human characteristics to machines, such as a mind and a will of their own [1], [3], [10], in response to social cues based on their external form (e.g. elements reminiscent of eyes). However, this can also backfire if the degree of human likeness is too high: while we find a certain degree of human likeness appealing, we perceive artificial agents that are too human-like as creepy – the so-called Uncanny Valley [8]. Such aspects of anthropomorphism play a smaller role in industrial robotics. Industrial robots are less reminiscent of social beings in their external form, but are often equipped with manipulators such as gripper arms or tools for production and assembly work, which can lead to differences in perceived safety and stress and ultimately to the desire for a greater safety distance [6], [7].

In addition to the different design, the social environment in the work context also plays a role. When such a machine is newly introduced into a workplace, it triggers change processes in the company: certain tasks that the robot now performs may be eliminated, while new tasks (e.g. maintenance work) are added. Such changes are associated with various concerns and fears among the employees affected, as shown in a survey of over 500 people by Leichtmann et al. (2023) [5]. For example, robots can be perceived negatively out of concern that they will take away interesting areas of work. Other reservations concern social effects such as job loss or lower appreciation of human work. Whether a robot is perceived as a positive or negative change also depends on the corporate culture and on how managers handle the change [5].
Such effects on people and social coexistence also have ethical implications for the design, implementation, dissemination and use of these technologies. Depending on its design, a robot can inspire more or less trust. However, the goal is not to maximize trust in robots, but to establish adequate trust from an ethical perspective: people should not, for example, trust an industrial robot so much that they neglect important safety precautions. The social effects and concerns mentioned above should also be taken into account when introducing robots [4], [5]. The collaboration between humans and robots should be designed in a human-centered way; for example, work with the robot should be arranged so that humans still experience their work as valuable. This can be achieved if, when interesting tasks are taken over by the robot, the human does not merely serve as a “gap filler” for the robot, but new valuable tasks are added.
Comparability with analogue phenomena
We are familiar with many of the perceptual phenomena described here from areas without intelligent machines. The phenomenon of anthropomorphism – the attribution of human characteristics to non-human entities – can also be found elsewhere: there are studies on differences in the attribution of human characteristics to animals or to nature and, for example, on the associated effects on our environmental behavior [2]. One difference with robots, and intelligent machines in general, is the scope for design: anthropomorphism depends, among other things, on how the machine is designed and how it behaves. Because these technologies are far more malleable than animals or nature, they can also shape our perceptions more strongly. This freedom of design opens up a range of possibilities, but also challenges, as it can significantly influence interaction with and trust in the technology.
Change processes are nothing new in the work context either: organizational change measures (e.g. the restructuring of departments) are regularly associated with uncertainties and fears among the workforce regarding task-related and social consequences. However, intelligent machines could, through automation, have long-term and far-reaching social effects that go beyond short-term or local economic fluctuations and could permanently transform entire sectors of work.
A further comparison can be drawn with earlier, simpler technologies, such as the machines of industrialization, which also led to far-reaching social changes and adaptation processes. In contrast to these historical developments, however, intelligent machines and robots offer a higher degree of flexibility and adaptability, which presents both opportunities and risks. The very ability to design robots to accommodate specific human needs and forms of interaction also requires careful ethical evaluation.
Social relevance
Intelligent machines will be integrated ever more deeply into ever more areas of our social life, with a variety of psychological effects. It is important to emphasize, however, that we as a society have considerable scope for shaping human-machine interaction. This article has briefly outlined how, for example, the design of robots can affect our perception and trust. Yet it is not only the technical design that matters, but also how we implement the introduction of such technologies into social systems. Designing interactions with new technologies, and their implementation, so that they comply with ethical principles will be a key task for the future.
Further links and literature
Sources
- [1] Epley, N. et al. (2008). When We Need A Human: Motivational Determinants of Anthropomorphism. In: Social Cognition 26 (2), 143–155.
- [2] Epley, N./Waytz, A./Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. In: Psychological Review 114 (4), 864–886.
- [3] Gray, H. M./Gray, K./Wegner, D. M. (2007). Dimensions of Mind Perception. In: Science 315 (5812), 619.
- [4] Hampel, N. et al. (2022). Introducing digital technologies in the factory: Determinants of blue-collar workers’ attitudes towards new robotic tools. In: Behaviour & Information Technology 41 (14), 2973–2987.
- [5] Leichtmann, B. et al. (2023). New Short Scale to Measure Workers’ Attitudes Toward the Implementation of Cooperative Robots in Industrial Work Settings: Instrument Development and Exploration of Attitude Structure. In: International Journal of Social Robotics 15 (6), 909–930.
- [6] Leichtmann, B. et al. (2022). Personal Space in Human-Robot Interaction at Work: Effect of Room Size and Working Memory Load. In: ACM Transactions on Human-Robot Interaction 11 (4), 1–19.
- [7] Leichtmann, B./Nitsch, V. (2020). How much distance do humans keep toward robots? Literature review, meta-analysis, and theoretical considerations on personal space in human-robot interaction. In: Journal of Environmental Psychology 68, 101386.
- [8] Mara, M./Appel, M./Gnambs, T. (2022). Human-Like Robots and the Uncanny Valley: A Meta-Analysis of User Responses Based on the Godspeed Scales. In: Zeitschrift für Psychologie 230 (1), 33–46.
- [9] Mara, M./Leichtmann, B. (2021). Soziale Robotik und Roboterpsychologie: Was psychologische Forschung zur menschzentrierten Entwicklung robotischer Systeme beiträgt. In: Bendel, O. (Ed.). Soziale Roboter. Wiesbaden, 169–189.
- [10] Waytz, A./Heafner, J./Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. In: Journal of Experimental Social Psychology 52, 113–117.