What exactly does it mean to ‘embed’ ethics into technology, and how can this be done effectively? Here we outline our Responsible Robotics project, along with some challenges and benefits of the embedded ethics methodology.
Our project Responsible Robotics harnesses and builds upon an innovative framework for addressing the social and ethical concerns that arise with new technology. Invoking an ‘Embedded Ethics’ methodology, we bring together researchers in social science and applied ethics with developers working to build robotic systems that can assist some of the most vulnerable people in society, particularly the growing elderly populations in Germany and elsewhere in the world. As we suggest in a recent Nature Machine Intelligence article, we aim to identify and address the social and ethical aspects of these technologies from the early stages of their development, rather than waiting for novel devices to hit consumer markets and forcing regulatory bodies to rush to draft new guidelines.
While the benefits of embedding ethics into technology from the outset may be easy to see from a user’s perspective, as we explain here, embedded ethics also stands to enrich the development process itself, thereby helping both the developers and the users of technology to bring about its intended effects.
Embedded Ethics as an interdisciplinary method
As we become immersed in our partner development teams at the Technical University of Munich, our project takes on a distinctive interdisciplinary style. Researchers at the Munich Center for Technology in Society study the interplay of knowledge production, technology creation, and social and political processes, while ethicists at the Institute for History and Ethics of Medicine consider the effects of emerging technology on such topics as human agency and moral responsibility. Through close collaboration with colleagues in the Munich School of Robotics and Machine Intelligence, the development of novel technological systems is merged directly with social science and applied ethics research.
The early phases of our collaborations centre on in-depth peer-to-peer interviews with developers and are strengthened through regular interactions, including technology integration meetings and site visits when coronavirus guidelines allow. Building upon these interactions, we strive to fully understand the development from an engineering perspective. This helps us create meaningful educational materials that fit naturally with developers’ hopes, goals, and concerns in their everyday practices. We also work to create crucial dialogues that will bridge the developers’ work with key users, such as healthcare professionals who may soon deploy advanced medical technologies. As our perspectives and data become further refined, our project will develop and test tools for embedding ethics – such as workshops and training materials – which can assist our partner teams, as well as future developers, in thinking through the social and ethical questions that may arise with their work.
What exactly do we aim to embed?
It may be tempting to think that we can simply hire ethicists and plant them into technology-focused teams. This might be a step toward raising ethical awareness within development projects, establishing consultation services, or creating an alarm-system for when sensitive issues appear at risk of being overlooked.
However, it seems clear that the embedded ethics process must be more substantive and meaningful for developers if it is to identify and truly address vital aspects of emerging technologies, such as user safety and product transparency. What must be embedded is a keen sensitivity within developers themselves: an ability to recognise and think through socially and ethically significant features of the technological systems being developed. This holds for all members of development teams, from those working in early concept stages, through implementation and testing, and even beyond integration and validation. Technology experts can be equipped with the kind of awareness that opens further discussion and prompts possible adjustments to the technological feature being considered. For example, harmful biases might be detected in the training data of an AI-enabled product. Or perhaps an auxiliary component would raise the price of a robotic system such that it becomes available only to a small subset of potential beneficiaries.
Numerous ethically important issues may well go unnoticed by ethicists hired only for brief consultations or to implement checklists. Accordingly, embedded ethics must be a thorough integration of social and ethical aspects into technology itself, through a process of meaningful collaboration with designers, developers, and potential users. Moreover, this process must continue throughout every stage of development.
What we have outlined here is no small task. Numerous hurdles already exist for the prospect of thoroughly embedding ethics into technology, and it is likely that more challenges will arise. One such hurdle for a more robust embedded ethics framework is the question of resources.
In fact, an inquiry along these lines was aptly put forward at a recent bidt Sprint Review, where one participant asked, “How deeply and how frequently can we embed ethics?” Fully incorporating the relevant experts is costly, and because resources are limited, we may need to settle for lighter approaches to embedded ethics, such as external consultations.
Fortunately, however, there are funding opportunities to develop and employ embedded ethics programmes in a more robust and integrated manner. Indeed, our project is a key example of how public or private institutions can enrich technological development with an acute focus on society and moral well-being. Certainly, similar opportunities can be seen around the world, and although some ‘tech ethics’ funding schemes call for a degree of caution – namely when originating in the very sector often under ethical scrutiny – we are encouraged by the increasing attention toward AI ethics across public and private domains and by prominent appeals to fund multi-disciplinary AI research.
The future of Embedded Ethics
Another hurdle lies in developers’ own willingness to genuinely enhance and integrate their sensitivity to social and ethical aspects. Some members of a team will naturally be more interested than others in thinking through the social and ethical dimensions of their work. For many, it is simply the practical constraints – time management or the demands of technical tasks – that determine the extent to which social and ethical considerations can be incorporated. Still, we believe that embedded ethics programmes can enrich the development process in ways that benefit society as a whole, as well as developers themselves.
In this regard, we commend our colleagues on the technical side of development for their openness in discussing social and ethical impacts, along with their willingness to contribute to a pilot project on embedded ethics. We hope to provide further opportunities for them to briefly step out of their ordinary work practices and to reflect upon the social and ethical aspects of their important work. By doing so, they help us to better understand the world of development, which in turn assists them and others in thinking through some of the big-picture puzzles that might arise, whether in the near or distant future of novel technologies.
The blogs published by the bidt represent the views of the authors; they do not reflect the position of the Institute as a whole.
Dr. Daniel Tigard
Daniel Tigard is a Senior Research Associate in the Institute for History and Ethics of Medicine at the Technical University of Munich. His published work addresses topics in ethical theory, such as moral conflicts, moral agency, and responsibility, with applications to issues in bioethics. More recently, his work has focused on ethical issues in emerging technology, particularly AI and robotics.
Svenja Breuer is a doctoral researcher at the Munich Center for Technology in Society (MCTS) at the Technical University of Munich (TUM). Since 2020, she has been part of the Science and Technology Policy research group as well as the bidt Research Project Responsible Robotics and AI in Healthcare. Her PhD research focuses on ‘imaginaries’ of robotics and AI for healthcare, in both public policy and technology development practice. Previously, she completed her B.A. in Management of Social Innovation at the University of Applied Sciences Munich and her M.A. in Science and Technology Studies at TUM.
Konstantin Ritt is a research assistant at the Munich School of Robotics and Machine Intelligence (MSRM), Technical University of Munich. With an M.Sc. in Mechanical Engineering, he now works at the intersection of engineering, social sciences, and ethics. He is interested in how to bring a sociotechnical understanding into the field of engineering so as to better account for the complexities of current research directions in robotics and AI.
Maximilian Braun is a doctoral researcher at the Munich Center for Technology in Society (MCTS), Technical University of Munich. He is also an MCTS alumnus (RESET M.A.) and holds a Bachelor’s degree in Industrial Engineering. His key interest is how different forms of governance and interdisciplinary collaboration influence and affect AI research practices.