
“It’s about bringing different perspectives into the development and design process of AI technology.”

We spoke to Professor Ruth Müller about the development of AI technology. She advocates strengthening diversity and inclusion.


Professor Müller, why is the exchange between different disciplines important at events like the For..Net Symposium on Digital Transformation?

We need to keep the dialogue going on as many levels as possible – whether in administration, education or health. Each of these areas faces its own particular challenges in the digital transformation. For example, our research subject in the “Responsible Robotics” project differs from that of a research team investigating the use of AI technology in public administration. Nevertheless, a question such as social justice is central to digitalisation as a whole and is not limited to the health sector.

To what extent do ethical and social aspects play a role here?

The questions we ask ourselves here are, for example: Whom do engineers or computer scientists have in mind when developing and designing new AI technologies – whether for e-government or for the health sector? Are these technologies standardised around particular groups of people, and what happens to those who fall outside that standard? AI technologies have enormous potential to change things for the better. At the same time, there is a danger that they become tools of discrimination. It must not happen that people who do not fit the norm – however that norm is defined – fall out of the system and are excluded from using and benefiting from new technologies.

Do we need more diversity and inclusion in AI development?

Clearly, yes. More expertise should be built up on these topics in general – and that starts as early as vocational training and university studies. Topics such as gender, diversity and inclusion are currently not given enough consideration. Researchers are beginning to understand that it is essential to design new technologies for and with people, but the heterogeneity of society must be taken into account to a much greater extent. In our Responsible Robotics project, we are therefore pursuing an “embedded ethics and social science” approach that integrates the analysis of the ethical, social, legal and political dimensions of robotics directly into the research process for new AI applications. Of course, this also includes asking how robotics can be put to positive use for different people in society.

What exactly is meant by this?

It’s about incorporating different perspectives – both scientific and social – into the development and design process of AI technology. Our concrete example here is the robot GARMI, which is being developed by Professor Sami Haddadin and his team at the Munich Institute of Robotics and Machine Intelligence at TUM and is intended for use in care. Together with Professor Alena Buyx and her team at the Institute for History and Ethics of Medicine at TUM, we are researching how the needs of different groups of users – from elderly people to nursing staff – can be incorporated into the development of the robot. Our goal is to develop concrete tools, standards and recommendations for the responsible development and integration of AI technologies into working practice and education in the healthcare sector.

What role does dialogue play here?

In the sense of co-creation, a very decisive role. Research should not remain in its own “bubble”. Co-creation refers to research and development processes undertaken by researchers together with citizens and other stakeholders. The aim is to develop technologies that are tailored to the needs of society. It is therefore very important to take different perspectives from society into account. But precisely for those who are already socially disadvantaged, there are often significant barriers to participating in such processes. Here, concepts are needed to reach these groups and integrate them into innovation processes. With this in mind, my research group, together with the team of Professor Iris Eisenberger at the University of Vienna, has recently developed a “Roadmap for Socially Inclusive and Responsible Co-Creation” in Europe, building on the EU project SCALINGS. The roadmap aims to provide researchers with tools to make co-creation more inclusive. If only the perspectives of those who are already privileged are taken into account, co-creation can quickly become a tool for amplifying and legitimising existing inequalities.

Does more knowledge about AI automatically lead to a more positive attitude?

More knowledge about an AI technology in the sense of “how does it work?” does not automatically lead to greater acceptance. Social aspects, economic conditions and political contexts also play a big role here. What are the consequences of using a particular technology? Who benefits from it – individual commercial actors, or as many users as possible in the sense of demand-oriented development? Here, too, the issue of social justice plays an important role and should be discussed and taken into account more strongly.

Prof. Dr. Ruth Müller

She is Professor of Science and Technology Policy at the Department of Science, Technology and Society in the School of Social Sciences and Technology at the Technical University of Munich. Together with Professor Sami Haddadin and Professor Alena Buyx, she heads the bidt-funded project “Responsible Robotics.”