“I would never walk across a bridge that an AI system built on its own”

A conversation with Professor Ute Schmid about the success factors of artificial intelligence (AI), the limits of its use, and why it is worth looking into the black box of AI, from medicine to art.


The computer program DALL-E 2 can create images from text descriptions based on machine learning – can an algorithm be artistically creative?

Ute Schmid: Basically, we would first need a definition of what is meant by artistically creative. It is similar to the question of whether AI systems could ever develop consciousness. In my opinion, consciousness is a prerequisite for creative work – for example, the knowledge of one's own identity, the inner experience of perceptions and the ability to evaluate one's own activities.

The neural network DALL-E 2 developed by OpenAI, for example, will not get angry if a painting turns out poorly. While an artist like van Gogh senses inner experiences (so-called qualia) and turns them into pictures, a program like DALL-E 2 draws on existing data and recombines it. But I have to admit that DALL-E 2 often manages to create very convincing and sometimes even amazing images from text descriptions. I don't want to presume to define what is art and what is not. What AI systems can certainly do is generate images, melodies or texts based on simple patterns that occur frequently in existing data. Typical pop music or an idyllic mountain scene can be generated as a variation of the familiar.

Where are the limits of AI? Should a machine be able to take responsibility at some point?

Machines are used in many safety-critical areas such as power plants or air traffic. By and large, this works very well and most people trust that such technologies are safe. In these applications, it is not the machines that are in charge. Many people share the responsibility here at all stages – from the developer to the quality inspector to the pilot and the air traffic controllers. This kind of approach to the development, quality control and application of complex technologies, which always contain software components, can be transferred to AI-based systems.

Unlike standard software, however, AI systems cannot be guaranteed to behave correctly. For models built with machine learning from large amounts of data, the representativeness and quality of those data have a great influence on how reliable and robust the AI system will be in practice.

The use of complex systems – whether with or without AI components – takes place in a complex socio-technical structure. This means that it is not the technology alone, but its appropriate embedding in an application context, that determines how safe we can be. In the vast majority of applications, it makes sense for humans and machines to interact:

The machine helps us to master complexity, but ultimately the responsibility lies with the human being.

Prof. Dr. Ute Schmid

This also concerns the question of who is ultimately to blame if something goes wrong – the human or the machine. I would never walk across a bridge that has been calculated and built exclusively by AI. After all, there must still be experts who are able to check the calculations. This also means that, as the use of AI systems keeps growing, we have to make sure that we expand our competences rather than give them up and unlearn them.

AI can already relieve humans of many tasks. One question that remains: should it necessarily do so?

Basically, we should consider which areas – even if they could be taken over by AI systems – we want to leave to humans.

Human attention should not be replaced by AI – be it the relationship between patient and nurse or between teacher and learner. Caregivers can be supported in their documentation duties by AI systems, for example, or relieved of physically strenuous tasks by intelligent robots.

The researchers in the bidt project Responsible Robotics are investigating how robots can be used sensibly in care. Another example from the medical field is the use of AI systems in the diagnosis of skin cancer. The use of AI has the advantage that more patients can be examined in shorter periods of time. But here too:

AI models created with machine learning are only ever as good as the data set they were trained on.

Prof. Dr. Ute Schmid

In skin cancer screening, the models were found to make significantly more errors for people with darker skin, as they were underrepresented in the training data. Again, AI-based technologies can support people in their work. However, the decision about a diagnosis should remain with the human experts.
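To make the underrepresentation problem concrete, here is a minimal, self-contained sketch (not part of the interview; the data is entirely synthetic and the two-feature setup is invented for illustration). It trains a classifier on data dominated by one group and then evaluates it separately per group, which is how such error disparities are typically exposed:

```python
# Illustrative sketch with synthetic data: a model trained mostly on
# one group tends to perform worse on an underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic feature dimensions; the class boundary is shifted
    # per group to mimic systematic differences between groups.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on equally sized held-out sets, stratified by group.
for name, shift in [("group A (well represented)", 0.0),
                    ("group B (underrepresented)", 1.5)]:
    X_test, y_test = make_group(500, shift)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(name, "accuracy:", round(acc, 3))
```

On a typical run, the model scores noticeably better on the well-represented group, mirroring the disparity described above; the remedy is more representative training data, not just more data.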

Is it enough that we can simply use AI in practice, or is systematic knowledge building needed?

Basically, AI systems will increasingly be part of our everyday lives, and accordingly everyone should have a certain basic understanding of what AI systems are, what they can do and what they cannot do.

That doesn't mean that everyone has to become an AI expert, but we should all be able to interact with AI systems in a reflective way. Even though there are now numerous government and private initiatives, in my opinion a broad education offensive on digital literacy, data literacy and AI literacy is necessary. Only if we fundamentally understand why a certain product is recommended to us, why we receive certain news or why we are not allowed to pay by invoice at an online shop can we use AI technologies confidently and in a self-determined way.

For most professions – academic as well as non-academic – the relevant AI competences belong in the curricula. For example, more and more applications in medicine are based on machine learning. To understand how such AI models arrive at certain diagnostic suggestions, data literacy skills are particularly important – that is, skills in data collection and statistics. Imagine a future in which medical professionals blindly rely on the suggestions of an AI system and are unable to assess its uncertainties. By the way, a bidt project is also concerned with appropriate trust in AI systems in medical decision-making.

The use of AI is often accompanied by ethical concerns. To what extent is it important to accompany these technical developments scientifically?

In my opinion, technical developments with major social consequences – be it nuclear power, genetic engineering or AI – should always be accompanied scientifically – and in an interdisciplinary way.

Likewise, a broad social discourse is indispensable in such areas. For this to take place in a meaningful way, the relevant knowledge must be conveyed in as value-free a manner as possible. Let's return to the use of AI in medicine: here, the perspectives of physicians, ethicists and computer scientists, but also of nursing scientists and patient representatives, should be taken into account.

It is precisely this kind of dialogue between research and different stakeholders in society that we at bidt would like to promote. It is important to involve those who will later have to work with the new technologies. AI technologies should be developed in a participatory manner. A good example is the use of the care robot GARMI: the Responsible Robotics project team actively exchanges ideas with various stakeholders such as trainees, students and teachers from different care-education institutions. Based on these conversations, the project team is exploring how new AI technologies fit together with existing concepts of work and life, what knowledge caregivers should acquire, and what expectations and concerns future caregivers and patients have.

Better together: Do humans and machines make a good team? Where is the joint journey going and what is important?

Humans and machines can be more productive together than alone – if certain conditions are met. Essential here is the design of the interaction interface. A classic example: today, for the vast majority of customers, buying a train ticket from a ticket vending machine is not much of a challenge. When the vending machines were first introduced, things were different: on the one hand, the purchase dialogue was not designed very intuitively, and on the other hand, customers had hardly any experience with purchasing goods by interacting with a user interface. Today, there is a great deal of knowledge about how to design such dialogues in a meaningful way, and users have routine in dealing with such systems.

The situation will be similar when dealing with AI-based technologies. The more complex the application, the more important it becomes that the output of an AI system is comprehensible and correctable. An AI system that assists in image-based medical diagnosis, for example of colorectal cancer, should not only output a tumour class but also, on request, an explanation of which information its decision was based on. This traceability for the human expert is an important prerequisite for control. In addition, a veto should always be possible: the medical expert should be able to correct the system's diagnosis proposal. It is precisely this ability to correct that prevents blind trust in AI systems from developing.
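As an illustration of this propose-explain-veto pattern, here is a minimal sketch (hypothetical: the ExplainableAssistant class, the feature names and the toy data are all invented for this example and are not taken from any real diagnostic system). A linear model is used because its per-feature contributions are easy to read out, and the final decision is explicitly left to the human expert:

```python
# Hypothetical sketch: an assistant that proposes a class, explains
# on request, and leaves the final decision (and veto) to the expert.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ExplainableAssistant:
    def __init__(self, model, feature_names):
        self.model = model
        self.feature_names = feature_names

    def propose(self, x):
        # The system only proposes a class; it does not decide.
        return int(self.model.predict(x.reshape(1, -1))[0])

    def explain(self, x, top_k=3):
        # For a linear model, coefficient * feature value is that
        # feature's contribution to the decision score.
        contributions = self.model.coef_[0] * x
        order = np.argsort(np.abs(contributions))[::-1][:top_k]
        return [(self.feature_names[i], round(float(contributions[i]), 3))
                for i in order]

# Invented toy data standing in for extracted image features.
feature_names = ["asymmetry", "border_irregularity", "colour_variance"]
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, 1.0, 0.5]) > 0).astype(int)

assistant = ExplainableAssistant(LogisticRegression().fit(X, y),
                                 feature_names)
case = X[0]
print("proposed class:", assistant.propose(case))
print("based on:", assistant.explain(case))  # explanation on request
# The expert keeps the veto: the final diagnosis is whatever the human
# decides after checking the explanation, not the proposal itself.
```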

Transparency and correctability are also necessary in everyday applications so that people retain their sovereignty – whether in smart home control or in the recommendation of a health app.

Thank you very much for the interview!

The interview was conducted by Nadine Hildebrandt, scientific officer in the bidt dialogue team.

Ute Schmid

Ute Schmid is a member of bidt's Board of Directors and Executive Committee. She is Professor of Cognitive Systems at the University of Bamberg and has been teaching and researching in the field of artificial intelligence for many years, focusing on human-like machine learning and methods for interactive and explanatory learning. She is the initiator and head of the Elementary Informatics Research Group (FELI) and has been committed to teaching basic informatics concepts in primary schools for over ten years. In 2020, she was awarded the Rainer Markgraf Prize for her commitment to knowledge transfer in the field of computer science, especially on the topic of AI, for children, teachers and the general public. Ute Schmid is a member of the Bavarian AI Council.