Alexander Pretschner is a sought-after interlocutor. Even though it is not his style to problematise or to urge haste, anyone who talks to him quickly senses a certain urgency. It stems from his research and field of activity: Alexander Pretschner is committed to shaping digitalisation.
The computer scientist heads the Chair of Software and Systems Engineering at TUM and initiated the Bavarian Research Institute for Digital Transformation (bidt), where he is Chairman of the Board of Directors. He is also scientific director of the fortiss institute, which has for ten years been conducting research and developing technology solutions, for example on artificial intelligence.
“The bidt’s mission is to understand the digital transformation and enable society to shape it,” says Alexander Pretschner. Given the pace of digitalisation and the changes it brings for society, the urgency of this task is obvious.
The question of responsibility
The bidt is deliberately interdisciplinary. The institute promotes research projects that scientists at different university locations in Bavaria jointly implement. And research on the consequences of digitalisation is also being conducted at the institute. This includes the “Ethics in Agile Software Development” project, which Alexander Pretschner is leading with philosopher Prof. Dr. Julian Nida-Rümelin.
Until now, ethical considerations have not played a major role in technical subjects.
Prof. Dr. Alexander Pretschner
Whether what one does as a software developer benefits society, or whether one’s work can cause harm – such questions have not been considered systematically. “It will happen more and more often in the future that large software systems are built whose decisions potentially affect many people. So the obvious thought is to hold those who build the systems accountable for considering whether what they do could have negative consequences.”
As a negative example, Alexander Pretschner cites software systems that produce undesirable results because, for example, racial or religious prejudices are embedded in the underlying data.
He is keen to emphasise one point:
The decisive factor is who takes responsibility for technological developments.
Prof. Dr. Alexander Pretschner
“Ultimately, a human being must always decide whether something is done or not. I think it is necessary to be aware of this and to sensitise students to it. Of course, we as engineers bear responsibility – but not only we, by the way. Company managements are also called upon, as are operators and users of such systems – and ultimately society, i.e. civil society, politics and business.”
In this context, Alexander Pretschner attaches great importance to not restricting innovation too much: “But if this leads us to exclude too many new possibilities from the outset and prevent developments because they could potentially be harmful, then we are throwing the baby out with the bathwater. The challenge, of course, is to arrive at an appropriate balance on a case-by-case basis in the sense of a European way.”
The potential of new technologies
Alexander Pretschner repeatedly takes on the role of an explainer when it comes to understanding the technological aspects of digitalisation and the challenges that come with them.
“At the moment, arguments are not particularly fact-based. For example, the discussion about the jobs that robots will supposedly take away in the future seems a bit hysterical to me.” Alexander Pretschner also considers the way the debate about artificial intelligence, or AI for short, is currently being conducted to be “rather exaggerated”, as he soberly puts it.
Alexander Pretschner quickly brings his conversation partners back down to earth, regardless of whether they paint horror scenarios – such as artificial intelligence one day threatening to take over the world – or place excessive expectations on new technologies. He has been asked about artificial intelligence often in recent months.
One of his main areas of research is testing. This involves comparing the actual behaviour of a system with its desired “target behaviour”, as it is called in technical jargon. “This turns out to be very difficult in the field of data-based AI because normally there is no clear specification of the target behaviour,” says Alexander Pretschner. And he never tires of explaining that AI, about which so many headlines are produced, is only one technology among many.
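To make the missing “target behaviour” concrete, here is a minimal sketch in Python – our own illustration, not code from Pretschner’s research – contrasting a conventionally specified function, which can be tested against an exact expected result, with a toy data-based component for which testing has to fall back on a statistical threshold over labelled examples; the spam-filter function, the evaluation set and the threshold are purely hypothetical.

```python
def sort_numbers(xs):
    """Conventional software: the target behaviour is fully specified."""
    return sorted(xs)

# Classic test: actual behaviour must equal the specified target behaviour.
assert sort_numbers([3, 1, 2]) == [1, 2, 3]

def learned_filter(text, spam_words=("win", "prize", "free")):
    """Stand-in for a data-based component whose behaviour comes from
    examples rather than from an explicit specification."""
    return any(word in text.lower() for word in spam_words)

# With no exact oracle for each individual input, testing falls back on a
# statistical criterion over a labelled evaluation set.
evaluation_set = [
    ("Win a free prize now", True),
    ("Meeting moved to 3pm", False),
    ("You win, claim your prize", True),
    ("Lunch tomorrow?", False),
]
correct = sum(learned_filter(text) == label for text, label in evaluation_set)
accuracy = correct / len(evaluation_set)
assert accuracy >= 0.75  # a threshold we chose, not a true specification
print(f"accuracy on evaluation set: {accuracy:.2f}")
```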
My field is software engineering, and we have a toolbox of different tools at our disposal – AI is just one of them, albeit a very powerful one.
Prof. Dr. Alexander Pretschner
As a computer scientist, he is naturally also interested in the technical challenges associated with AI.
“Besides the societal and political level, there are many difficult technical questions about AI that we need to answer. One of the biggest problems is that our data is often of poor quality and therefore cannot be used directly.
The second problem with data-based AI stems from the fact that we deliberately do not write down relationships explicitly but instead let the systems learn from examples. This is done because the direct relationships are sometimes too complicated – the data-based approaches provide a good approximation. But it turns out that this doesn’t always work well, because many relationships are already very well understood, such as gravity – to name just one example. It is neither helpful nor necessary to let a system learn such already-known relationships again in a data-based way. The technical question then is how to combine explicit knowledge in the form of equations with data-based, non-explicit rules.”
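As a rough illustration of that last question – again our own sketch, not Pretschner’s work – the following Python snippet combines an explicitly known equation (free fall under gravity) with a small data-based correction fitted to hypothetical measurements, instead of letting a model relearn gravity from scratch; the measurement values and the quadratic form of the correction are assumptions made purely for the example.

```python
G = 9.81  # gravitational acceleration: explicitly known physics

def physics_model(t):
    """Explicit knowledge: distance fallen after t seconds, ignoring drag."""
    return 0.5 * G * t ** 2

# Hypothetical measurements in which an unmodelled effect (e.g. air drag)
# makes the observed distances deviate slightly from the pure law.
observations = [(1.0, 4.7), (2.0, 18.9), (3.0, 42.1), (4.0, 74.0)]

# Learn only the residual that the equation does not explain, using a
# one-parameter least-squares fit (residual ~ a * t**2).
numerator = sum((s - physics_model(t)) * t ** 2 for t, s in observations)
denominator = sum(t ** 4 for t, _ in observations)
a = numerator / denominator

def hybrid_model(t):
    """Explicit equation plus the data-based correction."""
    return physics_model(t) + a * t ** 2

for t, s in observations:
    print(f"t={t:.1f}s  measured={s:.1f}  "
          f"physics={physics_model(t):.1f}  hybrid={hybrid_model(t):.1f}")
```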
Informed decision-making
Alexander Pretschner is one of those scientists who are not afraid to discuss the challenges of their discipline with people from outside the field. This is evident in the interdisciplinary exchange he seeks in his research and his willingness to engage in dialogue with the public.
In addition to the lack of fact-based knowledge about digitalisation that he observes in parts of the public, he also perceives a great deal of uncertainty, often coupled with fears.
There is a feeling in society that digital transformation is just happening.
Prof. Dr. Alexander Pretschner
“Technological developments can change a lot. I see incredible potential in them, but of course they can also have negative effects. But I don’t believe that one should always only see the problems. Thinking first and foremost about what can go wrong is a very European approach. Because even if difficulties show up, we are strong enough as a society to deal with them.”
On a societal level, Alexander Pretschner sees participation as a major challenge: ensuring that everyone can take part in society through digital technologies. “Of course, this refers not only to the technology but also to the content that is made available. How do people learn to deal with data? How do they deal with misinformation? How do they learn to develop judgement?”
Through his commitment, not least at the bidt, the computer scientist strengthens the judgement of people who are not (yet) technical experts: “That is close to my heart as a professor.”
At the Technical University of Munich, he organises a lecture series on digitalisation, covering aspects as diverse as digital education, blockchain and management issues. Ultimately, the aim is to secure society’s ability to act at a time when the pace of digitalisation makes it difficult to keep track of things. “Technologies are neither good nor evil,” says Alexander Pretschner. “It’s up to us humans how we deal with them.”