
Do we need more artificial intelligence oriented towards the common good?

In the spirit of "research in dialogue", we brought together two experts who can tell us more about the use of AI and algorithms, knowledge building among the population and the opportunities of AI for the common good – a look beyond the institutional horizon: Julia Gundlach, co-leader of the "reframe[Tech] – Algorithms for the Common Good" project at the Bertelsmann Stiftung, and Dr. Roland A. Stürz, head of the bidt think tank and co-author of the bidt-SZ-Digitalbarometer.


Without concrete data on knowledge about artificial intelligence (AI) and algorithms, one can only make assumptions about the question: Where do we stand in Germany? bidt and the Bertelsmann Stiftung have conducted two independent surveys on this. While the bidt-SZ-Digitalbarometer, published in 2022, collected data on digital transformation topics – such as the population's digital competencies and AI skills – the Bertelsmann Stiftung studies published in 2018 and 2022 focused exclusively on AI and algorithms.

Ms Gundlach, Dr. Stürz, where is AI already being used as a cross-sectional technology?

Roland A. Stürz: A classic example is the autonomous driving assistant in cars. Another example is chatbots used on websites to improve customer dialogue. But companies are also increasingly using AI in many other areas, such as logistics or to increase cyber security.

Julia Gundlach: Indeed, these business-related examples are what come to mind when we hear "AI". This matches our observation at the Bertelsmann Stiftung that AI and algorithms are largely developed and used out of economic interests and efficiency concerns. Examples of public good-oriented AI are rarer in practice.

What do you mean by public good-oriented AI?

Gundlach: What the term common good means has to be continuously negotiated socially, so it depends on place and time. Generally speaking, we understand public good-oriented AI to be applications that address societal challenges and pursue more than just profit – for example, when AI contributes to reducing discrimination against and disadvantages of certain population groups, or allows more people to participate in digital society. For recent research with Betterplace Lab on the use of public good-oriented algorithms, we intentionally narrowed the definition further: we only looked for examples developed and used by civil society or public actors.

What would be concrete examples of AI serving the common good?

Stürz: I would not narrow down public good-oriented AI quite so much. From my point of view, AI used in the medical field, for example in diagnostic procedures for cancer detection, also serves the common good. However, private companies with economic interests are often behind it. The same applies, for example, to reducing empty runs in logistics or to the economical use of fertilisers in agriculture – here, efficiency and the pursuit of profit go hand in hand with the common good by saving scarce resources.

Gundlach: With these examples, I think we have to pay attention to what feedback effects arise and what happens with the resources saved: Will the treatment of patients become "better", or will the use of technology replace hospital staff? One cannot evaluate the question of public good orientation based on the technology alone but must always look at its social embedding. We kept the definition narrow for our current research because we observed that most AI applications come from the business sector. So we asked ourselves why this is, and whether there are not also public good-oriented applications from civil society and the public sector. We found very few concrete examples, but one of my favourites is the algorithm-based allocation of daycare places, which has been used in Steinfurt since 2017. Another example is the chatbot Ina of the Schleswig-Holstein Integration Office, which can answer in easy-to-understand language – this makes it much easier for the many people with reading difficulties to access information and participate.

What is the general level of knowledge about AI among the population?

Gundlach: In our studies in 2018 and 2022, we explicitly asked people to self-assess their knowledge of AI and algorithms. We found that knowledge has increased over the past few years: while in 2018, 41 per cent of respondents said they had fairly accurate or approximate knowledge about algorithms, by 2022 it was already 60 per cent. But especially in the group with a low level of formal education and among people over 60, there continues to be a considerable lack of knowledge.

Stürz: We came to very similar results in the bidt-SZ-Digitalbarometer published in 2022, both in the survey of digital competencies in general and of knowledge about AI. We also see clear differences between socio-demographic groups: people with lower formal education, lower incomes and higher age rate their digital competencies, as well as their understanding of AI, significantly lower than formally more highly educated, higher-income and younger people. The knowledge gap in the population is widening here, so there is a digital divide.

Why is it necessary to build up AI knowledge in society at all?

Gundlach: There must be a basic understanding among the population of the effects of using algorithms and AI – because they are penetrating more and more areas of life. Not everyone needs to know how to programme. But you should know where you interact with algorithms and AI and what effects that can have. From the perspective of our project "reframe[Tech]" at the Bertelsmann Stiftung, it is particularly important that people from politics and public administration build up these skills.

Why are you specifically addressing these actors?

Gundlach: Technology development must be more strongly oriented towards the common good. This also means that people affected by a technology's impact should participate in its development. Actors in key positions, in particular, need to understand the impact and influence of algorithms and AI and, in this context, the scope of their decisions. In future, the public sector will use more and more AI applications and set up the oversight bodies required by the new European digital laws. Our project starts with the question: What competencies are needed for this? It is easy to demand ever more knowledge building. But you also have to say: What does that mean in concrete terms, and how do you build these competencies?

Does politics need to catch up in creating suitable offerings?

Stürz: Let's look specifically at the area of continuing education in Germany – which is very important, especially with regard to lifelong learning. The German continuing education market for adults is relatively complex, as are the associated funding structures. Politics should clear this thicket to make the market more transparent. Austria, for example, has had a quality framework with minimum standards in adult education, Ö-Cert. It has contributed to a sustainable professionalisation of continuing education and is considered a showcase project. This could also serve as a best-practice example for Germany – of course, also with regard to the development of digital competencies and knowledge about AI and algorithms in adult education.

Gundlach: Besides building more competencies, I don't think the importance of positive visions of the future should be underestimated: What can we use AI, algorithms and digital tools for, in concrete terms? Here, politics should present a clear idea of where the journey should go. Unfortunately, this did not happen in the federal government's digital strategy. In addition, it is also necessary to create more spaces and occasions to talk about AI and its impact on our society.

Stürz: I find the call to create more discussion spaces on AI, and on digital transformation in general, very interesting – and that also means focusing more on aspects of non-profit AI. That would certainly advance the social debate and help people form an opinion on AI in the first place. Many people are still quite undecided here.

In what way?

Gundlach: In our survey, we asked about the opportunities and risks of AI and algorithms – much as you did with the Digitalbarometer. More than 40 per cent did not express a clear opinion as to whether the opportunities outweigh the risks or vice versa. That is very unusual in such surveys. And among those who did take a position, the risks outweighed the opportunities.

Stürz: We came to very similar results in the Digitalbarometer. In addition, we were able to see that higher digital skills go hand in hand with more knowledge about AI. And more knowledge leads to a stronger emphasis on possible opportunities rather than risks. Incidentally, respondents perceive medical AI applications for detecting and treating diseases very positively. But in many other areas risks were seen: in driving vehicles, caring for older people or judging court cases.

Gundlach: It's like this: where little social impact is expected, the computer is more readily "allowed" to decide on its own. The fact that we are shown personalised advertising based on algorithms is widely accepted. The same goes for programmes that check spelling or sentence structure – here, people are more likely to be grateful for the support. But in other cases, the computer should not decide alone or even be involved in decision-making, for example in assessing the recidivism of criminals or planning police operations. Interestingly, there is a relatively high acceptance of AI-based facial recognition in public spaces.

What role does media coverage play here? The Bertelsmann Stiftung has researched this.

Gundlach: It is exciting to see which topics the leading and specialised media focus on when reporting on algorithms and AI. We were able to determine that since 2005, the share of socio-political issues in the discourse has decreased sharply, while technical and economic topics have increased. It is also interesting who is quoted and mentioned in the articles: representatives from politics, administration and civil society appear much less frequently than those from business. There is thus a lack of diversity of perspectives in media coverage. To better reflect the full breadth of the discourse on AI and algorithms, actors from politics and civil society should be given more consideration.

Is there a concrete need for action?

Stürz: Of course there is a need for action! We need to promote the formation of public opinion on digitalisation topics and AI through targeted knowledge building among the population, more balanced media reporting and the creation of discussion spaces. What must not happen is that the knowledge gap widens further – in other words, we must prevent older people in particular, but also low-income or formally less educated people, from being left further behind.

Gundlach: That must not happen under any circumstances with a socially relevant topic like AI. Research on this is incredibly important to create reliable knowledge, grounded in facts and figures, for recommendations for action. Even though you have so far taken a broader approach to digital competencies in the Digitalbarometer, while we concentrate more on AI and algorithms, the following applies to both studies: more knowledge about AI and algorithms and more digital skills enable those affected to deal better with the technologies. And that is the basis for technology development to be more strongly oriented towards the common good – as it must be.

Thank you very much for the interview!

The interview was conducted by Nadine Hildebrandt.

Julia Gundlach

Julia Gundlach is co-leader of the project "reframe[Tech]" in the programme "Digitisation and the Common Good" at the Bertelsmann Stiftung and is responsible for the work on the public good-oriented use of algorithmic systems. She works intensively on use cases in which algorithms are used to address social problems. Previously, she was responsible for international networking in the Digital Technologies Forum project at the German Research Center for Artificial Intelligence (DFKI). During her Master's in Public Policy at the Hertie School in Berlin, she completed a professional year in the economic policy department of the Federal Ministry of Economics.

Dr. Roland A. Stürz

Roland A. Stürz is head of the Think Tank department at bidt. Before joining bidt, he worked as a research assistant at the Max Planck Institute for Innovation and Competition in Munich, and before that at the Institute for Innovation Research, Technology Management and Entrepreneurship at Ludwig Maximilian University in Munich. He studied business administration at LMU Munich and Copenhagen Business School and holds a degree in business administration, a Master of Business Research and a doctorate in business administration. He regularly teaches innovation policy courses at the Munich Intellectual Property Law Center. His research interests lie in the areas of innovation policy, digital transformation and industrial evolution.