Little Trust in the State when it Comes to AI Regulation

Concerns about AI risks are widespread across party lines and age groups, with low trust in politicians to regulate it appropriately.

The release of ChatGPT, an AI-based chatbot developed by OpenAI, at the end of 2022 triggered considerable hype around artificial intelligence (AI).
On the one hand, the high quality of ChatGPT's responses to complex prompts reveals the great potential of AI. On the other hand, the technology's rapid development has drawn a growing number of critical voices. An open letter from AI researchers and senior executives at technology companies, calling for a temporary moratorium on the development of certain AI technologies, caused a particular stir.
A survey by the Centre for AI Risks & Impacts (KIRA) examines how this shift in the discourse is affecting public opinion on AI and its regulation. To this end, 2,500 people in Germany aged 18 and over were surveyed about their attitudes towards AI in April 2023.

General attitudes towards AI yield no clear picture: 29.3% of respondents are somewhat or definitely in favour of the development of AI, 29.9% are neutral, and 37.5% are somewhat or definitely against the development of new AI technologies. The slight lean towards a negative attitude is most apparent at the extremes: only 9.6% of respondents are definitely in favour of the development of AI, while 17.6% are definitely against it.

Respondents' expectations for AI's future influence are considerably more negative overall. While 38.5% expect AI to have both positive and negative effects on the world within the next 10 years, fewer than 3.0% believe AI will have a very positive impact overall, whereas 22.1% expect a very negative one.
When asked about the most worrying possible effects of AI, 58.4% of respondents cited the abstract danger of a super AI that could become a threat to humanity, 56.7% pointed to AI's influence on public discourse, and 54.0% named the risk of widespread surveillance by AI as particularly worrying. Notably, assessments of AI and its risks do not differ significantly across age groups or party lines.

Respondents do not trust the state to regulate AI adequately and keep its risks under control: 72.4% have little or no trust in the state, while only 6.3% have some or great trust.
Across society, people in Germany see major risks from AI and doubt the state’s ability to regulate it appropriately.