The EU's "Artificial Intelligence Regulation" recently came into force. At the same time, active use of AI is open to a wider audience than ever before. This article analyses the connection between the use of generative AI and the desire for its stronger regulation.
On 1 August 2024, the EU Regulation on Artificial Intelligence (AI Act) came into force. It lays the foundation for the regulation of AI and represents the "first comprehensive set of rules for AI" (Federal Government 2024) worldwide. Although the legislative process was initiated back in 2019, it was above all the release of ChatGPT in late 2022 that drew renewed attention to this debated legal framework, as ChatGPT made generative AI accessible to a very broad audience for the first time as a form of general-purpose AI. The main reasons for this are low-threshold access and intuitive operation in the form of a dialogue system that works in natural language. In addition, the integration of generative AI into existing software products and devices such as office programmes, messenger services, internet browsers, smartphones or the Windows operating system means that people now almost inevitably come into contact with it.
However, even though generative AI is capable of much and does many things very well, its use also brings challenges. Among other things, (generative) AI can produce or reproduce errors and inaccuracies, reinforce prejudices or facilitate the spread of disinformation and hate speech. The EU AI Regulation was adopted in May 2024 with the aim of minimising these risks, among others. Even before its adoption, however, the regulation provoked different reactions from various interest groups: there is talk of a "clear signpost for the use of AI" (Bündnis 90/Die Grünen 2024). At the same time, the regulation is also described somewhat ambivalently as "the best bad AI regulation in the world" (Zeit 2024) or even as a legal framework that "jeopardises the competitiveness of European companies" (according to the Bitkom managing director on Deutschlandfunk 2024). But what does the population, who for the first time have the opportunity to become active users of AI thanks to ChatGPT and similar generative systems, think about the regulation of AI? And do users and non-users differ in their regulatory wishes? To answer these questions, we draw on bidt survey data from July/August 2023 (bidt 2023), covering a total of 3,020 internet users aged 16 and over in Germany. The results are representative of internet users in Germany and weighted according to age, gender, education and federal state.
No clear correlation between desire for stronger regulation and use
Overall, a majority of 52% of respondents are in favour of stronger regulation of generative AI. Among those respondents who have never heard of generative AI, a comparatively large proportion (17%) have no opinion at all. Nevertheless, half of these respondents are also in favour of stronger regulation, a proportion almost as high as among respondents who have already used generative AI systems several times. Neither mere awareness of generative AI nor the frequency of use, which is only roughly recorded here, appears on its own to be decisive for attitudes towards the regulation of generative AI.
Knowledge influences the desire for regulation
To gain more detailed insights, a second step also took into account whether respondents know that results of generative AI can be factually incorrect – referred to below as "critical knowledge". This is because so-called hallucinations, i.e. convincingly and fluently worded but factually incorrect statements, are an almost unavoidable phenomenon of generative AI. Because many generative AI applications are so easy to access, the prior knowledge of those using them varies considerably, ranging from people with good prior knowledge to those with little or no awareness of the limits of generative AI's "factual accuracy". Combining the use of generative AI with "critical knowledge" about this technology results in four groups (see Figure 1): critical users, non-critical users, critical non-users and non-critical non-users.
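To make this classification explicit, the following minimal Python sketch shows how respondents could be assigned to the four groups from two survey items. It is illustrative only, not the bidt analysis code, and the variable and field names are hypothetical assumptions:

```python
# Illustrative sketch only (not the bidt analysis code): assigning
# respondents to the four use x knowledge groups from two survey items.
# The field names below are hypothetical assumptions.

from collections import Counter
from dataclasses import dataclass


@dataclass
class Respondent:
    has_used_gen_ai: bool   # has used generative AI at least once
    knows_fallible: bool    # knows that outputs can be factually incorrect


def classify(r: Respondent) -> str:
    """Map a respondent to one of the four use x knowledge groups."""
    if r.has_used_gen_ai:
        return "critical user" if r.knows_fallible else "non-critical user"
    return "critical non-user" if r.knows_fallible else "non-critical non-user"


# Toy example: tally group sizes in a small, made-up sample.
sample = [
    Respondent(True, True),
    Respondent(True, False),
    Respondent(False, True),
    Respondent(False, False),
]
print(Counter(classify(r) for r in sample))
```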
The groups formed differ significantly in terms of age: users are on average considerably younger (non-critical users: 32 years, critical users: 36 years) than non-users (both groups over 50 on average).
The majority of people with “critical knowledge” agree with the statement that the use of generative AI should be more strongly regulated: 55% of critical users of generative AI express a desire for stronger regulation, while this figure is as high as two thirds among critical non-users (see Figure 2). In the group without “critical knowledge”, only 47% of non-users would like to see stronger regulation. The desire for more regulation is least pronounced among non-critical users at 39%. In the latter group, almost a fifth even reject stronger regulation of generative AI. Among users with “critical knowledge”, this figure is only around half as high at 12%.
As a concrete regulatory measure, around 63% of respondents are in favour of obliging providers of generative AI systems to identify the sources of generated content, as some providers, such as the AI of "Brave Search" (heise+ 2024), already do by default. Across the user groups analysed here, the pattern closely mirrors the general desire for stronger regulation: 69% of critical users and as many as 84% of critical non-users agree with this obligation, compared with only 39% of non-critical users and 56% of non-critical non-users.
The desire for regulation is therefore driven more by critical engagement with and knowledge about generative AI than by actual use. People who know that generative AI can produce factually incorrect results are, regardless of whether they are users or not, much more in favour of regulation than those who lack this knowledge.
Conclusion
The results indicate that the desire for stronger regulation is not primarily driven by a vague fear of an "AI myth". It is not the people who are least familiar with generative AI and use it least who are most in favour of stronger regulation, but rather those respondents with "critical knowledge" about it. One reason could be that people with "critical knowledge" doubt that unregulated AI guarantees the level of transparency, safety and protection of fundamental rights needed to address its potentially risky effects. Regulatory measures, whether existing or still to be adopted, should therefore be communicated to the population together with basic knowledge about the technology and the reasoning behind them. Anyone who is unaware of the potential risks of a technology will also have little interest in regulation that, at best, minimises those risks. At the same time, it is important to keep those who are already well informed up to date on new developments and specific regulatory measures. The aim should not be to fall into a narrative of danger, but to increase fundamental understanding of AI and encourage critical scrutiny.
A more in-depth look at the interplay between usage and specific AI skills will be provided in a forthcoming bidt study.
The blog posts published by bidt reflect the views of the authors; they do not reflect the position of the institute as a whole.