The fight against disinformation on the internet has been high on the European Union's digital policy agenda since well before the COVID-19 pandemic. With the Digital Services Act (DSA), a law has been passed that addresses, among other things, this problem. But is the DSA a suitable instrument for this purpose?
To advance the fight against hate speech and disinformation, the European Union has launched the DSA as part of a legislative package together with the Digital Markets Act (DMA). The remaining provisions of the DSA come into force on 17 February 2024. The regulation is intended to adapt the legal framework and the regulatory instruments to the realities of the internet. The E-Commerce Directive, the DSA's predecessor, dates back to 2000 and, like its US model, the Communications Decency Act (1996), was created when the internet was still in its infancy. The primary regulatory goal at the time was therefore to set a framework for a developing technology without hindering that development. This was done primarily through liability privileges, which made the development of the internet as we know it possible in the first place.
In the meantime, these liability privileges have become outdated – especially against the background of the rapid spread of disinformation online. At the same time, the DSA is intended to prevent emerging legal fragmentation in the internal market in the fight against hate speech and disinformation – Germany took on a pioneering role here with its Network Enforcement Act (NetzDG). At issue is also how much responsibility the operators of large online platforms should bear in combating such content. Finally, hate speech and disinformation are, of course, not new phenomena. Still, they have been significantly amplified by the internet and have become a central issue, not only since the pandemic.
Against this background, this blog post examines whether the DSA can also fulfil the goal of combating disinformation or whether there is room for improvement.
Challenges posed by disinformation on digital communication platforms
Known to the general public as “fake news”, disinformation can be defined as the “deliberate production of pseudo-journalistic false information”. The term “fake news” is also used as a label to delegitimise established news media. If the term is used as a genre rather than as a label, disinformation refers to news reports that are intentionally misleading and are produced and disseminated to influence recipients. Disinformation should therefore not be confused with misinformation – the latter merely describes unintentional errors in the information disseminated, without any intent to deceive.
The phenomenon of disinformation encompasses audio, visual or audio-visual content and can be disseminated through mass media and interpersonal communication. Although the spread of disinformation is nothing new, its importance has grown enormously with the advent of digital technologies. Particularly in the 2016 US election campaign and the Brexit referendum of the same year, the role of social networks in extending the reach of disinformation campaigns has been critically discussed in academic circles and among the general public.
The rapid spread of disinformation on social media and communication platforms in general poses major challenges for a democratic society. Given that media play a central role in opinion-forming processes, the spread of disinformation is particularly problematic and dangerous for public opinion formation, especially in times of election campaigns or crises. Disinformation can, among other things, endanger the legitimacy of political crisis decisions and lead to insecurity and mistrust of political institutions, as the COVID-19 crisis has shown. The danger is particularly great when users obtain their information only on digital communication platforms and otherwise have little contact with other news sources.
In addition, users often have difficulty recognising disinformation because it is usually presented in the form of journalistic articles and, at first glance, hardly differs from editorial texts in quality media. Studies show, however, that recipients turn to fact-checkers or other resources to verify information conveyed by the media when they are unsure how much credibility to give their sources.
To limit and – where possible – combat the spread of disinformation, social media platforms such as Facebook and search engines such as Google use automated detection technologies.
Disinformation – illegal content?
The DSA does not explicitly address disinformation but ties in particular to so-called “illegal content” and provides several instruments to prevent the spread of such content. Central to this is the “notice-and-action” mechanism in Art. 16 DSA. Under this provision, the operators of online platforms must provide a user-friendly tool with which users can report illegal content to the platform. If the platform fails to act, it loses its liability privilege and is liable for the content, even though it originated from a user. Illegal content is also central to orders by national authorities to act against certain content (Art. 9 DSA). However, the DSA does not define what illegal content means; it leaves the definition of this term to the individual member states. In principle, this leads to a broad scope of application of the DSA – broader than that of the German NetzDG, which only covers certain criminal offences. However, the DSA does not differentiate precisely between different types of illegal content, and so-called “awful but lawful” content – content that can be considered harmful but must still be classified as legal – is not covered at all.
Pure disinformation, which – at least under German law – does not constitute illegal content, is likewise not covered. As an untrue statement of fact, it is not protected by freedom of expression, but at the same time it benefits from the “legality of lies”. Only when disinformation affects other legal interests – for example, when it interferes with the personal rights of other persons – does it constitute illegal content (for instance, defamation as defined in section 187 of the German Criminal Code). Although this decision by the legislator may negatively affect democratic discourse, it must nevertheless be seen as fundamentally correct – because the state should not decide which statements are worthy of dissemination and which are not. Instead, the democratic constitutional state must rely on the power of free debate, even if it cannot fully guarantee this prerequisite when it refrains from restricting disinformation.
Even if the DSA's supposedly most important instruments do not apply to the spread of disinformation, it cannot be ruled out that the DSA nevertheless contributes to combating its spread on online communication platforms.
Central to this is, above all, the self-regulatory approach pursued by the DSA. It is up to the platforms to decide which content they want to allow and which they do not. This also applies to disinformation, because the platforms' internal rules and terms of service decisively determine – as is already the status quo – when and under what circumstances disinformation is permissible. At the same time, however, the DSA strengthens transparency obligations: service providers must, among other things, disclose extensive information about their content moderation activities (Art. 15 DSA). This is intended to make private decisions more traceable and to prevent arbitrary moderation. Whether this approach of transparent self-regulation will prove successful remains to be seen and must be tested empirically. It therefore remains questionable whether the DSA effectively counters the spread of disinformation.
The blog posts published by bidt reflect the views of the authors; they do not reflect the position of the institute as a whole.