
Fay Carathanassis on BR24 about the legal options for combating disinformation

Disinformation on the internet is the topic of the BR24 #Faktenfuchs article "AI-generated and controlled: Pro-Russian campaigns ahead of the election" from 19 February 2025. In it, Fay Carathanassis, researcher at bidt, explains the legal situation in Germany and Europe and when such content can be deleted or prosecuted.

[Image: laptop displaying alarmist fake news headlines. © stock.adobe.com / lembergvector]

The spread of disinformation on the internet, particularly through AI-generated content, is a growing challenge. In the run-up to the German federal election in particular, Russian campaigns spread AI-generated propaganda and disinformation on social networks on a massive scale. Those behind them often hide within an opaque network, which makes it difficult to hold them accountable and to remove the content permanently.

In the article, Carathanassis outlines the legal options for combating disinformation that apply in Germany and Europe. In principle, false claims and lies of a general nature are not punishable in Germany.

Two bodies of law are central here: the German Criminal Code, which criminalises certain types of disinformation, and the EU Digital Services Act (DSA), which regulates how platforms must deal with it. The Criminal Code primarily concerns "factual allegations about persons that are made to the person or to third parties", explains Carathanassis.

In the EU, the DSA imposes obligations on very large online platforms such as X, Facebook and Instagram. These platforms must enable users to report content they consider unlawful. Content is unlawful if it is prohibited by national or European law, for example through offences such as defamation or incitement to hatred.

Experts are discussing whether platforms must delete such content if, after it has been reported, they conclude that it is illegal.


However, as the DSA does not define disinformation as such, it takes a different approach: it obliges platforms to identify and assess "systemic risks". Such a risk would be, for example, content that has a negative impact on public debate and electoral processes.

If a platform concludes that a systemic risk exists, it must take measures to "mitigate" it: for example, changing the algorithm, moderating content differently or labelling false information.