
Between reconnaissance and manipulation: deepfakes in law enforcement

Generative AI is increasingly being used for criminal purposes, but it also offers great potential for law enforcement. This blog post explains what deepfakes are and how criminals use them, and highlights the opportunities and risks of generative AI in criminal prosecution.


Generative AI opens up completely new possibilities in many areas of life, including law enforcement. One of these possibilities is the use of deepfakes to infiltrate criminal organisations. But would such an application be technically possible, ethically justifiable and legally feasible? And how would legitimising deepfakes in criminal prosecution affect the way we handle and trust media elsewhere? The project “For the Greater Good – Deepfakes in Law Enforcement (FoGG)” addresses these and related questions.

1. What exactly are deepfakes?

The word “deepfake” is a portmanteau of “deep learning” and “fake”. Deep learning is a sub-discipline of machine learning based on artificial neural networks; it deals with adaptive computer systems that continuously improve through experience. With this technology, images, videos and voice recordings can be created that are difficult or impossible to distinguish from genuine recordings. In this way, existing media can be realistically manipulated or entirely new media can be fabricated.
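To make the idea of “improving through experience” a little more concrete, the following short Python sketch (not part of the original post; all names, sizes and data are illustrative assumptions) shows the basic mechanism: a tiny neural network, here an autoencoder, learns to compress and reconstruct images and reduces its error with every training step. Networks of this kind, trained on many images of a target person and vastly larger than this toy example, are a common building block of face-swap deepfake pipelines.

```python
# Illustrative sketch only: a tiny autoencoder as an example of deep learning.
# All dimensions are arbitrary and the "faces" are random placeholder data;
# real deepfake systems train far larger networks on many images of a person.
import torch
import torch.nn as nn


class TinyAutoencoder(nn.Module):
    def __init__(self, image_dim: int = 64 * 64, latent_dim: int = 128):
        super().__init__()
        # The encoder compresses a flattened image into a compact code ...
        self.encoder = nn.Sequential(
            nn.Linear(image_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # ... and the decoder reconstructs an image from that code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, image_dim), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def train_demo(steps: int = 100) -> None:
    # Random tensors stand in for flattened grayscale face images.
    fake_faces = torch.rand(256, 64 * 64)
    model = TinyAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(steps):
        reconstruction = model(fake_faces)
        loss = loss_fn(reconstruction, fake_faces)  # "experience": error feedback
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()  # the network improves a little with every step
        if step % 20 == 0:
            print(f"step {step}: reconstruction error {loss.item():.4f}")


if __name__ == "__main__":
    train_demo()
```

Classic face-swap tools build on exactly this idea: one shared encoder is typically trained together with two person-specific decoders, so that a face encoded from person A can be decoded as person B.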

2. From grandchild fraud to federal politics

Deepfakes are being used more and more frequently to commit crimes. The technology opens up completely new possibilities for fraudsters in particular. One common approach is to refine the so-called “grandchild trick” using deepfakes: fake photos, voice recordings or videos are used to convince private individuals that a close relative urgently needs their financial help. The scam can now be executed so convincingly that it is no longer only gullible people who fall for it. A particularly worrying trend is so-called “CEO fraud”: fraudsters target companies by digitally impersonating the CEO or CFO and inducing employees to transfer money to certain accounts, which can cost companies millions. Deepfakes can also lastingly change the political landscape. At the end of 2023, for example, a deepfake of Olaf Scholz circulated in which he supposedly called for the AfD to be banned. As a society, we therefore face new challenges whose scale cannot yet be assessed.

3. Potential for law enforcement

But the new technology is not only attractive to criminals; law enforcement also sees immense opportunities in it. The hope is that deepfakes could partially replace undercover investigators, whose deployment is lengthy, expensive and, above all, extremely dangerous for the officers involved. All of these disadvantages disappear if law enforcement can digitally clone a person from a criminal network and pose as that person. In this context, a clone is a digital replica of a person that can imitate that person’s voice and likeness in real time. Such a clone could enable law enforcement officers to conduct conversations as the cloned person and thereby obtain information such as crime plans or organisational structures. It is not yet clear whether such a procedure would currently be legally permissible in Germany. In any case, there is no legal provision that expressly authorises law enforcement agencies to digitally clone a person.
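As a purely hypothetical illustration of what “real time” means here (this sketch is not based on any system described in the post or known to be used by law enforcement), such a clone would have to convert every captured video frame and audio chunk of the investigator into the target person’s likeness and voice before it reaches the call. The functions swap_face and convert_voice below are placeholders standing in for face-reenactment and voice-conversion models.

```python
# Purely hypothetical structural sketch of a real-time cloning pipeline.
# The conversion functions are stubs; no real face-reenactment or
# voice-conversion system is referenced or implemented here.
from dataclasses import dataclass


@dataclass
class Frame:
    pixels: bytes   # one captured video frame of the investigator


@dataclass
class AudioChunk:
    samples: bytes  # a short slice of the investigator's speech


def swap_face(frame: Frame, target_identity: str) -> Frame:
    # Placeholder: a face-reenactment model would transfer the
    # investigator's expression onto the target person's face here.
    return frame


def convert_voice(chunk: AudioChunk, target_identity: str) -> AudioChunk:
    # Placeholder: a voice-conversion model would re-synthesise the
    # speech in the target person's voice here.
    return chunk


def relay_call(frames, audio, target_identity: str):
    # Converting frame by frame and chunk by chunk, with minimal delay,
    # is what makes the clone usable in a live conversation rather than
    # only in pre-recorded material.
    for frame, chunk in zip(frames, audio):
        yield swap_face(frame, target_identity), convert_voice(chunk, target_identity)


if __name__ == "__main__":
    frames = [Frame(pixels=b"") for _ in range(3)]
    audio = [AudioChunk(samples=b"") for _ in range(3)]
    for out_frame, out_chunk in relay_call(frames, audio, "cloned person"):
        pass  # in a real system these would be streamed into the video call
```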

4. Does the end justify the means?

In addition to the opportunities mentioned, there are also a number of serious risks. First and foremost, there are dangers for the cloned person. The use of digital clones by law enforcement is, to put it bluntly, state-legitimised identity theft. The cloned person can quickly be regarded as a traitor within their organisation, with potentially severe consequences. Investigators may therefore put the cloned person’s life at risk.

But risks to life and limb are not the only concern when using deepfakes. Our voice and our likeness are important aspects of our identity, and imitating them is therefore a serious deception. If these characteristics are exploited and someone discloses information in reliance on the supposed identity of their counterpart, that person’s right to informational self-determination is violated. Covert use exploits the deceived person’s trust in the voice or image of their counterpart. Such an approach can undermine trust in digital communication in the long term: anyone who has been deceived in this way is very likely to wonder in future whether the person they are talking to (via video) is actually who they claim to be.

5. Loss of trust?

The easy availability of ever-improving generative AI means that the way we deal with digital media in general will have to change. We are increasingly confronted with fabricated images, videos and audio recordings, and not only on social media. As a result, we can no longer trust such recordings in the way we did before the advent of this technology. Even though law enforcement would therefore not be solely responsible for a loss of trust, we must ask what role government agencies can and should play in these processes. By using deepfakes, law enforcement could not only accelerate the loss of trust in digital media but also undermine trust in the state. Equally unforeseeable are the effects of a possible double standard if the state authorises itself to use deepfakes for deception while prohibiting citizens from doing so. These risks must be weighed against the opportunities for criminal prosecution and the investigation of serious crimes. Only if the opportunities outweigh the risks can we then consider more precisely in which situations and to what extent law enforcement should be allowed to use clones.

6. FoGG

The aim of the project “For the Greater Good – Deepfakes in Law Enforcement (FoGG)” is to research these effects of deepfakes and to develop concrete guidelines for their responsible use. We are investigating the technical possibilities and legal requirements for the use of digital clones, embedded in a societal perspective on the risks and potential of deepfakes.

The blog posts published by the bidt reflect the views of the authors; they do not reflect the position of the Institute as a whole.