Project description
Nowadays, with the rise of generative artificial intelligence (AI), even non-experts can create synthetic media quickly and with relatively little effort. This is particularly true for deepfakes, which look increasingly realistic and can therefore be used for systematic deception. At the same time, this development holds considerable potential for law enforcement, as undercover investigations might benefit from voice, speech, and video clones.
However, the use of deepfakes raises various technical, legal, and ethical questions: What are the technical possibilities? What is legally permitted? And what are the ethical and social consequences of using deepfakes in criminal prosecution? We aim to shed light on these questions from an interdisciplinary perspective.
We develop specific recommendations for investigators and for society on how to deal with deepfakes in criminal prosecution and beyond, and we put these recommendations into practice through an interactive demonstration tool. Ultimately, our research aims to answer the question of under which circumstances, and to what extent, the use of deepfakes is socially, morally, and legally acceptable.
Project team
Prof. Dr. Lena Kästner
Professor for Philosophy, Computer Science and Artificial Intelligence, University of Bayreuth
Prof. Dr. Niklas Kühl
Professor of Information Systems and Human-Centric Artificial Intelligence, University of Bayreuth
Prof. Dr. Christian Rückert
Professor of Criminal Law, Criminal Procedure Law and IT Criminal Law, University of Bayreuth
OStA’in Miriam Margerie
Senior Public Prosecutor, Central Office and Contact Point for Cybercrime North Rhine-Westphalia (Zentral- und Ansprechstelle Cybercrime Nordrhein-Westfalen, ZAC NRW)