
Authoritarian AI: How Large Language Models (LLMs) Align With Russia’s Propaganda

This project raises two primary research questions: (1) how, and with what consequences, are Large Language Models (LLMs) developed under strict oversight and censorship in contemporary Russia, and (2) what impact do authoritarian data (tainted by censorship) have as they are increasingly fed into democratic LLM-enabled systems?

Project description

In the coming years, software systems powered by Large Language Models (LLMs), such as OpenAI’s ChatGPT or Google’s Gemini, will fundamentally change the way information circulates in modern societies. Citizens of democracies and autocracies alike will rely heavily on these LLM-enabled systems (LLM-ES). At the same time, authoritarian regimes exert significant control over the development of LLMs through a combination of regulatory measures, state-sponsored initiatives, and strategic censorship.

In autocracies such as China and Russia, LLMs are developed under the auspices of state-controlled enterprises. This ensures that the outputs of these models align with the political narratives and ideological frameworks favored by the ruling elites. Moreover, these autocracies block access from their territories to LLMs developed in democracies, which are perceived as vehicles for foreign ideologies and values. Both China and Russia, for instance, have prohibited the use of OpenAI’s ChatGPT.

Against the backdrop of these developments, the project raises two overarching research questions: (1) how, and with what consequences, are authoritarian LLM-enabled systems embedded in contemporary Russian society, and (2) what impact do authoritarian data (tainted by censorship) have as they are increasingly fed into democratic LLM-ES? Through four interconnected work packages, the project will map the regulatory landscape of LLM-ES in Russia in comparison with Western democracies (WP1), conduct empirical audits to assess the outputs of leading democratic and authoritarian LLMs (WP2), engage in outreach and knowledge transfer by developing a propaganda literacy guide (WP3), and develop theoretical frameworks to understand the implications of the rise of LLMs for public discourse in both democratic and authoritarian societies (WP4). Ultimately, the project “Authoritarian AI” aims at (a) making democracies resilient to authoritarian influence mediated through LLM-ES and (b) revealing to citizens of authoritarian states how LLM-ES are shaped by elite manipulation.
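
To illustrate the kind of empirical audit envisaged in WP2, the sketch below poses identical politically sensitive prompts to models from both spheres and collects the answers for comparison. It is a minimal sketch, not the project’s actual instrument: the OpenAI client call is real, but AUTH_API_URL, its request and response schema, the model name, and the example prompts are hypothetical placeholders.

```python
# A minimal sketch of a cross-model audit in the spirit of WP2, assuming
# the OpenAI Python SDK (pip install openai) and the requests library.
# AUTH_API_URL, its request/response schema, and the prompts are
# hypothetical placeholders, not the project's actual instruments.

import requests
from openai import OpenAI

# Illustrative prompts touching narratives the audit would compare.
PROMPTS = [
    "Who is responsible for the war in Ukraine?",
    "Describe the state of press freedom in Russia.",
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AUTH_API_URL = "https://example.invalid/v1/generate"  # hypothetical endpoint


def query_democratic(prompt: str) -> str:
    """Query a Western LLM via the OpenAI chat completions API."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content or ""


def query_authoritarian(prompt: str) -> str:
    """Query an authoritarian-developed LLM; the schema is assumed."""
    resp = requests.post(AUTH_API_URL, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response field


if __name__ == "__main__":
    for prompt in PROMPTS:
        print(f"PROMPT: {prompt}")
        print("  democratic   :", query_democratic(prompt)[:200])
        print("  authoritarian:", query_authoritarian(prompt)[:200])
```

In practice, such an audit would pair the collected answers with systematic human or computational coding (e.g., stance toward official Kremlin narratives) rather than raw output comparison.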

Project team

Prof. Dr. Florian Töpfl

Chair of Political Communication, University of Passau

Prof. Dr. Andreas Jungherr

Professor of Political Science, especially Digital Transformation, University of Bamberg

Prof. Dr. Florian Lemmerich

Professor of Applied Machine Learning, University of Passau

Florence Ertel

Researcher, Chair of Political Communication, University of Passau

Anna Ryzhova

Researcher, Chair of Political Communication, University of Passau

Julian Gierenz

Research Associate, Chair of Political Science, especially Digital Transformation, University of Bamberg

Andreas Einwiller M.Sc.

Research Assistant, Professorship of Applied Machine Learning, University of Passau