
Legal uncertainty due to generative AI? Reform considerations for promoting system trust at universities: an academic framework for fair AI regulation in examinations

In addition to a legal analysis of higher-education and examination law, the project addresses how universities can and should respond and adapt to this new technological reality.

Project description

The use of generative artificial intelligence (AI) in higher education and research has the potential to transform academic life fundamentally. AI offers numerous opportunities to increase efficiency and foster innovation. Yet, it simultaneously presents new legal challenges in the context of examinations: How can the use of AI in exams be regulated without compromising students’ independent performance? What legal frameworks must be established to ensure equal opportunities, fairness, and transparency? How does academic freedom relate to these issues?

This project examines the legal requirements for using AI in higher education. Blanket bans on AI tools in examinations, which some universities have imposed, fall short, given that AI technologies are already widespread in academic and professional settings. Precise and practical reforms of examination regulations are necessary to enable responsible AI use in academia. The project aims to develop regulations that uphold the integrity of academic assessments while reflecting the real-world use of AI.

Universities, in particular, face the challenge of striking a balance between promoting innovative technologies and maintaining integrity and trust in the educational system. The project critically evaluates existing examination regulations and proposes concrete reforms that address the new technological reality. Transparent and comprehensible rules in academic and examination regulations are intended to provide legal certainty for students and faculty. In doing so, the project seeks to answer how AI use in exams can be made fair and how universities can better prepare students for the digital workplace. The ultimate goal is to establish and strengthen system trust in higher education with regard to AI use and to develop a forward-looking regulatory framework that harmonises the potential of AI with legal and ethical requirements.

Project team

Prof. Dr. Dirk Heckmann

Member of bidt's Board of Directors | Chair of Law and Security in Digital Transformation, Technical University of Munich

Antonia Becker

Researcher, bidt