ChatGPT in schools and universities — a turning point for learning and teaching?

The quality of ChatGPT initially surprised even leading AI experts. It quickly became clear that the language model would have an enormous impact on the German education system. But what does the future of education look like? An interview with Professor Ute Schmid and Professor Dirk Heckmann.

How do we deal with generative AI in an educational context? Bans are neither sensible nor enforceable. A laissez-faire attitude towards generative AI, however, would reduce the quality of education and, in the long term, cause social damage. Between these two poles, new ideas are emerging in research and practice to meet the challenges.

A discussion about forward-looking teaching and learning methods with generative AI.

ChatGPT challenges the core of our education system: this is how some media reports have described the situation since the language model became publicly available. Did you see this coming?

Schmid: I have to say that even we, as AI researchers, were astonished by the quality of ChatGPT when OpenAI released it to the public at the end of 2022. A broad public discussion about its impact in an educational context began very quickly, something that had hardly happened with any other digital tool.

Heckmann: We saw the same in a legal context: wow, something is coming our way. In the winter semester of 2022/23, we developed our first teaching formats with generative AI and also threw ourselves into the topic in our research.

How much has learning, teaching, and testing changed since ChatGPT?

Schmid: In both schools and universities, you can see that pupils and students are increasingly using the new generative AI tools to complete exercises or write essays. Teachers and lecturers are also increasingly using the technology, for example, to correct or create assignments. However, generating content with tools such as ChatGPT in an educational context must be accompanied by didactic and media-pedagogical support. For example, learners should be taught strategies for assessing the truthfulness and quality of AI-generated content. I am delighted that ChatGPT is finally prompting a broad discussion about education in the digital age. This discussion has been overdue for two decades, ever since the advent of search engines, Wikipedia, and machine translation.

At some universities, the first impulse was to ban generative AI. Why can't such a ban be legally enforced?

Heckmann: A ban would interfere with fundamental rights, such as freedom of education and equal opportunities. Learners are allowed to use technological innovations that support their learning process. Furthermore, a ban cannot be enforced in practice because the use of AI in homework, for example, cannot be effectively monitored. The argument that generative AI would violate the principles of good scientific practice also does not hold water. After all, there can be no established standards for dealing with AI in academic work after such a short time.

A ban is not a solution, but it will not work without regulation. What could a legal framework look like that supports the people involved in practice?

Heckmann: The university example shows that the law must reflect the reality of studying. My colleague Sarah Rachut and I recently published an article on this. Study and examination regulations should be adapted so that teaching content and assessment methods correspond to professional requirements and study objectives. Students should be able to use their intelligence wisely and, at the same time, utilise technological resources effectively. As learners increasingly engage with new technologies, the workload must also be adapted. A legal basis is needed for all of this, not least to make it clear that the integration of technological developments into higher education is politically supported.

Generative AI offers many opportunities for teaching and learning. A broad debate on the use of these and other technologies in the education system is long overdue.

Prof. Dr. Ute Schmid

The bidt is responding with the new research focus “Humans and Generative Artificial Intelligence: Trust in Co-Creation”. What does the term co-creation mean?

Schmid: Co-creation means that humans and generative AI systems jointly develop solutions for specific requirements, such as creating a text, an image, or a program. The result emerges through a cyclical, interactive process of prompting, output, and adaptation.
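Purely as an illustration, the cycle Schmid describes can be sketched in a few lines of Python. The generate() function below is a hypothetical placeholder for any generative-model API, not a real library call; the loop simply alternates between model output and human adaptation until the person accepts the result.

```python
# A minimal sketch of the co-creation cycle: prompt, model output,
# human adaptation, repeat. generate() is a hypothetical stand-in for
# any generative-AI model call; it is not a real library API.

def generate(prompt: str) -> str:
    """Hypothetical placeholder for a call to a generative language model."""
    return f"[model output for: {prompt!r}]"

def co_create(initial_prompt: str, max_rounds: int = 3) -> str:
    prompt = initial_prompt
    draft = ""
    for _ in range(max_rounds):
        draft = generate(prompt)  # the model produces a draft
        print(draft)
        feedback = input("Revision request (press Enter to accept): ")
        if not feedback:  # the human accepts the result; the cycle ends
            break
        # The human adapts the request; the cycle continues with refined instructions.
        prompt = f"{prompt}\nRevise the previous draft: {feedback}"
    return draft

if __name__ == "__main__":
    print(co_create("Write a short abstract on co-creation in education."))
```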

To what extent does trust play a role in this interactive process?

Schmid: Trust has two facets. Firstly, it concerns people's trust in AI systems. Trust is ascribed, built up, or eroded on the basis of specific experiences, much as in interactions between people. AI systems must therefore be designed so that people do not trust them naively but rather calibrate their trust. Secondly, there are requirements for the trustworthiness of the AI systems themselves.

Heckmann: This trust in AI is strengthened by the EU’s AI Act, which regulates the conditions of use. For example, manufacturers must disclose the data used to train their systems. The regulation also requires providers to communicate their products’ limitations and uncertainties.

Given this development, what skills are becoming important for teachers and learners?

Schmid: Teachers and students should have a basic understanding of AI's central concepts and methods. The Dagstuhl Triangle, a framework model developed by an interdisciplinary group, can serve as a guide here: teaching technological and methodological foundations provides the basis for the safe, confident and effective use of digital tools, and together these two perspectives make it possible to reflect on the technology's effects on one's own life, society, and the environment. Educational processes must therefore be designed so that people can use digital tools sensibly while basic skills, such as percentage calculation, language proficiency or programming, are not lost.

What experiences have you had so far with AI in teaching?

Heckmann: One example from my teaching is the seminar "The Judge and His Hacker". Its special feature was that the students worked on their topics with the help of generative AI. This included critically reviewing and documenting the entire process. The students acted as quality managers.

Schmid: I have noticed that many students use ChatGPT and other generative tools to work on their programming homework. This can be useful support, but it can also prevent the acquisition of basic skills.

ChatGPT is a valuable tool, not least because it can save time. But the quality of the output has to come from people.

Prof. Dr. Dirk Heckmann

You both lead subprojects at the bidt within the new research focus. What are your projects focussing on?

Schmid: Our project focuses on the co-creation of program code. We want to investigate how students acquire the necessary programming skills even when they use ChatGPT. We will also further develop existing AI approaches to code generation in order to increase the quality of the generated code.

Heckmann: In our project, we are investigating the evolution of studies and examinations under co-creation. The project includes a roadshow in which a team member travels across Germany to engage in dialogue with universities and ministries. The aim is to develop an exemplary examination procedure that breaks with the traditional concept of independent performance. Of course, we will also be cheeky enough to use generative AI for our own assessments while complying with all quality standards.

Thank you very much for the interview!

bidt study highlights challenges for the education system

Better grades thanks to ChatGPT? 42 per cent of pupils and 45 per cent of students aged 18 and over believe that text-based generative AI would have earned them better grades without the corresponding performance on their part. At the same time, however, half of the users at school or university agree with the statement that using this technology has increased their performance. A study by the bidt Think Tank shows that around three-quarters of adult pupils (73 per cent) and students (78 per cent) have already used generative AI.

There is still a need to catch up on risk awareness: only 50 per cent of the pupils and 56 per cent of the students surveyed know that generated results can be factually incorrect. This makes it all the more important to highlight the limits of the technology and to promote the digital skills of learners and teachers. Most respondents find AI tools helpful, but clear guidelines and controlled examination formats are often lacking. For the bidt study, a representative sample of 3,020 internet users in Germany was surveyed, including 252 school pupils and 981 students aged 18 and over.
