
“We want to understand where AI tutors can be put to good use”

With OneTutor, students can deepen their knowledge in a fun way – via chat and quizzes. The AI assistance system was developed by students at the Technical University of Munich (TUM). But does it actually improve learning success? The “AIffectiveness in Education” research project, led by the Bavarian Research Institute for Digital Transformation (bidt), is investigating this.


Professor Alexander Pretschner, Chair of the bidt Board of Directors, at whose department the tool was developed, talks in this interview about the potential of AI-supported learning at universities and about the gaps that remain in research.

Five students at your department developed the AI tool OneTutor as part of a work placement. As a computer science professor, what convinced you about the idea?

Alexander Pretschner: To be honest, I was a little sceptical at first about how well an AI-based assistance system for students would work. But the students proved me wrong. The tutoring system provided excellent answers right from the first tests. Because the students were so fascinated by the topic, they wanted to develop the application further as part of their Master’s theses. In mid-November 2024, they then rolled out the tool in my large computer science lecture with over 1,000 students. From then on, interest in the tool exploded!

Today, OneTutor is used at nine different universities – from Munich to Bayreuth and Augsburg. How does OneTutor work and to what extent is artificial intelligence actually at work in this tool?

Alexander Pretschner: The software currently has two functionalities. First, students can ask the OneTutor system questions about the content of a particular lecture. The answers are generated by an AI – currently OpenAI’s ChatGPT language model. However, OneTutor does not draw solely on ChatGPT’s general knowledge: lecturers can upload slides and transcripts of lecture recordings to specialise the text corpus. If you asked ChatGPT alone, you would receive examples and explanations that do not necessarily match the lecture content.
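The interview does not describe OneTutor’s internals. Purely as an illustration of the general approach Pretschner outlines – grounding an AI’s answers in lecture-specific material rather than in the model’s general knowledge – a minimal retrieval-augmented sketch could look like the following. All function names, the naive keyword retrieval and the model choice are assumptions for illustration, not OneTutor’s actual code.

```python
# Minimal sketch (hypothetical): answer student questions grounded in uploaded
# lecture material. Not OneTutor's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def retrieve_relevant_chunks(question: str, lecture_chunks: list[str], k: int = 3) -> list[str]:
    """Naive keyword-overlap retrieval over slide and transcript snippets."""
    q_words = set(question.lower().split())
    ranked = sorted(
        lecture_chunks,
        key=lambda chunk: len(q_words & set(chunk.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def answer_from_lecture(question: str, lecture_chunks: list[str]) -> str:
    """Answer a question using only the most relevant lecture excerpts as context."""
    context = "\n\n".join(retrieve_relevant_chunks(question, lecture_chunks))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[
            {"role": "system",
             "content": "Answer student questions using only the lecture excerpts provided."},
            {"role": "user",
             "content": f"Lecture excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```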

And the second functionality?

Alexander Pretschner: The second is the quiz function. The AI can independently create questions about the lecture – both free-text and multiple-choice – and generate suitable answers. Students can use these questions to recall and deepen their knowledge of the lecture content. We have decided that lecturers can and should curate the quizzes in order to ensure their quality.
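Again purely as an illustration of the workflow described here – the AI drafts quiz questions from lecture material, and a lecturer curates them before release – a hedged sketch might look like this. The prompt wording, function name and model are assumptions, not OneTutor’s actual implementation.

```python
# Minimal sketch (hypothetical): draft multiple-choice questions from lecture
# text for a lecturer to review, edit or discard. Not OneTutor's implementation.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def draft_quiz(lecture_text: str, n_questions: int = 5) -> list[dict]:
    """Return draft quiz questions as dicts; a human curates the result."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                f"Write {n_questions} multiple-choice questions about the lecture text below. "
                "Respond with a JSON object containing a 'questions' array; each entry has the "
                "keys 'question', 'options' (four strings) and 'answer'.\n\n" + lecture_text
            ),
        }],
    )
    return json.loads(response.choices[0].message.content)["questions"]
```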

The tool takes on the classic tasks of lecturers: the dialogue with students, the creation of knowledge questions. Hand on heart: how well does the AI perform?

Alexander Pretschner: I used the tool myself for a semester in my lecture – though the idea was never to replace the lecturer, only to supplement them. The quality of the chat responses is impressive, with only a few cases of complete nonsense. Negative feedback from students is in the per mille range. For lecturers, OneTutor saves an incredible amount of work – especially when it comes to creating questions. Anyone who has ever created a quiz knows how much work is involved. With AI, you can create 100 questions in an hour and curate them too.

Are there any other advantages?

Alexander Pretschner: The tool helps lecturers to identify gaps in students’ understanding by allowing them to look at the questions that students have asked anonymously over a certain period of time. In this way, lecturers can identify content that they have clearly not yet explained sufficiently. Whereas in the past in large lectures you could at best guess which topic was not understood, you now have a powerful feedback tool in your hands.

Under the leadership of bidt, the participating Bavarian universities are researching how effectively the AI tool influences the learning success of students. What research gap do you want to close with this?

Alexander Pretschner: We already know that students use AI assistants in general, and our system in particular, enthusiastically and intensively during their studies. However, this does not automatically mean that the system works effectively. We don’t know whether students really learn better with it than before. Do they stay with it longer, do they have more fun, can they memorise things better? We are interested in these educational research questions in our accompanying research project so that we can further develop the system based on evidence.

What results are you hoping for from the accompanying research project – and are there already initial indications of the tool’s effectiveness?

Alexander Pretschner: My hope is that by the end we will have understood where the systems are useful and where they are not. The answer will not be: Yes, it works – or no, it doesn’t work. It will depend on the context in which the tool is used. Whether it is a large or small course. Whether it’s a lecture for first-semester students or advanced students. Whether it is a language-based science such as law or a technical science such as computer science. Whether the science is about factual questions or judgements. I would also assume that the tool works more efficiently in an anonymous 1,000-person lecture than in a small course with ten people. We need to test these hypotheses.

How is bidt proceeding in the accompanying research project and which partners are involved?

Alexander Pretschner: With the help of researchers from the field of education, bidt has developed questionnaires that are answered by users. We correlate the results with how intensively the system is used at the partner universities. At the moment, we are working with a sample: four universities of applied sciences and five universities are using the system in five to ten courses each.

Trust also plays a major role in the use of the tool. To what extent does the project fit in with bidt’s new research focus “Humans and Generative Artificial Intelligence: Trust in Co-Creation”?

Alexander Pretschner: There is a lot of overlap here! I’m particularly interested in how trust in the system will change over time. I myself was also rather sceptical at the beginning, but now I have a completely different view of the potential. People are often very demanding when it comes to evaluating such tools; we expect the technology to always be 100 per cent correct. When we talk to people, on the other hand, we are much more generous, and even teachers, doctors and professors don’t always tell the 100 per cent truth!

In my view, unfounded fear of such technologies is misplaced – after our initial experiences, we can assume that our system will do no harm and we must now simply try out and understand where and how well such systems work. Otherwise, we run the risk of the world passing us by.


For us, however, such systems are always intended to complement teaching; they are not intended to replace human professors and tutors.

What’s next for OneTutor?

Alexander Pretschner: The students are currently in the process of turning OneTutor into a start-up, which I think is very worthy of support. The area of application is not limited to colleges and universities; discussions are already taking place with potential partners from industry and public authorities. What I find fascinating about this example is that we have the opportunity to build a technology that has a direct impact, that we can analyse immediately and whose findings we can feed directly back into the technology. In my opinion, we in Bavaria and Germany should have the courage to roll out such systems on a large scale and test them in parallel much more often. Otherwise, applications from other countries will be just around the corner in a few years’ time.

Thank you very much for the interview!

The interview was conducted by Anja Reiter