
FAZ guest article about the language AI ChatGPT

In the FAZ, Alexander Pretschner, Eric Hilgendorf, Ute Schmid, and Hannah Schmid-Petri explore the question of what follows from the language AI ChatGPT.


In a guest article published in the Frankfurter Allgemeine Zeitung, Prof. Alexander Pretschner, Prof. Eric Hilgendorf, Prof. Ute Schmid, and Prof. Hannah Schmid-Petri from the bidt board of directors explore the opportunities and risks posed by the language AI ChatGPT and how human-machine interaction can succeed.

One thing is certain: the text generation system ChatGPT produces texts of remarkable quality. ChatGPT and similar language AI models are further assistance systems for humans, comparable to machine translation, a spell checker, or even a manual internet search. This raises many questions about their use – for example, in schools and universities, in medicine, or in journalism.

According to the authors, the real potential of ChatGPT – like that of most machine learning applications – lies primarily in assisting humans. It is therefore important to deliberately shape how the new system is used.

Machines cannot assume responsibility; rather, it is reserved for humans.

Problems that can occur during the use of the new technology are already known from other contexts of cooperation between humans and machines. These include “lack of transparency, uncritical trust in the performance and decision-making of technology and the self-inflicted loss of one’s own abilities”. Specifically for dealing with ChatGPT, the authors cite as particular challenges the failure to cite sources and “a questionable handling of other people’s intellectual property, which makes the technology a powerful software for plagiarism production”.

Attention should therefore first focus on the question of “who has to check the AI-generated text or code proposals: for correctness and completeness, for the absence of inadequate text passages and indication of relevant sources and, if necessary, also for linguistic clarity”. To clarify this, new assignments of responsibility may be necessary. If an AI-generated text is no longer carefully checked by humans, “the problem of civil or even criminal liability could arise”.

Therefore, it is important to “do justice to one’s own responsibility depending on the context, to weigh up opportunities and risks and to develop regulations – from mere etiquette to self-commitments and sector-specific standards of conduct to new legal requirements – as well as a new digital and media competence”.

Prof. Dr. Alexander Pretschner

Chairman of bidt's Board of Directors and the Executive Committee | Chair of Software & Systems Engineering, Technical University of Munich | Scientific director, fortiss

Prof. Dr. Ute Schmid

Member of bidt's Board of Directors and the Executive Committee | Member of the Bavarian AI Council | Head of Cognitive Systems Group, University of Bamberg

Prof. Dr. Dr. Eric Hilgendorf

Member of bidt's Board of Directors, Chair of the Department of Criminal Law, Criminal Justice, Legal Theory, Information and Computer Science Law | Julius-Maximilians-Universität Würzburg

Prof. Dr. Hannah Schmid-Petri

Member of bidt's Board of Directors | Chair of Science Communication, University of Passau