
WELT guest article: How ethically (un)problematic is ChatGPT?

In a guest article in the WELT, Julian Nida-Rümelin and Dorothea Winter discuss how ethically (un)problematic the text-generation system ChatGPT is.


In a guest article “AI can write like Shakespeare, but it only copies” in the WELT, Professor Julian Nida-Rümelin, member of the bidt Board of Directors, and Dorothea Winter, research assistant at the Humanistische Hochschule Berlin, discuss how ethically (un)problematic the text-generation system ChatGPT is.

They note that “chatbots like ChatGPT are ultimately large, highly effective plagiarism machines without source citations”. According to the authors, these systems pick up phrases that occur in certain contexts and reproduce them; their output is mostly syntactically correct, but it has no semantics. For Nida-Rümelin and Winter, this leads to ethical problems, which they illustrate using different problem areas.

The first ethical problem is the so-called “framing effect” that can arise with ChatGPT. This refers to a presentation of an issue in which aspects are omitted or emphasised in order to steer the reader towards a certain interpretation or recommendation for action. According to the authors, ChatGPT clearly engages in framing:

ChatGPT provides only one answer to a query. This answer stands alone – without alternatives, without relations, without classification. This gives even digital natives the impression that it is the only relevant result.

The second ethical problem, according to the authors, is the threat of bias: in the context of ChatGPT, a discriminatory distortion of perception, memory or opinion formation. It is caused by a skewed selection of data or by their flawed processing: “If the underlying data themselves contain biases, these are reproduced in the chatbot’s answers”.

So how can ChatGPT be designed ethically? According to Julian Nida-Rümelin and Dorothea Winter, digital humanism, which “places the strengthening of human authorship and creative power at the centre of the digital transformation”, can provide orientation here.

Chatbots like ChatGPT are indeed instruments “that can be used to realise meaningful economic, social and cultural goals, in educational and training practice, in situations where rapid orientation is required in a complex decision-making environment, as a representation of discourse situations and bodies of knowledge”. However, their use requires a critical examination of the reliability of the information they provide, which in turn calls for greater transparency about the underlying data.

“ChatGPT is second to none, yet it remains only a well-functioning AI that can write operating instructions in the style of Shakespeare and sometimes even pretend to be a human counterpart. However, ChatGPT lacks something essential: a moral compass.”

Prof. Dr. Dr. h.c. Julian Nida-Rümelin

Member of bidt's Board of Directors, Professor emeritus of Philosophy and Political Theory | Ludwig-Maximilians-Universität München

Dorothea Winter M. A.

Research Associate, Humanistische Hochschule Berlin