| News | In the media | ChatGPT and the influence on moral judgement

ChatGPT and the influence on moral judgement

In a recent study, Sebastian Krügel, Andreas Ostermaier and Matthias Uhl from the Ingolstadt University of Applied Sciences examined the influence of ChatGPT on the moral judgement of users.

The study was conducted within the framework of the bidt-funded junior research group “Ethics of Digitization”. It was published in Nature Scientific Reports and was picked up by numerous media.

While ChatGPT can help in searching for information and answering questions, the application is currently neither able to make ethical decisions nor does it have a basic moral stance. As the study shows, ChatGPT nevertheless readily provides moral judgements – but these are not consistent across numerous trials. This makes the support of a large-language model as a moral advisor questionable – after all, consistency is precisely a basic prerequisite of ethical action. In this context, the authors of the study also ask to what extent the statements of ChatGPT have an influence on the moral actions of the users.

The two-stage experiment by Krügel, Ostermaier and Uhl was conducted with 767 participants to answer the following research questions:

  1. Does ChatGPT provide consistent moral advice?
  2. Does this advice influence the moral judgement of the users?
  3. Are users aware of how much the advice influences them?

The researchers found that ChatGPT’s moral advice was inconsistent by giving the bot the same dilemma situations in different wording and evaluating the responses. Furthermore, it became clear that the respondents’ judgements were influenced by the chatbot’s advice. It is noteworthy here that this influence was also recognisable when the participants knew that the moral advice came from a bot. Finally, it became clear that the influence of ChatGPT’s advice on the users’ judgement was underestimated by them.

Based on their results, the researchers envisage that ChatGPT could respond to moral questions in a more differentiated way or not at all in the future. However, they fear that this approach cannot be fully practised.

For this reason, the researchers believe that it is particularly important to develop the digital literacy of users and to create awareness of how ChatGPT and other large language models work. However, there are currently limits here as well, because despite the knowledge that the moral recommendations for action came from a bot, the participants in the study allowed themselves to be influenced by them. It is to be hoped that with a deeper understanding of the chatbot, the competence of the users will also increase. This could mean, for example, that they ask the bot for further arguments or views before a moral decision is made.