AI as a trustworthy journalist?

AI in Journalism: The Impact of Generative AI on Objectivity and Dialogic Openness in Climate Debates

The project investigates how AI can help increase the acceptance of climate-protection messages and promote an objective engagement with counter-arguments.

Project description

This project investigates how citizens perceive AI-generated journalistic content, using the example of climate change. Through a series of experiments, qualitative surveys and a quantitative content analysis, it examines whether AI can act as an “honest broker” on the topic of climate change and promote dialogue and openness in polarised social debates.

Socially controversial and ideologically charged debates (e.g. on climate change, abortion or the gendering of texts) are characterised by two or more largely irreconcilable camps. The more pronounced individuals’ attitudes towards an issue are, and the more important the issue is for their identity and values, the less willing they generally are to engage with members of the “other camp” or its opposing opinions, let alone reconsider their own attitudes. In extreme cases, this can lead to social groups drifting further and further apart, with little dialogue between them, making it difficult or even impossible to negotiate joint solutions. People with strongly negative attitudes in particular are often sceptical of journalistic reporting, perceive it as biased to their own disadvantage (hostile media bias) and have little trust in the objectivity of journalists.

However, initial studies show that texts produced by generative AI are rated as more credible, factual and balanced than news written by human journalists. One explanation is that, in the absence of further information about a source, readers apply heuristics to assess its credibility. In the case of AI, a “machine heuristic” is often applied: the idea that computers, software and other machine agents are fundamentally objective, precise and reliable. Information generated by them is consequently considered more trustworthy than texts created by humans, on the assumption that the machine is not pursuing hidden interests of its own. This in turn weakens the impression that media coverage is distorted to the detriment of one’s own position, an effect that is particularly pronounced in groups with extreme and strongly held attitudes.

Building on this effect, news generated by generative AI could be used to increase readers’ willingness to engage with the arguments of the opposing side and to enter into dialogue with the opposing camp.

The aim of the project is to investigate empirically, in a series of experiments, qualitative surveys and a quantitative content analysis, whether generative AI could act as a kind of “honest broker” of information and as a mediator between affectively polarised camps in social debates. This would in turn increase dialogue and the willingness to compromise in order to solve social challenges such as the implementation of climate protection measures and the energy transition.

Project team

Prof. Dr. Hannah Schmid-Petri

Member of bidt's Board of Directors | Chair of Science Communication, University of Passau

Daria Kravets-Meinke

Researcher, bidt
