Project description
This project investigates how citizens perceive AI-generated journalistic content, using climate change as an example. Through a series of experiments, qualitative surveys and a quantitative content analysis, it examines whether AI can act as an “honest broker” on the topic of climate change and promote dialogue and openness in polarised social debates.
Socially controversial and ideologically charged debates (e.g. on climate change, abortion or gender-inclusive language) are characterised by two or more camps that are largely irreconcilable. The more pronounced individuals’ attitudes towards an issue are, and the more important it is for their own identity and values, the less willing they generally are to engage with members of the “other camp” and its opposing opinions, let alone to reconsider their own attitudes. In extreme cases, this can lead to social groups drifting further and further apart, engaging in little dialogue with each other and thus making it difficult or even impossible to negotiate joint solutions. People with strongly negative attitudes in particular are often sceptical about journalistic reporting, perceive it as biased against them (hostile media bias) and have little trust in the objectivity of journalists.
However, initial studies show that texts produced by generative AI are rated as more credible, factual and balanced than news written by human journalists. One explanation is that, in the usual absence of further information about a source, people fall back on heuristics to assess its credibility. In the case of AI, a “machine heuristic” is often applied – the idea that computers, software and other machine agents are fundamentally objective, precise and unbiased. Consequently, information generated by them is considered more trustworthy than texts created by humans, as the machine is assumed not to pursue any hidden interests of its own. This in turn helps to weaken the impression that media reporting is biased against one’s own position. The effect is particularly pronounced among groups with extreme and strongly held attitudes.
Building on this effect, messages produced by generative AI could offer an opportunity to increase people’s willingness to engage with the arguments of the opposing side and to enter into dialogue with the opposing camp.
The aim of the project is to investigate, in a series of experiments, qualitative surveys and a quantitative content analysis, whether generative AI could act as a kind of “honest broker” of information and as a mediator between affectively polarised camps in social debates. This in turn could increase dialogue and the willingness to compromise in addressing social challenges such as the implementation of climate protection measures and the energy transition.
Project team
Prof. Dr. Hannah Schmid-Petri
Member of bidt's Board of Directors | Chair of Science Communication, University of Passau