
When robots write newspapers: Will the public trust AI journalism?

The advent of generative AI presents both opportunities and challenges for journalism. Professor Hannah Schmid-Petri is leading a research project to investigate whether the news audience places more trust in AI journalism than in human journalism, and whether AI could therefore be used to resolve political disputes.

Generative artificial intelligence (AI) is set to revolutionise working methods in many areas of business and society. The work and products of journalists are facing such changes as well. Like other knowledge workers, journalists benefit from ChatGPT and similar tools as tireless assistants, idea generators or editors.

With regard to the tasks of journalism — creating publicity, criticising and monitoring institutions and, in particular, providing trustworthy information on which citizens can base their opinions — a specific question arises: how will the news audience react if journalistic articles on current topics are explicitly labelled as products of generative AI? Are robots regarded as credible and trustworthy authors who work professionally, provide reliable information and report objectively? While most media companies have a positive attitude towards AI innovations and have developed guidelines for dealing with them, which primarily emphasise transparency towards the public, it remains largely unclear what effect the visible use of generative AI has on the audience.

Human journalists and the mistrustful audience

Socially controversial and ideologically charged debates (e.g. on climate change, the implementation of the energy transition, abortion or gender roles) are characterised by two or more camps that are comparatively irreconcilable. The more pronounced individuals’ attitudes towards an issue are, and the more important the issue is for their identity and values, the less willing they generally are to engage with members of the “other” camp and its opposing opinions, let alone reconsider their own attitudes. If citizens already hold a (pronounced) opinion on a current topic, they tend to perceive journalistic reports about it as biased against their own camp — even if these reports strive for neutrality and a balanced representation of both sides. This so-called hostile media bias has been demonstrated many times and is particularly pronounced among people with extreme views. It works against argumentative debate in public discourse because citizens distrust the balance of the reporting.

AI as a journalist: Trust in the machine without self-interest?

But how will people judge media reports whose authors are identified as generative AI? A project in the bidt research programme “Humans and Generative Artificial Intelligence: Trust in Co-Creation” will address this issue. Initial studies in the USA have produced some interesting findings: they suggest that people express less mistrust regarding the objectivity and credibility of reports whose authors are labelled as AI. The researchers suspect that a plausible reason for this greater trust in AI than in human journalists is the assumption that AI operates like a machine — without emotional or self-interested influences, following plain, rational logic. If citizens apply this machine heuristic, they perceive AI reports as “unencumbered” by political sentiment or any intention to manipulate the audience. Consequently, AI journalism may be regarded as more credible and trustworthy than human journalism.

Research about AI journalism at the bidt

If this finding were to be confirmed – and this is precisely what the bidt will be researching in the coming years – it could have interesting implications for heated political controversies and the objectivity of public discourse: if AI journalism is experienced as a trustworthy source of information across different camps, it should increase people’s willingness to consider balanced representations and thus also to hear and weigh the arguments of their respective opponents. AI reporters who are perceived as neutral could potentially help to overcome polarisation and deadlocked political disputes.

Given that this research is still in its infancy, it is too early for such optimism. Another reason for caution is that a significant portion of the distrust some people harbour towards human journalism has been fuelled by media criticism from populist circles. These actors have a strategic interest in undermining trust in the media as a mediator of social understanding, and it is therefore to be feared that they will soon extend their agitation to AI journalism. Research — at the bidt and elsewhere — must monitor how the population’s trust develops in the coming years. This will be influenced by the extent to which AI appears more frequently in the news and by the extent to which it becomes the subject of political controversy and the target of populist attacks on the media and democracy.

Prof. Dr. Hannah Schmid-Petri

Member of bidt's Board of Directors | Chair of Science Communication, University of Passau