Social bots in social networks influence political discussions through the automated distribution of content, the imitation of human interaction, and the amplification of particular opinions or topics. Bots can thus shape discourse by setting trends or manipulating opinion, which can distort public perception.
One of the main reasons for the use of social bots – or bots for short – in politics is their ability to disseminate large amounts of information quickly, around the clock, largely autonomously and with little effort. When bots amplify certain opinions or topics (e.g. by automatically disseminating posts with particular hashtags), that content gains greater visibility on social media. By creating artificial trends or faking high interaction rates with certain content, bots can create and reinforce the impression of broad support for – or rejection of – political ideas or individuals.
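The scale effect behind this kind of amplification can be made concrete with a toy simulation. The sketch below is purely illustrative: all account counts and posting rates are invented assumptions, not empirical values, and no real platform API is involved.

```python
import random

# Toy model: a few automated accounts inflate the apparent popularity
# of a hashtag relative to organic activity. All numbers are illustrative.

ORGANIC_USERS = 10_000        # human accounts in the sample (assumption)
ORGANIC_POST_PROB = 0.001     # chance a human posts the hashtag per hour (assumption)
BOT_ACCOUNTS = 50             # coordinated automated accounts (assumption)
BOT_POSTS_PER_HOUR = 12       # each bot posts every five minutes (assumption)

def hourly_hashtag_volume() -> tuple[int, int]:
    """Return (organic_posts, bot_posts) for one simulated hour."""
    organic = sum(random.random() < ORGANIC_POST_PROB for _ in range(ORGANIC_USERS))
    automated = BOT_ACCOUNTS * BOT_POSTS_PER_HOUR
    return organic, automated

organic, automated = hourly_hashtag_volume()
total = organic + automated
print(f"organic: {organic}, automated: {automated}, bot share: {automated / total:.0%}")
# With these invented numbers, 50 bots produce 600 posts per hour,
# dwarfing the ~10 organic posts: the hashtag looks far more popular than it is.
```

Even under these modest assumptions, a handful of automated accounts dominates the hashtag's volume – which is precisely why faked interaction rates can suggest broad support that does not exist.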
Distortions of opinion that remain within social media have comparatively little influence, as social networks – with their algorithmically curated content and highly fragmented user base – are not the central sources of public opinion formation. However, opinions, content and topics on social media – especially those with many “likes” as indicators of a statement's popularity – can serve as orientation and inspiration for journalists. As a result, topics or pointed statements from social networks can find their way into mass media such as daily newspapers and thus exert an indirect but potentially major influence on the formation of public opinion.
One challenge is therefore to develop effective strategies to minimise the negative impact of social bots. This is a cat-and-mouse game between those who use bots for covert campaigns and those who want to expose them. Social media operators have drawn up rules on what functions bots may have and for what purposes [1], and they act against the use of problematic bots accordingly. Bot detection is complicated, however, by the fact that a person is often behind a bot and controls the account partly or entirely. In other words, one person can operate a large number of semi-automated accounts, blurring the line between bot and human account and making covert bot campaigns harder to detect [2].
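A minimal sketch can show why semi-automated accounts slip through simple detection. The heuristic below is a deliberately crude, hypothetical example based on posting frequency alone; real detection systems combine many more signals (network structure, content, account metadata) and still struggle with the grey zone described above.

```python
from datetime import datetime, timedelta

def looks_automated(timestamps: list[datetime],
                    max_posts_per_day: int = 100,
                    min_gap_seconds: float = 5.0) -> bool:
    """Very rough bot signal based on posting frequency alone (toy heuristic)."""
    if len(timestamps) < 2:
        return False
    timestamps = sorted(timestamps)
    span_days = max((timestamps[-1] - timestamps[0]).total_seconds() / 86400, 1e-9)
    rate = len(timestamps) / span_days
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    # Sustained high volume or machine-like minimum gaps raise suspicion.
    return rate > max_posts_per_day or min(gaps) < min_gap_seconds

# A semi-automated account posting 30 times a day at human-like intervals
# passes this check -- exactly the blurred line described above.
start = datetime(2024, 4, 29)
human_like = [start + timedelta(minutes=45 * i) for i in range(30)]
print(looks_automated(human_like))  # False: slips through the heuristic
```

The thresholds here are arbitrary assumptions; the point is only that an account mixing human pacing with scripted content defeats frequency-based rules, which is what makes covert, human-steered campaigns hard to expose.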
While bot activity has been detected in a great many political discourses [3], a study of the 2017 German federal election found hardly any political bots on Twitter (now X) among the followers of the seven major parties [4]. The influence of (highly automated, political) bots is more likely a phantom [5] than a serious problem, as most bot campaigns are orchestrated by humans. Bots are used on social media to steer public perception in a particular direction, but this is done by organised groups of people who also exploit automation and other benefits of technological change.
Social media platforms are trying to get to grips with this problem. They need to distinguish between widespread, harmless bots – paid, transparent and labelled accounts that automatically amplify opinions and topics – and harmful bots that try to distort public opinion by dishonest means. This requires continuous cooperation with governments, journalists and users themselves in order to strike a healthy balance between the advantages and disadvantages of automated actors in social networks.
Comparability with analogue phenomena
Public opinion can also be influenced by analogue means. Paid media such as posters and flyers can be used to spread opinions or suggest a particular public opinion. Propaganda is likewise an analogue means that may serve the same purpose as bots. People can also be paid to attend demonstrations and similar events, making a movement appear larger than it actually is. Even in unpaid media contributions – for example, television interviews – people can make statements about public opinion that do not necessarily reflect the actual views of the population.
One of the main differences between bot-driven influence and these analogue forms of influence on political discourse is scalability. In the digital space, a large number of bots can be programmed and managed cheaply for a specific purpose, which would involve significantly higher costs and effort in the analogue world. Another important difference is anonymity: in the digital space, people can hide behind fictitious names and images, and it is very difficult for outsiders to tell whether such an account is a bot, a troll or a real person.
Social relevance
Bots can influence political discussions on social networks by automatically disseminating content and amplifying certain opinions. This has the potential to distort public perception and raises questions about the integrity and authenticity of digital discourse. Developing effective strategies to minimise the negative impact of bots is challenging, as the technology is constantly evolving. Ongoing cooperation between different actors is necessary to find a balance between the benefits and risks of automated actors in social media.
Further links and literature
Recommended reading:
- Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman.
- Woolley, S. C./Howard, P. N. (eds.) (2018). Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. New York: Oxford University Press.
Sources
- [1] X: Automation rules. https://help.twitter.com/en/rules-and-policies/x-automation [29.04.2024]
- [2] The Guardian: Twitter admits far more Russian bots posted on election than it had disclosed. https://www.theguardian.com/technology/2018/jan/19/twitter-admits-far-more-russian-bots-posted-on-election-than-it-had-disclosed [29.04.2024]
- [3] https://firstmonday.org/ojs/index.php/fm/article/view/12392/10743 [29.04.2024]
- [4] https://www.tandfonline.com/doi/abs/10.1080/10584609.2018.1526238 [29.04.2024]
- [5] Bundeszentrale für politische Bildung: Social Bots – zwischen Phänomen und Phantom. https://www.bpb.de/themen/medien-journalismus/digitale-desinformation/290555/social-bots-zwischen-phaenomen-und-phantom/ [29.04.2024]