Social Bot Pandemic

Social bots have coexisted with humans since the early days of social media. However, we still lack a precise and agreed-upon definition of what a social bot is. This is partly because they are studied by multiple communities, but also because the multifaceted and dynamic behavior of these entities leads to different definitions, each focusing on different characteristics.

Computer scientists and engineers tend to define bots in technical terms, focusing on features such as activity levels, full or partial automation, and the use of algorithms and AI. The existence of accounts that are both algorithmically and human-controlled has led to more nuanced definitions, and the term “cyborgs” has been coined [3].

Sociologists are instead usually more interested in the social or political consequences of bot use, and they define them accordingly.

CACM: A Decade of Social Bot Computing

The Facebook war room in Menlo Park, California, on Oct. 17, 2018, ahead of Brazil’s runoff election. The company was working to reassure the public about fake accounts, disinformation and foreign interference clouding the site’s debate about the election.

Social bots are widely used for various purposes, both useful and dishonest [13]. Most existing work focuses on detecting malicious social bots. The reason is simple if we take into account the classification proposed by Stieglitz et al. [30]. According to their intentions and ability to imitate humans, bots can be:

benign, not trying to imitate people (for example, news and recruiting bots, or bots used in emergency situations);
malicious, deployed by criminals who ruthlessly try to impersonate human beings.

Detecting the first category of bots is not a problem, and scientists have devoted much of their effort to detecting the second, in part because malicious bots interfere with our online ecosystems.
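The technical features mentioned above (activity levels, profile completeness, degree of automation) are the raw material of many detection heuristics. The following is a minimal, hypothetical sketch of feature-based bot scoring; the feature names and thresholds are illustrative assumptions, not taken from any real detector described in this article.

```python
def bot_score(account: dict) -> float:
    """Return a score in [0, 1]; higher values suggest automation.

    The input is a plain dict of (hypothetical) account features.
    """
    score = 0.0
    # Very high posting rates are a common signal of automation.
    if account.get("posts_per_day", 0) > 100:
        score += 0.4
    # Default profiles (no picture, empty bio) correlate with throwaway bots.
    if not account.get("has_profile_picture", True):
        score += 0.2
    if not account.get("bio", ""):
        score += 0.2
    # Following far more accounts than follow back is another weak signal.
    followers = account.get("followers", 0)
    following = account.get("following", 0)
    if following > 0 and followers / following < 0.01:
        score += 0.2
    return min(score, 1.0)

# A crude spam-like profile triggers every rule; a plausible human one triggers none.
spammy = bot_score({"posts_per_day": 500, "bio": "", "has_profile_picture": False,
                    "followers": 3, "following": 5000})
humanlike = bot_score({"posts_per_day": 2, "bio": "journalist", "has_profile_picture": True,
                       "followers": 100, "following": 50})
```

Real detectors use hundreds of such features with trained classifiers rather than hand-set thresholds, and "cyborg" accounts that mix human and automated control are precisely the cases where simple rules like these break down.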

Indeed, the wide range of actions that social bots perform and the low cost of creating and managing them in large numbers open up the possibility of deploying entire bot armies to wage information warfare, artificially inflate the popularity of public figures, and manipulate opinions. At the beginning of the sudden surge of interest in automation and deception, several studies were conducted to measure the scale of the social bot pandemic. The results are alarming. The average bot presence was estimated at around 15% of all active Twitter accounts in 2017 [31] and 11% of all Facebook accounts in 2019 [38].

When political or economic interests are at stake, the presence of bots increases dramatically. A 2019 study found that 71% of Twitter users mentioning trending U.S. stocks were likely bots [8]. Similar results were found for the presence of bots in online discussions about cryptocurrency [24] and in the COVID-19 pandemic "infodemic" [14].

Other studies have focused specifically on the political activity of bots, concluding that bots play a role in strategic information operations orchestrated in the lead-up to numerous world events, as shown in Figure 1.

Despite their role in political debate in the countries highlighted in the figure, bots do not always have real influence. In fact, there is still no widespread scholarly consensus on the impact of social bots, with some studies reporting a key role in increasing the spread of misinformation, polarization, and hate speech [27][29], and competing findings that bots do not play a significant role in these processes [32].

The ubiquity of social bots is partly due to the availability of open source code. In this regard, Bence Kollanyi reported exponential growth, with over 4,000 GitHub repositories containing code for deploying Twitter bots by 2016 [22]. Other investigations have shown that this trend has not stopped. In fact, by 2018, researchers had identified over 40,000 publicly available bot repositories [1]. The picture that emerges is one in which social bots are a favorite tool for deception and crowd manipulation. These findings are supported by the platforms where information operations have been conducted, namely Facebook [b], Twitter [c], and Reddit [d], which have suspended tens of thousands of accounts involved in coordinated activity since 2016.

Social Bots and How to Detect Them

Figure 1. The social bot pandemic

Caption 1: A look at the situation around the world, based on material from 39 countries where political manipulation by social bots has been documented in the scientific literature… Although the list of articles is illustrative rather than exhaustive, it nevertheless conveys the spread of the social bot pandemic.

Given the role bots play in influencing online ecosystems, numerous methods for detecting and removing them have been proposed, and the topic has received widespread coverage in the news media. Today, new research on bot characterization, detection, and impact assessment is being published at an impressive rate, as shown in Figure 2. If this trend continues, more than one new paper will be published per day by 2021. Perhaps more importantly, the speed at which new papers are being published implies a massive global effort to stop the spread of the social bot pandemic. But why all this effort? To answer this question, we first take a step back to the early days of social bot detection.
