Use of social media as an information source
When asking whether social bots can influence the formation of opinion, one should first look at how the internet is used for finding information. How many people learn about current topics through social media? The Reuters Institute Digital News Survey 2016 examined the use of news through social media and compared it internationally. It showed that social media hold a place in the standard news repertoire of a considerable part of the population. The main reason is that they offer a simple way of accessing different sources; further reasons are speed and the possibility to share and comment on news.
In Germany, 31% of participants stated that social network pages were one of their regular sources of news – enough social media recipients for social bots to matter. Although almost all users in Germany (98.5%) also use other media, there is still the risk of getting a manipulated view. Overall, the share of those using social media as their only news source is low – 1.5% in Germany, 2.2% on average across the surveyed countries. Among the 18-24-year-olds, however, this number rises to 4.9% (in Germany and on average across all countries). These roughly 5% are particularly prone to being influenced by social bots.
What are Social Bots?
A bot, built with fairly simple software, acts like a normal user in the network and is – depending on the software used – easy or very difficult to unmask. Bots work with simple keyword searches and scan Twitter timelines or Facebook postings for specific words or hashtags. If a bot finds the right words, it gets started. One example is the keyword #rapefugees, which emerged after New Year's Eve 2015 in Cologne. The bot answers postings and tweets, or posts news it has found in its search that matches its keywords. Bots can only maintain conversations with real users if they are programmed with sufficient complexity. To do this, they copy prefabricated text blocks and put them together – sometimes more, sometimes less coherently. Usually the goal is to spread as much information as possible and to gain approval. However, there are also news bots, which are triggered by keywords but do not aim to influence a specific topic; they merely pass on something worth knowing – for example, a bot that reacts to the keyword "Sturmwarnung" (Eng.: gale warning) and gathers all information on this topic.
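The keyword mechanism described above can be sketched in a few lines. This is a simplified illustration, not the software of any real bot – the trigger hashtags and text blocks are invented for the example:

```python
# Minimal sketch of a keyword-triggered bot: scan a post for trigger
# hashtags and assemble a reply from prefabricated text blocks.
# All hashtags and phrases here are invented for illustration.
import random

TRIGGER_HASHTAGS = {"#sturmwarnung", "#fakemovement"}  # hypothetical triggers

# Prefabricated text blocks the bot recombines into replies
OPENERS = ["I totally agree.", "This is exactly the point."]
CLAIMS = ["Everyone is talking about this.", "The media ignore this."]

def find_trigger(post_text):
    """Return the first trigger hashtag found in a post, or None."""
    for word in post_text.lower().split():
        if word in TRIGGER_HASHTAGS:
            return word
    return None

def build_reply(hashtag):
    """Glue prefabricated blocks together into a more or less coherent reply."""
    return f"{random.choice(OPENERS)} {random.choice(CLAIMS)} {hashtag}"

post = "Breaking news #Sturmwarnung for the coast"
tag = find_trigger(post)
if tag is not None:
    print(build_reply(tag))
```

The recombination of canned phrases is also why bot replies often read as repetitive or slightly off-topic – a property the detection tips below exploit.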
At wired.de, Dominik Schönleben describes further types of bots that pursue different objectives:
Overload
A bot can flood the feed of a certain page or person with a certain statement. If it sees, for example, that a news page has commented on a certain topic, it always posts the same counterstatements. Links between the bots generate high traffic, and real comments disappear beneath the noise.
Overload bots can therefore impede the formation of opinion, since they spread information one-sidedly and do not create any real discussion. However, they are relatively easy to detect as bots because of their recurring content.
Trendsetter
If enough users are talking about a certain topic, it can become a trending topic – e.g. on Twitter. If a bot network always uses the same hashtag, it can set the topic agenda. A topic suddenly seems bigger than it is, and real users join a fake movement because they mistake it for a real majority.
Trendsetter bots can lead to an incorrect weighting of individual topics. This can produce a distorted picture, especially for groups of people who are largely informed through social networks.
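As a toy illustration of the trendsetting effect – assuming, purely for the example, that trending topics are ranked by raw mention counts (real platforms use more elaborate algorithms) – a handful of bots repeating one hashtag can outweigh many individual real users:

```python
# Toy model: if trends were ranked by raw mention counts, a small bot
# network posting the same hashtag repeatedly would dominate the ranking.
# Hashtags and numbers are invented for illustration.
from collections import Counter

real_posts = ["#weather", "#football", "#election", "#football"]
# 5 hypothetical bots, each posting the same hashtag 20 times
bot_posts = ["#fakemovement"] * (5 * 20)

mentions = Counter(real_posts + bot_posts)
trending = mentions.most_common(1)[0]
print(trending)  # the bot hashtag tops the list despite few real users behind it
```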
Auto-Troll
These bots are intended to distract individual users so that they spend as much time as possible in meaningless discussions. If two users are talking about a topic, such a program chimes in and writes inappropriate, extreme or even insulting arguments – again and again. Depending on how meaningful their content is, however, auto-trolls can be identified fairly easily.
One distinction should be pointed out: social bots versus fake accounts. Numerous networks host so-called fake accounts – user profiles of people who do not really exist. They are created, for example, to write fake reviews for products or services or to comment on various topics. The risk with fake accounts is that real persons may be behind them, acting on someone's behalf and difficult to unmask. They do not generate as much traffic as bots, but they may cause greater damage because they are controlled by humans.
How to unmask Social Bots?
The more technology advances, the harder it gets to unmask bots, since they increasingly appear to act like humans. However, there are some ways to identify them:
For example, check your followers on Twitter regularly. Which followers does the presumed bot have, and in which conversations does it participate? Does it hold more than one conversation at a time, and is there a main topic? How long has the account been active? Does it have a realistic number of tweets? Does the profile description suggest a bot?
Examples of Twitter bots:
- account active since May 2015 – approx. 650 days
- 39,127 tweets – around 60 tweets a day
- topics at random, tweets and retweets via Bing Search

- account active since November 2016 – approx. 105 days
- 11,671 tweets – around 111 tweets a day
- mostly retweets of American comedians and politicians
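The rough check behind the two example accounts above is simple arithmetic – total tweets divided by the account's age in days:

```python
# Tweets-per-day heuristic, using the figures from the two example
# accounts above: total tweets divided by days since account creation.
def tweets_per_day(total_tweets, days_active):
    return total_tweets / days_active

# Example account 1: active since May 2015, approx. 650 days, 39,127 tweets
rate1 = tweets_per_day(39_127, 650)
# Example account 2: active since November 2016, approx. 105 days, 11,671 tweets
rate2 = tweets_per_day(11_671, 105)

print(round(rate1))  # around 60 tweets a day
print(round(rate2))  # around 111 tweets a day
```

A sustained rate far above what a human could plausibly type is a warning sign – though, as noted below, it is only a heuristic, not proof.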
With the "Bot or Not" service of Indiana University, you can also check whether an account is a bot: http://truthy.indiana.edu/botornot/
However, the results can also be wrong: one of the indicators, for example, is 50 or more tweets per day – but a very dedicated human user can exceed this too.
Bots that act in forums or comment on news pages can often only be recognised by their post histories and their content. A good way to keep bots out of forums is the use of CAPTCHAs. These are security queries which, depending on their difficulty, cannot be answered by bots.
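As a toy illustration of such a security query – real CAPTCHAs use distorted images or behavioural signals rather than plain-text arithmetic – a bot that only matches keywords has no routine for answering even a trivial question:

```python
# Toy illustration of a CAPTCHA-style security query: a simple arithmetic
# question a human can answer but a naive keyword-matching bot cannot.
# Real CAPTCHAs are far more sophisticated than this sketch.
import random

def make_captcha():
    """Generate a question and its expected answer."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"What is {a} + {b}?", a + b

def check_answer(expected, given):
    """Accept the submission only if the answer matches."""
    return expected == given

question, solution = make_captcha()
print(question)
# A human reads and solves the question; a simple keyword bot cannot.
print(check_answer(solution, solution))  # True for the correct answer
```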
Basically, unmasking bots is time-consuming research that cannot be carried out with tools alone. One example is the Hamburg Social Security Agency, which removed some 200 bots from its Twitter followers.
Impact on formation of opinion?
Given that very few people inform themselves about news topics solely through social media, the impact of social bots on opinion formation should be kept in perspective. Anyone who also consumes news through other channels, for example print or TV, at least has the chance to get a balanced view – as long as they do not preselect their news too narrowly.
However, one risk is that social bots could regularly be used to put topics on the agenda which are then picked up by the media. If those topics are not carefully scrutinised, the impression may arise that a topic is particularly important even though it is not.
In our next blog entry we will examine the technical background of social bots in more detail – subscribe to the CURE newsletter and stay informed.