Social Bots – Technical Features and Possible Applications

15 March 2017, Sebastian Peter - Marketing, Social Media

In an age in which the digital world has an ever greater impact on the social, political and economic structures of our lives, a comprehensive view helps to gain a deeper understanding of the phenomenon of social bots. Just as important as an understanding of the technology is an understanding of the intentions that drive the developers and operators of bots.

History, present and future: Of printing presses, bots and artificial intelligence

“Those who don’t know the past can’t understand the present and can’t shape the future.”
Helmut Kohl


Since the invention of the printing press, people have been using new media for purposes of propaganda and the implementation of their own interests. Some examples:

  • The printing press in the Thirty Years’ War
  • Radio and TV in the Third Reich
  • Internet and social networks in the US elections 2016 and the Ukraine crisis
  • Artificial intelligences to implement a new world order in the 22nd century

Are social bots nothing but a tool for manipulation in our digital age? Bots have been in action since the rise of the internet as the information medium of the 21st century. Their uses, however, have become far more extensive and profitable for developers and operators since the rise of the so-called "Web 2.0", which gave every user a voice.

But what is the technology behind social bots? We would like to elaborate on some of the background facts in the following paragraphs.

Bots and their technology

Social bots are programs whose primary task is to feign a human identity. Popular programming languages like Python are used to develop bots. Simple ones can be developed with little prior knowledge by following a guide, while complex ones can be purchased. For a simplified overview, we have divided bots into three technology levels. These categories do not constitute an official classification; they are meant to give an easily comprehensible overview of different standards. Undoubtedly, not every program can be assigned to these categories due to its architecture.

Simple bots: Simple bots can be created with just a few lines of code and simple algorithms. In theory, whole bot armies could flood social networks this way with little investment of time or money. Due to their simple structure, however, these bots struggle to maintain comprehensive, coherent user profiles. They employ basic keyword analysis and react to predefined phrases or topics with repetitive patterns of behaviour (retweets, shares, likes or standardised content). "Simple bots" are thus easy to recognise.

Semi-intelligent bots: Bots on a higher technological level exhibit greater complexity due to more sophisticated algorithms, and considerable programming skill is required to build them. In contrast to "simple bots", bots on this level already work with content analysis and draw on databases. They are thus able to maintain coherent accounts and hold plausible conversations. The quality of communication therefore depends significantly on the algorithm and the underlying database.
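The dependence on an underlying database can be made concrete with a small sketch. This is a hypothetical illustration using Python's standard `sqlite3` module; the topics and replies are invented, and a real bot of this class would use far richer content analysis than a substring match.

```python
import sqlite3

# Hypothetical sketch of a "semi-intelligent bot": instead of hard-coded
# phrases, it looks up replies in a database, so the quality of its
# conversation depends directly on the underlying data.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (topic TEXT, reply TEXT)")
conn.executemany(
    "INSERT INTO responses VALUES (?, ?)",
    [
        ("football", "Did you see the match last night?"),
        ("music", "I have been listening to that album all week."),
    ],
)

def respond(message: str) -> str:
    """Pick a reply whose topic appears in the message, else a filler."""
    for topic, reply in conn.execute("SELECT topic, reply FROM responses"):
        if topic in message.lower():
            return reply
    return "Interesting, tell me more!"  # generic filler keeps the chat going

print(respond("I love football"))
```

Swapping in a larger, better-curated table immediately improves the bot's conversation without touching the code, which is exactly why the database matters as much as the algorithm.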

Artificially intelligent bots: Artificial intelligences based on neural networks or other future technologies form the highest level. The question here is: can bots on such a technological level still be called "bots"? The line between bots and their more advanced AI counterparts could be drawn by the degree to which humans can still control and trace them. It will most likely be highly difficult, if not impossible, to distinguish such an artificial intelligence from an actual human.

Good Bot – Bad Bot: Possible applications from a societal perspective

Bots are programs. They do not yet distinguish between "good" and "bad". That makes the bot's human operator the one who decides on its purpose and intent. But who decides what is good, what is bad and what lies in between? This question highlights the societal and legal complexity of bots alongside their technical requirements. These are some of the possible applications:

Politics:

  • Manipulation of political opinions and debates
  • Initialisation and intensification of political conflicts
  • Strengthening of own political position by feigning a high follower count

Economy:

  • Manipulation of stock market prices by targeted distribution of false reports
  • Mass manipulation of buying behaviour through social bots in influencer marketing
  • Infiltration of companies through "spear phishing" in social networks: a company's key persons are deliberately contacted on social media platforms and exploited as a weak point through social engineering

Social environment:

  • Automated social engineering of groups of people (e.g. clubs, friends)
  • Feigning popularity through manipulation of follower count or other indicators (likes, shares etc.)
  • Manipulation of societal trends

Conversely, social bots can also influence the aforementioned areas positively.

The use of bots in social engineering

A special use of social bots could be social engineering. In short, social engineering means deliberately influencing people in order to manipulate their behaviour.

Example:

A fake account (human or bot) tries to establish a trusting relationship with its victims on the basis of feigned similarities (friends, place, school/work, clubs etc.) that it has previously established within a social network. Through these and other sociodemographic data, the fake account tries to create the impression of being an old acquaintance or a new neighbour looking for social contacts. After having gained the trust of one or more users, the real manipulation commences. Through seemingly arbitrary questions and simple small talk, it collects information on the victim and deepens the trust it has gained. In this way, the fake account gets access to sensitive data such as names, addresses, phone numbers, passwords or even compromising details. Depending on the fake account's intention, this knowledge can be used to hack, blackmail or simply bully the victims.

At the moment, these practices are carried out by actual humans rather than bots, because social engineering requires a great deal of personal interaction. Consequently, only a small circle of victims can be targeted. If this approach were adapted for bots or an AI, however, such tools could carry out harmful interactions automatically and at scale. The number of victims and the resulting damage could rise exponentially.

How to deal with bots in social media monitoring

Social media monitoring can be very helpful in detecting bots, especially when it is carried out by humans and supported by monitoring tools. The simple keyword-based behaviour of simple bots can be captured with predefined search filters in the respective monitoring tool. A human analyst can then spot patterns among these identical findings by asking targeted questions; these patterns in turn provide guidance for social media strategies.

Example: Questions when analysing social bots

  • How many users share or post identical content?
  • Which users sharing this content exhibit inconsistent or incomplete user profiles?
  • Are there similarities between the profiles?
  • Are the patterns of behaviour suspicious?
  • How many tweets does a user post, on which topics, and in what period of time?

An analyst's catalogue of questions is far more extensive. Still, this short introduction shows how important social media monitoring is when dealing with social bots.
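The first of the questions above, how many users share identical content, can be sketched in code. This is a hypothetical illustration: `posts` stands in for data exported from a monitoring tool, and the threshold of three users is an arbitrary example value, not an established detection rule.

```python
from collections import defaultdict

# Sketch of one monitoring question: which texts are posted verbatim by
# many distinct accounts? Identical content across accounts is a classic
# hint at a simple bot network.

posts = [
    ("user_a", "Candidate X is the only choice! #vote"),
    ("user_b", "Candidate X is the only choice! #vote"),
    ("user_c", "Candidate X is the only choice! #vote"),
    ("user_d", "Looking forward to the weekend."),
]

def identical_content(posts, threshold=3):
    """Return each text shared by at least `threshold` distinct users."""
    by_text = defaultdict(set)
    for user, text in posts:
        by_text[text].add(user)
    return {text: users for text, users in by_text.items()
            if len(users) >= threshold}

for text, users in identical_content(posts).items():
    print(f"{len(users)} users posted: {text!r}")
```

Such a count only flags candidates; the follow-up questions on the accounts' profiles and behaviour still require a human analyst.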

 


 
