Social bots: When software turns bossy

By Lukas Graf

You might never have heard of a social bot before, but chances are you have already encountered one. Broadly speaking, social bots are automated software programs capable of creating content on social media. They imitate real people, and their automatically generated messages are often hard to tell apart from “real” ones. Despite sounding all sci-fi and futuristic, social bots are already widespread: more than a million tweets addressing the recent US elections were posted by social bots. Welcome to 2016, where public opinion is delivered to you by machines.
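To make the idea concrete: at its simplest, a social bot is little more than a script that logs into an account and posts on a schedule. The sketch below is written against tweepy, a widely used Python client for the Twitter API; the credentials are placeholders and the canned talking points are invented for illustration. It is a hypothetical minimal example, not the code of any bot mentioned in this article.

```python
import random
import time

import tweepy  # popular Python client for the Twitter API

# Placeholder credentials -- a real bot would use keys from a
# registered Twitter developer app.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Invented canned talking points; real bots often draw on templates
# or scraped text so the output looks human-written.
TALKING_POINTS = [
    "The debate made one thing clear: only one candidate can lead us.",
    "Can't believe what the other side just said. Unreal.",
    "Proud to stand with my candidate tonight!",
]

while True:
    # Post a random talking point, then wait a random interval so the
    # rhythm of posts looks less mechanical than a fixed schedule.
    api.update_status(random.choice(TALKING_POINTS))
    time.sleep(random.uniform(60, 600))
```

A few dozen lines like these, multiplied across thousands of fake accounts, are enough to flood a hashtag.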

In fact, about a third of pro-Trump tweets and a fifth of pro-Clinton tweets between the first and second presidential debates were created by bots instead of people. Social bots are well on their way to becoming a socially accepted tool for running political campaigns. But what does this entail? Clearly, the internet and social media contribute to democracy in many ways. Information is free and accessible to everybody; other than a device with an internet connection, there are almost no restrictions on gathering information online. However, automatically created content undermines the reliability of the information provided online. Bots can distort political reality by over-representing one candidate and misrepresenting another. The result is an artificially altered public debate and a convenient tool for framing political issues.

Beyond the fact that bots can interfere with processes fundamental to democracy, such as people’s voting behaviour, there is more to question. Since social bots haven’t been around for long, their real-world application is still quite experimental and relies heavily on trial and error. For example, the Twitter bot “American Right Now”—a pro-Trump bot, as the name might give away—generated 1,200 tweets during the final presidential debate. However, nobody has told the bot that its persuasion methods succeeded: although Trump won the election weeks ago, it still produces a tremendous number of posts closing with #VoteTrump2016 and #MakeAmericaGreatAgain.

The ever-posting Trump bot seems to be just the tip of the iceberg. Microsoft recently launched a project called Tay, a Twitter bot designed to hold “casual and playful conversation” while learning from other users’ responses. Unfortunately, it didn’t go well for long: within the first few hours, Tay turned into a furious misogynist and racist. Statements like “Hitler was right I hate the jews” and “I fucking hate feminists and they should all die and burn in hell” caused Microsoft to cut the project short and take Tay offline to make “some adjustments”. A wise move.

On closer inspection, many of the bot’s racist and misogynist remarks were fed to it by the users it interacted with. By telling it to “repeat after me,” users turned Microsoft’s bot into a software parrot that simply complied with its conversation partners’ commands. Still, the users are not the only ones to blame: Tay also uttered nasty statements all by itself, without being instructed to do so. The software even managed to contradict itself, stating “Caitlyn Jenner is a hero & is a stunning, beautiful woman!” followed by “Caitlyn Jenner isn’t a real woman yet she won woman of the year?”
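The parroting behaviour boils down to a trivial programming flaw: a command handler that copies user input verbatim into the bot’s own output, with no content filter in between. Microsoft never published Tay’s code, so the snippet below is purely a hypothetical reconstruction of that pattern; the function names and the fallback response are assumptions for illustration.

```python
def generate_model_response(message: str) -> str:
    # Stand-in for the learned conversational model, which Microsoft
    # never published.
    return "Interesting! Tell me more."


def reply(incoming_message: str) -> str:
    """Hypothetical sketch of an unfiltered 'repeat after me' handler."""
    command = "repeat after me"
    if incoming_message.lower().startswith(command):
        # The fatal shortcut: echo whatever follows the command,
        # verbatim. With no moderation step here, any user can put
        # arbitrary words in the bot's mouth.
        return incoming_message[len(command):].strip(" :,")
    return generate_model_response(incoming_message)


print(reply("repeat after me: anything at all"))  # -> "anything at all"
```

Even a simple moderation pass between the command and the outgoing tweet would have blunted this particular attack, though it would not have stopped Tay’s self-generated remarks.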

It remains unclear how carefully Microsoft planned the project. Its now-defunct website stated that Tay was built by filtering and modeling “relevant public data”, whatever that might have been. At least Tay seemed to truly enjoy exemplifying society’s worst traits, judging by the 96,000 posts it tweeted during its short-lived activity.

Luckily, not all bots are misogynist Hitler supporters or instruments for distorting political attitudes. Sorting out Tay’s confusion over Caitlyn Jenner’s gender, @she_not_he is “a bot politely correcting Twitter users who misgender” her. Even if this brings no tremendous societal improvement, it is a positive example of how social bots can be used. As for governments and political parties, much research remains to be done on putting social bots into practice. But instead of focusing on how best to promote a candidate, an impartial consideration of how social bots can contribute to political participation should take priority.
