Trolls, Bots…And Terrorists?

Maartje van Leeuwen, Nora Romanova, Max Årstad Knutsen, Alaa Jbour, Erkan Yildiz and Aslihan Okyay

Social media is a powerful weapon: it can be used to shape opinions, attitudes and even behaviors. In today’s information environment, there are numerous opportunities to share radical content online. Self-proclaimed states and terrorist groups have quickly adapted to this new environment, using social media to disseminate propaganda, achieve political objectives, plan operations and reach large audiences whose support they seek.

Bots and the Spread of Illegal Content Online

One of the most worrying phenomena on social media platforms is the spread of computer-generated content by ‘bots’. The medium most commonly used for bot activity is Twitter, which offers certain benefits: it reaches large audiences, is easy to use, preserves anonymity and allows fast recovery from account suspensions. Terrorist bots are designed to misinform and manipulate social media users. These accounts can disseminate, share and retweet information in great volume and at great speed. Furthermore, these automatically generated profiles are capable of interacting with one another, which makes them appear more credible to Twitter users. Researchers have found that terrorist groups have adjusted their Twitter behavior so that the deactivation of their initial posts does not affect the spread of their message, and have refined their media techniques to achieve maximum viral reach. They no longer need the traditional gatekeepers of the media to spread their message, which means those gatekeepers have lost their protected status.


EU Lays out Guidelines for Tech Companies

After recent terror attacks in Europe and beyond, the European Union has recognized a link between the attacks and content shared on social media platforms. The EU has stated that social media companies such as Facebook and Twitter are partly responsible for the attacks because they host terrorist propaganda on their platforms. It is therefore pressuring internet companies such as Facebook, Google and Twitter to remove illegal content more quickly. The European Commission wants to hold tech firms legally responsible for failing to take down harmful online content, and it specifies that terrorist content should be removed within one hour of being flagged by Europol, the EU’s police agency, or by local law enforcement. While these guidelines are not binding, they can be used in courts as a legal reference.

A recent discourse on the public responsibility of platforms has raised the question of whether platforms should be held legally liable for the content shared on them. Some argue that platforms share responsibility for content moderation when that content harms its audiences; it is important to remember, however, that internet users themselves are responsible for spreading such content when they share fake news, hate speech and extremism.

Educating the Internet Users

In November 2017, UNESCO published a report on the impact of online extremism on youth and women. The report concludes that, although the threat of extremism can be found online, the solution should also be sought online, in the form of youth engagement and media literacy courses. It makes clear that exclusion is not the solution, and that inclusion should be sought instead. The EU’s role in limiting the spread of illegal content thus seems to lie in offering opportunities for decision-making, youth engagement and media literacy training, rather than in extending stringent anti-hate-speech laws and campaigns against disinformation.

 

References

Klausen, J. (2015). Tweeting the Jihad: Social Media Networks of Western Foreign Fighters in Syria and Iraq. Studies in Conflict & Terrorism, 38(1), 1–22. https://doi.org/10.1080/1057610X.2014.974948

Shaheen, J. (2015). Network of Terror: How Daesh Uses Adaptive Social Networks to Spread Its Message. Riga: NATO Strategic Communications Centre of Excellence.

Niklewicz, K. (2017). Weeding Out Fake News: An Approach to Social Media Regulation. Brussels: Wilfried Martens Centre for European Studies.

Benkis, J. (2016). New Trends in Social Media. Riga: NATO Strategic Communications Centre of Excellence.

Citron, D. K. (2018). Extremist Speech, Compelled Conformity, and Censorship Creep. Notre Dame Law Review, 93(3), 1035–1070.

Drozdiak, N. (2018, March 2). Facebook, Google Get One Hour From EU to Scrub Terror Content. New York, USA.

Helberger, N., Pierson, J., & Poell, T. (2018). Governing Online Platforms: From Contested to Cooperative Responsibility. The Information Society, 34(1), 1–14.
