Aurelija Gackaitė, Rugilė Jakučionytė, Zahaid Rehman, Miriam Siemon, Xuerui Wu and Anastasia Yakubovich
Propaganda takes many shapes and forms, but in recent history, it has been masked under the guise of civic participation and political awareness on social media. This is clearly evident in the case of the 2016 US elections and the now-famous (or infamous) Russian troll factory, which actively spread misinformation against the Hillary Clinton campaign, bolstering support for President Trump in the process.
Since then, several former ‘trolls’ from the Russia-based Internet Research Agency (IRA) – seen as one of the major actors in disseminating misinformation during the campaign – have stated that they were not lured into working for the IRA to push any agenda, whether related to civic participation or propaganda; they were simply incentivised by a better salary than most other writing jobs offered.
There is a very thin line between civic participation, free speech and online propaganda. While in theory all voices in cyberspace are equal, some views tend to get more traction than others. ‘Viral content’ and filter bubbles enable posts shared by many accounts to gain ever greater reach; each share places a post on more timelines, creating an exponential increase with every like or share. That a body of writers working merely for the prospect of better pay can be used to spread misinformation on this scale speaks volumes about the issue. The lines between free speech and propaganda are blurred, and organised proponents of the latter can exploit this snowball effect to devastating ends – the spread of misinformation online is clear evidence of this.
There is not enough available data to properly examine just how effective Russia’s troll factory was; some would even argue that the trolls had limited influence on voters, who had access to all avenues of information. But while the full scale of the troll presence spreading misinformation online during the US elections remains unknown, there is no denying that the spread of misinformation took place. And this is where the crux of the problem truly lies: if a group can use platforms that encourage free speech and a variety of opinions (at least in theory) to set agendas and spread misinformation, is that not also another form of curbing free speech?
Admittedly, the link between the two is tenuous – one would imagine that spreading misinformation could not possibly lead to the subversion of free speech, but the simple truth is that it does. In the cacophony of opinions in the digital world, the loudest voice tends to get heard most often, and deploying trolls in sheer numbers to spread misinformation is a tactic designed to drown out other opinions.
And the biggest problem is that there is no clear solution. While the international community largely believes that Russian involvement in the US elections is likely, there is no way to prove this decisively, and no way to stop it either. Platforms looking to put an end to the practice cannot viably address the root of the problem – even when suspected accounts are blocked, many slip under the radar. And at the end of the day, digital companies such as Facebook and Twitter tend to be reactive rather than proactive, acting on complaints, which means that accounts surreptitiously posting misinformation are likely to continue doing so until they are caught out.
The problem does not even end there; much like the mythical Hydra that sprouts new heads where others were cut off, platforms are fighting a losing battle against a growing mass of fake accounts, managed by a small number of people, that keep popping up wherever others were blocked. And even if we assume that this tactic can be effective, does the average person find the idea of platforms acting as gatekeepers of information acceptable? Deciding which political ideas may be posted online, arbitrating between truth and lies, and pushing information through a funnel that removes whatever the company perceives as ‘undesirable’ are not tasks these gigantic tech companies were ever supposed to be in charge of.
The German anti-hate speech law is an example of legislators trying to get ahead of the issue of problematic opinions on online platforms, but failing miserably and instead exacerbating the problem. The law makes it compulsory for platforms to remove content containing hate speech or face a huge fine as penalty. Right-wing leaders argue that they are being muzzled, their supporters see them as martyrs, platforms see this (rightly so) as a task beyond them, and independent observers are naturally concerned about the arbitrary mechanism now in place to monitor hate speech. The situation here is also much the same as with trolls and misinformation: where one account is blocked, more can pop up.
No one seems to fully grasp the magnitude of the problem, and even for those who do, finding a solution is far more complicated. State-level agreements to curb disinformation within geographical boundaries would be one solution, but as the case of Russia and numerous other countries shows (India and Pakistan being another example), weaponising the perpetrators can prove far more profitable for myopic, self-serving interests.
A concerted effort between like-minded states – to root out the problem within their own countries first, and then to eliminate any external threat from hackers and trolls looking to use the digital space to spread misinformation – is the only practical way forward. This is by no means easy, and it will require platforms such as Facebook and Twitter to cooperate with states in removing content that is clearly centred on promoting hate or deliberate misinformation. Note that the key word here is ‘cooperate’: unilateral removal of objectionable content by the tech companies is not constructive.
Finally, and perhaps most crucially, instilling critical thinking skills in the general populace through education and socialisation is also fundamental; if recipients of information question its veracity based on sources and quality, there is a higher likelihood of digital spaces sifting out misinformation. This is also where civic participation takes a central role in fighting propaganda – people can choose to ignore falsified information, argue against its validity and offer counter-narratives. Occupy the vacuum that the trolls are looking to fill, and there will be nowhere for them to go.
Trolls looking to interfere in the 2016 US presidential election made the world realise the destructive potential of information – they all but proved that the pen (read: keyboard) is mightier than the sword. The lethargic global response to the problem also makes it inevitable that this will happen again. The question is: will we be better equipped to handle it next time?