Aida Sanchez, Fatima Ali, Jordan Higgins, Marcella Via, Paz Marquez Arellano, Renjani Pusposari, Bruna Maria Do Rego, Aroa Molero Gonzalez, and Sonia Reveyaz
Political micro-targeting is threatening media pluralism and democracy at a global level, with Facebook influencing political campaigns in 66 countries, half of which are European states. Because they operate opaquely, algorithms can manipulate voters’ behavior, and politicians are well aware of this trend. In an era when hearts and minds are won via posts, campaign budgets are invested in colonizing the digital arena. Within this context, investigative journalists are the ones who must hold algorithms to account to protect users’ data and ensure that democratic values are respected.
An Algorithmic State of Mind
In the post-digital age, voters face the perils of disruptive technologies through political micro-targeting, in which big data and algorithms monitor citizens’ online behavior and social media usage to influence voters. Political micro-targeting has been described as a “technique of political communication based on the use of data and analytics to tailor messages to a subgroup or individuals via different channels” with a view to fostering relationships with prospective voters and supporters (Bodó et al., 2017). At face value, micro-targeting might be deemed subtle and negligible, but, when executed successfully, it can generate political shockwaves.
Such systematic micro-targeting, which pigeonholes voters on the basis of demographics, political inclinations, personality traits, and behavioral data, infringes upon democratic principles, privacy rights, and freedom of expression. As tech giants such as Facebook continue to amass data and employ algorithms to personalize user experience, audiences are often manipulated via targeted political advertising. Companies that harvest our online activity, likes, dislikes, interests, searches, and behavioral patterns are quickly changing the dynamics of modern-day politics.
The aftermath of the 2016 US presidential elections and Brexit referendum, set against the backdrop of the Cambridge Analytica scandal, has posed difficult questions concerning the increasing adoption of data-driven campaign strategies. In practice, scrutinizing the ethics and operating principles of the current digital ecosystem requires more in-depth investigations on how user data is accessed and used by platforms through machine learning.
Dirty Tricks: Big Data, Bigger Elections, and the Role of Investigative Journalism
While research on political behavioral targeting (PBT) has been conducted in the US, there is still a need to understand micro-targeting practices in a European context.
Unsurprisingly, the upcoming EU elections are at risk of being subject to micro-targeting mechanisms. Dutch MEP Marietje Schaake argues that “to avoid manipulation through social media, we need transparency on who funds advertisements. This should include algorithmic accountability of the business models of social media companies so that we know what the impact of conspiracy theories and hyperpolarization are on our democracies.” Furthermore, the digitization of elections underscores the need for stronger cybersecurity, campaign staff training, and complete transparency to ensure democratic principles are upheld.
This is where a modern form of investigative journalism, often referred to as “algorithmic transparency reporting”, comes into play. Jonathan Albright from the Tow Center for Digital Journalism stated that investigating algorithms needs to become a core feature of the journalistic process. Nick Diakopoulos, professor at Northwestern University, argues that algorithms need to be scrutinized when they begin to disrupt existing social norms and values, with specific reference to the violation of user privacy. As algorithms and artificial intelligence play an increasingly vital role in shaping democratic elections, it is paramount that journalists adapt to enhance their ability to hold power to account. Although news organizations such as ProPublica have been reporting on the role algorithms play in society for several years, the practice has yet to be adopted by the journalistic mainstream. In order to accommodate the integration of algorithmic transparency reporting into traditional newsrooms, journalists must enhance their existing skill set to combat a growing threat to democratic society. According to the Data Journalism Handbook, these skills include technically challenging code inspection and reverse engineering of algorithms, as well as more rudimentary techniques such as crowdsourcing data for inspection. Developing these skills in the modern newsroom offers a means of informing citizens about micro-targeting techniques, preventing them from being manipulated during political campaigns, and refining the democratic process as a result.
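To make the crowdsourcing technique concrete, here is a minimal sketch of how a newsroom might tally donated ad reports to surface micro-targeting patterns. The records, field names, and advertisers below are entirely hypothetical; the point is only to show that an advertiser sending very different messages to different demographic segments becomes visible once reports are aggregated.

```python
from collections import Counter, defaultdict

# Hypothetical crowdsourced records: each volunteer reports a political ad
# they were shown, along with the platform's "why am I seeing this" reason.
donated_ads = [
    {"advertiser": "Party A", "reason": "age 18-24, interested in climate"},
    {"advertiser": "Party A", "reason": "age 55+, interested in pensions"},
    {"advertiser": "Party A", "reason": "age 18-24, interested in climate"},
    {"advertiser": "Party B", "reason": "age 55+, interested in pensions"},
]

def targeting_profile(ads):
    """Count, per advertiser, how often each targeting reason appears.

    One advertiser tailoring distinct messages to distinct segments is a
    starting point for an algorithmic-transparency story, not proof of one.
    """
    profile = defaultdict(Counter)
    for ad in ads:
        profile[ad["advertiser"]][ad["reason"]] += 1
    return profile

profile = targeting_profile(donated_ads)
for advertiser, reasons in profile.items():
    print(advertiser, dict(reasons))
```

In practice a newsroom would collect such reports via a browser extension or submission form and work with far messier data, but the aggregation step remains this simple in principle.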
Facebook recently launched a new tool which allows users to view all advertisements currently running on the platform, as well as to see who the intended audience is and why they were shown a particular ad. According to Facebook, everyone is able to access the “number of times the ad was viewed, and demographics about the audience reached including age range, location, and gender”. While skeptics have argued that this is simply Facebook attempting to silence its critics, it is nonetheless a positive step in the fight against micro-targeted manipulation. As Facebook turns its sights towards the upcoming EU elections, Zuckerberg is still dealing with the aftermath of 2018’s Cambridge Analytica scandal. After it was revealed that the political consulting firm had harvested personal data from nearly fifty million Facebook users for political use, Zuckerberg was intensely scrutinized. Facebook shares dropped dramatically as people’s trust began to wane and the issue of micro-targeting took center stage.
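The transparency data behind this tool is also exposed programmatically through Facebook’s Ad Library API, which journalists can query directly. The sketch below only builds a request URL; the endpoint, parameter, and field names reflect the API as publicly documented at the time of writing and may change, the country-list serialization is an assumption, and the access token is a placeholder.

```python
from urllib.parse import urlencode

# Graph API endpoint for the Ad Library archive (version may change).
BASE = "https://graph.facebook.com/v18.0/ads_archive"

def build_ad_library_query(search_terms, countries, access_token):
    """Return a request URL for political ads matching the search terms.

    Parameter names follow Facebook's public Ad Library API docs; the exact
    serialization of the country list is an assumption in this sketch.
    """
    params = {
        "search_terms": search_terms,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": ",".join(countries),
        # Demographic breakdown of who actually saw each ad.
        "fields": "page_name,impressions,demographic_distribution",
        "access_token": access_token,
    }
    return BASE + "?" + urlencode(params)

url = build_ad_library_query("election", ["IE", "NL"], "YOUR_TOKEN")
print(url)
```

Fetching that URL with a valid token would return, per ad, the sponsoring page, impression counts, and the age/gender/location breakdown quoted above — exactly the data algorithmic transparency reporting depends on.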
Zuiderveen Borgesius et al. (2018) highlight how “microtargeting enables a political party to, misleadingly, present itself as a different one-issue party to different people”. This presents a critical threat to democracy, as political parties’ true intentions can be hidden in order to appeal to opposition audiences.
A Threat to Democracy and Media Pluralism
In a multi-stakeholder discussion held by UNESCO in 2019, Giacomo Mazzone from the European Broadcasting Union stated that algorithms have undermined media pluralism by preventing audiences from encountering opinions antagonistic to their own and by repeatedly serving them homogeneous content. This is at odds with UNESCO’s principles of human rights, openness, accessibility, and multi-stakeholder governance. Accordingly, Pier Luigi Parcu from the Center for Media Pluralism and Media Freedom identifies three new kinds of threat brought about by voter profiling and micro-targeting. Firstly, Parcu argues that the role of media gatekeepers in monitoring the quality and diversity of content has slowly diminished. Secondly, the polarization of opinion – or a filter bubble effect – driven by reduced pluralism will harm democratic dialogue. Finally, the rise of political propaganda fueled by hate speech and mis/disinformation poses serious, short-term threats to targeted minorities and vulnerable social groups.
The Future of Micro-Targeting
So, is there any short-term solution to this issue of political micro-targeting? The Utrecht Law Review urges policymakers to enhance the transparency of all micro-targeted advertisements used by political parties. With the aid of algorithmic transparency reporting and increased general transparency amongst social media platforms, the public can access information related to who funded certain political advertisements and how the desired audience was selected. As a result, internet users can avoid political manipulation of their data and be better informed to participate in a democratic society.
Looking forward, however, the role micro-targeting will play in future election campaigns is hazy. Although pressure is being placed on social media platforms to increase transparency regarding their advertising policies, internet users remain vulnerable to micro-targeting. The responsibility lies in the hands of policy-makers, social media platforms, and media outlets who, as Dennis G Wilson from the University of Toulouse has argued, can themselves employ AI to combat its malicious use by others. The use of AI and algorithms to manipulate voters and influence their opinions remains a threat to democracy. Yet with greater awareness of this issue and investigative journalists’ continued efforts to highlight it, the threat can be largely minimized. It may not be possible to completely eliminate micro-targeting from political campaigns, but it is possible to inform the masses and reduce the danger it poses to democracy.