AI vs. GDPR: Finding the Balance Between Ethics and Innovation

Isadora Tostes de Souza Barros, Busra Islek, Ruya Ince, Abeera Junaid Aslam, Réka Zsuzsanna Szitter, Eleftheria Katsi, Marta Soliño, Ceren Yaycili and Oyinkansola Awolo

The strict rules of the General Data Protection Regulation (GDPR) are likely to have a serious impact on the competitiveness of the Artificial Intelligence (AI) sector in Europe. With its EU strategy for AI, the European Commission wants to reassure consumers and foreign investors by aiming for European AI models that operate “ethically”. Ambitious as the strategy is, it disregards the complexity of the new technologies and could leave the EU behind in the AI race.

Photo by James Pond on Unsplash

AI: A Worldwide Trend

The use of AI is on the rise around the world, especially in the US and China. The number of industries that use machine learning, Big Data collection and analytics to produce quantifiable insights and plans that can easily be acted upon is rapidly increasing. Worldwide spending on AI is predicted to reach $57.6 billion by 2021.

The field of AI research dates back to 1956, when it emerged at a workshop at Dartmouth College in the US. It has gained substantial importance today due to increasing computational power, a stronger commitment of researchers to the field, the growing amounts of data generated by people and, lastly, the spreading implementation of AI in various fields. Aiming to solve specific problems and provide solutions, AI is now applied to numerous areas, ranging from healthcare, finance, security and robotics to advertising and many others.

How GDPR Affects Innovation in AI

The growing influence of AI in different sectors also has consequences for people's personal data, especially with regard to privacy and data breaches, as foreseen by data protection organizations and the EU. Since the regulation entered into application, industries in the EU have had to carry out their AI operations within the limits of the GDPR. But the GDPR was created to tackle issues of privacy in a very broad sense and fails to address the complexity of the new technologies, which could therefore have significant implications for AI research and innovation in Europe.

AI and advancements in connectivity enable computers to make intelligent decisions in order to perform diverse tasks. The basic premise on which AI operates is to learn by collecting, processing and linking huge amounts of data, a large chunk of which might be personal data. This principle, known as machine learning, simply means that the more data is available to be consumed, the better and more credible the AI becomes. On the other hand, the massive collection of data on which AI relies is problematic from a privacy perspective. That is why the EU has put these activities under a data protection microscope with its EU-wide data protection regulation.
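
To make that trade-off concrete, the toy Python sketch below (purely illustrative and not taken from the article or any real system; the synthetic dataset merely stands in for records that could, in practice, be personal data) trains the same simple model on progressively larger samples and shows test accuracy rising as more data becomes available.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a dataset that, in a real deployment, might contain personal data.
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for n in (100, 1_000, 10_000):  # train the same model on progressively larger samples
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>6} records -> test accuracy {acc:.3f}")
```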

Photo by Alex Knight on Unsplash

The Struggle of EU Companies

Articles 12 and 13 of the GDPR state that companies need to be transparent about how data is protected and processed, and that they need to give access to this information to any party that might be interested in having it. On paper, this requirement may seem easy enough to fulfill, but in reality, machine learning tools, and especially newer tools such as deep learning, are so-called “black boxes”, which means that the way an algorithm functions can hardly be explained. Nor can a particular output of the algorithm be traced back to a particular data point. Explaining decisions would, at best, require human intervention in a technology whose main purpose is for the machine to act and decide alone, limiting the complexity of AI-powered decision-making. Having to add human supervision to machine learning makes an already expensive sector even more costly, not to mention the financial risks that companies working with AI technologies would face if they failed to fully comply with the GDPR.
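
The minimal sketch below, again purely illustrative and built on a generic scikit-learn neural network rather than any real company's system, hints at why such models are called black boxes: a single automated decision emerges from thousands of learned weights, none of which corresponds to a human-readable rule.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A generic neural network trained on synthetic data; no real personal data involved.
X, y = make_classification(n_samples=5_000, n_features=30, random_state=1)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=1).fit(X, y)

decision = model.predict(X[:1])[0]  # a single automated decision about one record
n_weights = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"decision: {decision}, produced by {n_weights} learned parameters")

# The decision emerges from thousands of interacting weights; no single weight or
# training record can be singled out as "the reason" for the outcome, which is what
# makes the transparency duties of Articles 12 and 13 so hard to satisfy.
```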

The Right to Erasure and Its Impact on AI

To comply with the GDPR, companies need the consent of the people whose data they use. They must also delete the data of any customer of at least 16 years of age who demands it. The latter rule directly opposes the main idea of ‘consuming, storing, memorizing, learning and connecting Big Data to provide efficient solutions’ on which AI is based. Since more data and larger sample sizes enable AI to be smarter, more human-like and more accurate in its predictions, this rule has a direct consequence for innovation because it reduces the amount of data available to train AI with. What also remains unclear is how to manage withdrawn consent within AI, as the opaque functioning of the technology makes it nearly impossible for engineers to alter a trained algorithm so that it disregards the data in question.
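
A minimal sketch of what honoring an erasure request can look like in practice is shown below; the toy data, column names and the `retrain_without` helper are hypothetical. Because standard machine learning libraries offer no way to subtract one person's influence from an already fitted model, the straightforward option is to drop that person's records and retrain from scratch.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: each row belongs to a (fictional) user.
df = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 4],
    "feature": [0.2, 0.4, 0.9, 0.1, 0.3, 0.8],
    "label":   [0, 0, 1, 0, 0, 1],
})

def retrain_without(data: pd.DataFrame, erased_user: int) -> LogisticRegression:
    """Drop every record of the erased user, then refit the model on what remains."""
    remaining = data[data["user_id"] != erased_user]
    return LogisticRegression().fit(remaining[["feature"]], remaining["label"])

# User 3 exercises the right to erasure: their rows are removed and the model is rebuilt.
model = retrain_without(df, erased_user=3)
print(model.predict(pd.DataFrame({"feature": [0.5]})))
```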

Transparency at What Cost?

Complying with the standard of transparency called for in the GDPR would mean that companies working with AI technologies would have to reveal not only their algorithms, but also all the data that was used in their machine learning processes, as well as how that data was gathered. This is not only difficult to achieve, but arguably not ethical either.

All these factors combined pose a huge challenge to the development of Artificial Intelligence in Europe, while the US and China, followed closely by South Korea, continue to innovate rapidly. According to a recent report on technology trends by the World Intellectual Property Organization, the EU is already far behind institutions and companies in China, the US and South Korea, which dominate the ranking: only four of the top 167 universities and public research institutions for patents are European.

Does the EU Need to Reconsider the GDPR?

Nick Wallace and Daniel Castro from the Center for Data Innovation, a leading think tank with a specific focus on data, technology and public policy, urge the EU to reconsider the GDPR, especially with regard to companies that develop and use AI: “The EU should simplify the GDPR to focus exclusively on preventing harm to consumers, instead of needlessly limiting the use of data at the expense of data innovation.” They also state that the regulation does little to protect consumers when it comes to AI and could even harm them in some cases, as the GDPR could increase the use of less accurate algorithms.

For now, it seems that the difficulty of implementing the requirements of the GDPR in such complex technologies, combined with the costs likely to arise from doing so, could hinder the development of AI in the EU member states. If so, Europe will lag in the development of Artificial Intelligence and of the industries it supports, potentially losing the AI race to strong competitors such as China and the US.
