AI and privacy in Europe
The rise of Artificial Intelligence (AI) represents an unprecedented intrusion into our private lives. While Europe has seen advances in both regulation and ethics to protect personal data, these seem inadequate for taking back control from the governments and private companies that control AI, and with it, our privacy.
This article was written in preparation for our round table on Artificial Intelligence.
With the rise of Artificial Intelligence (AI), concerns about the risks it may pose have become more and more significant. Voice assistants, search engines, speech and face recognition, advanced robots, autonomous cars, and drones represent not only digital advances that governments and private industry are managing, but also developments that we as individuals experience in our everyday lives. A crucial aspect of our lives that AI affects is our privacy. But why is that? Unlike earlier innovations, AI has a far greater ability – in both power and speed – to use personal data in ways that can shape what we think and do, and therefore to intrude into our private lives. It makes it ever more possible for governments and private organisations to collect and use unprecedented amounts of personal data from every facet of our lives: each time we use our mobile phones, watch TV, or use a search engine.
Harvesting private lives
AI’s intrusion has become an increasingly significant part of everyday life in Europe. The digital transformation of political campaigning, particularly around social media, highlights this. Despite public scares, such as the 2018 Facebook–Cambridge Analytica data scandal, political parties across the world, and indeed in Europe, are relying more and more on harvesting personal data to influence voters. Recent campaigning by populist parties in Spain, Italy, France, and the United Kingdom has specifically focused on using social media technology – increasingly powered by AI – to better target and influence voters’ beliefs about religion and Europe itself. During the European Parliament election campaign in 2019, for example, Podemos, Vox, the 5 Star Movement, Lega, Rassemblement National, France Insoumise, the Brexit Party, and UKIP all published and promoted Eurosceptic messages on their Twitter accounts. Researchers at the Universitat Jaume I, Castellón de la Plana, Spain found that these messages questioned the maintenance of “the foundational values of Europe, such as equality or solidarity between different people and countries.” One party, Rassemblement National, tweeted that by letting in refugees, the European Union was also exposing Europe to terrorism. The party, according to its Twitter campaigning, was the only political option that would protect its voters from immigration and its consequences.
Protecting personal data
The European Union (EU) has been working hard – or harder than others – to address these challenges. Its General Data Protection Regulation (GDPR), which came into force in 2018, ushered in new standards worldwide on a person’s right to his or her own data. A 2018 ruling by the Luxembourg-based Court of Justice of the European Union (ECJ), issued soon after GDPR’s introduction, shows the protection these standards may offer to Europeans. The ruling required Jehovah’s Witnesses to comply with GDPR’s data privacy requirements during door-to-door preaching: “A religious community, such as the Jehovah’s Witnesses, is a controller, jointly with its members who engage in preaching, for the processing of personal data carried out by the latter in the context of door-to-door preaching,” judges said. “The processing of personal data carried out in the context of such activity must respect the rules of EU law on the protection of personal data.” For a start, preachers from Jehovah’s Witnesses would need to obtain people’s consent before taking down their personal details, so that those details may be digitally collected and stored. To be sure, the EU’s GDPR seeks to protect religious as well as secular information as communities, institutions, and organisations in Europe increasingly go through digital transformation.
Another recent push to better deal with the challenges of AI in Europe came from the Roman Catholic Church (RCC). The RCC issued the ‘Rome Call for AI Ethics’ in February 2020, demanding stricter ethical standards on the development of AI, to “protect people,” including the “weak and underprivileged.” It proposed adherence to six broad principles: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy. According to the RCC, this ethical development and use of AI would be possible through the creation of new forms of regulation on “advanced technologies that have a higher risk of impacting human rights.” On privacy specifically, the RCC called for “AI to respect users’ privacy,” that is, the responsible design of AI that respects people’s right to their personal information. The Vatican’s AI ethics initiative met with initial success, with tech giants IBM and Microsoft agreeing to sign on to the initiative. While the RCC hopes to increase the number of companies that are ‘baptised’ into its AI ethics initiative, it is also hoping to collaborate with universities across the globe to promote ethical AI guidelines. Pope Francis repeated this concern in November 2020, when he asked believers around the world to pray that Artificial Intelligence would always serve mankind, adding that this would only be possible if AI was harnessed correctly: “Indeed, if technological progress increases inequalities, it is not true progress. Future advances should be orientated towards respecting the dignity of the person.”
Privacy vs power
But is the RCC’s approach – the creation of ethical standards – enough? For many academics and activists, the only way to deal with AI’s encroachment into our private lives is to take back control of our personal data. While existing standards, such as the EU’s GDPR, protect how our personal data is kept and used, they also let governments and companies use this personal data to monitor and shape our knowledge, attitudes, and behaviours. Dr Carissa Véliz, Associate Professor at the University of Oxford’s Institute for Ethics in AI, argues that digital technology’s greater power to harness our personal data – often without our permission or even awareness – also allows it to steal our power to make free choices and hand it over to governments and companies. In ‘Privacy is Power’, she argues that “tech companies are harvesting your location, your likes, your habits, your relationships, your fears, your medical issues, and sharing it amongst themselves, as well as with governments and a multitude of data vultures. They’re not just selling your data. They’re selling the power to influence you and decide for you.” So, while Europe’s approach of protecting our personal data may be better than others, it, like every other region and country, is unable to protect us from how influential that use may be. Crucially, this influence – increasing with the burgeoning trade in personal data – extends to things we long considered invariably private, and therefore inaccessible to digital technology. Dr Véliz argues that this unprecedented extent of influence compels us to better recognise the power of data: “… people should protect their privacy because privacy is a kind of power.” If we continue to give too much of our personal data away, we may even risk sliding into authoritarianism. “For democracy to be strong, the bulk of power needs to be with the citizenry, and whoever has the data will have the power. Privacy is not a personal preference; it is a political concern,” she insists.
An uncertain future
Are the admirable and well-intentioned efforts of the RCC, or even of the EU’s GDPR, significant enough to thwart the use of our personal data by governments and private industry? It seems unlikely. These efforts do not appear to be enough for people in Europe to take back control: policy and activism would need not only to regulate or ethically steward AI, but also to transform the very institutions that we take part in every day. But is it really possible to take back control of our personal data, as Dr Véliz argues? Can people transform their political relationship with governments and companies? And if so, how should we do it?