Can religions trust Big Tech and governments with Big Data and AI?
There are accusations that Big Tech and governments are using big data and AI technologies against religions. If true, can religions trust Big Tech and governments with big data and AI? Our analyst Anthony Buck aims to find out in his latest article.
‘Information is power’ raises the question of trust
‘Big data’ is a growing area of concern worldwide. But what is it? Massive datasets compiled by tracking software. Some programs track your social media posts; others record your Google searches. These are just the tip of the iceberg. The amount of information that can be acquired is mind-boggling. This is big data: too big for any human to sort or use easily. This is why AI technologies are often produced to analyse and manipulate big data. While big data and AI may seem like magic, they are actually powerful technologies produced and deployed by ‘Big Tech’ firms (e.g. Google, Facebook, Amazon) and by governments. Religions across the world – like most features of society – must now engage with the reality of big data and AI. Setting aside some of the huge ethical questions, we can boil down an essential issue to this: trust.
Trust is required when there is so much power in play: the power to generate information as well as the power to employ that information for particular – not always transparent or accountable – ends. The question is not whether Big Tech and governments are using big data, but how and for what purpose. Yet, for religions, as much as for individuals, the answers to how and why often remain obscured. Moreover, religions, Big Tech, and governments often have conflicting interests. With powers of this magnitude, the risk of discrimination or outright oppression through AI surveillance or algorithmic manipulation is great. So, the question is: can religions – which are communities built on shared trust – trust Big Tech and/or governments with the power of big data and AI? Maybe, maybe not. Let’s look at some real-world examples and see if we can decide whether trust has been earned.
Big data and AI: a threat to religion?
Digital technologies, including big data and AI, are not neutral. They shape people in a quasi-religious way. I have explored this question elsewhere, so I will not belabour the point here. I will simply note that it is an open question whether the nature of these digital technologies themselves can be trusted by religions, or whether they have a tendency to undermine or co-opt religious visions of the world. Having been designed by Big Tech and governments with different aims in mind, they may in fact threaten the hopes and the intrinsic human dignity and rights that religions emphasise.
Yet, this just reinforces the need to consider the biggest, most looming question: can Big Tech and governments be trusted with using the information gained by widespread digital surveillance for good and not evil? After all, what might be good for Big Tech might not be good for religion.
What, then, are the evils that could be done? First, and most generally, there is the threat of manipulation and/or commodification of religious people, communities, and institutions if they accept increasing surveillance by Big Tech and governments. It is worth adding that mass AI surveillance also threatens the democratic rights to freedom of religion, speech, organisation, etc. that most European and global democracies promise to maintain. Second, Big Tech and governments can use these technologies to influence religious communities to buy certain products or to accept or reject certain views. Given Europe’s history of weaponising religious categories – as the Nazis did against both Jews and Christians – the danger posed here looms large on the mental horizon.
Yet, this is not hypothetical. Religious discrimination already occurs. An Oxford Internet Institute study warns that Big Tech firms, such as Facebook, use ‘discrimination by association’ against correlated religious beliefs in order to promote, hide, or adjust the price of content and adverts. Sandra Wachter offers this example: “Affinity groups could be seen as equivalent to protected groups on the basis that the affinity group (e.g. ‘interested in Muslim culture’) is defined against an explicit protected attribute (e.g. ‘religion’) or a strong proxy for that attribute (e.g. ‘headscarf wearer’).” In fact, Catholic nuns are already pushing back against Big Tech over discrimination in its AI algorithms, on behalf of vulnerable members of the public.
Unfortunately, governments too are using big data and AI to dominate, control, and even perpetrate violence: in China against Uyghur Muslims and Christian house churches, in India against Muslims, and in Indonesia in the milder form of a ‘misguided belief’ reporting app. Meanwhile, the EU and US use big data and AI against religious communities to generate ‘risk profiles’ based in part on religion. Yet these risk profiles can easily be used to discriminate and control, not simply to flag possible terrorists. This potential is set to grow as Big Tech and governments work together to replace physical passports with ‘efficient’ digital biometrics. Further, Amnesty International reports that the EU is currently allowing Big Tech to sell its big data to the Chinese government, and argues that a failure to rectify the situation makes the EU complicit in China’s abuses.
The risk to religions, then, is not just theoretical or potential when it comes to Big Tech and governments developing and utilising big data and AI. It is an actual and present reality: these technologies are already being used against religions. Many people assume they can trust Facebook or Google with their information, and likewise the UK or Spanish governments, at least to a minor degree. Yet it is hard to trust entities that are actively using their power against you. This is a real and present violation of trust. In fact, the invitation of religion into the conversation, evidenced in the Rome Call for AI Ethics, suggests not merely a sudden interest in ethics, but an awareness by Big Tech and governments that they either have already transgressed, or may soon be perceived to have transgressed, religious and public trust in their use of big data and AI.
Are big data and AI an opportunity for trust or a growing threat?
Thus, the external risks to religions are present and force our question of trust. But this also raises the question of whether there is an internal risk to religion with big data and AI. That is, do religions use these technologies too? And if so, can religions trust themselves with these powerful technologies? And can people trust religions with them? I take up this important question in an upcoming article. For now, we must return to concluding our thoughts about whether religions can trust Big Tech and governments with big data and AI.
Trust, as they say, must be earned. So, as things stand, the answer to ‘Can religions trust Big Tech and governments with big data and AI?’ could itself be phrased as a question: have they earned it? As far as I’m concerned, the obvious answer is ‘no’.
This article was written by Anthony Buck and reflects his personal analysis and opinion, rather than those of EARS.
Sandra Wachter, ‘Affinity Profiling and Discrimination by Association in Online Behavioural Advertising’, Berkeley Technology Law Journal 35.2 (2020, forthcoming), 7. Accessed 16 December 2020. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3388639#