How to build trust into big data and AI



You can’t put the sugar in a cake after it’s baked. What if the same is true of ethical and trustworthy big data and AI technologies? What if we have to bake trust into them from the start? Anthony Buck’s third weekly comment in his series on trust in a world of big data and AI explores these questions.


Can you rebuild trust after it has been broken? Can you have trust if only one side holds all the power, and even seems to prefer that you not know exactly how much or what kind of power it has? These are the questions that have arisen in light of the new age of digital technologies, including big data and AI.

We have already asked ‘Can religions trust Big Tech and governments with Big Data and AI?’ and ‘Can religions trust themselves with big data and AI?’ And we have seen that the powerful digital technologies of big data and AI are terrifying in their scope and in their potential for misuse, whether they are in the hands of Big Tech (e.g. Google, Facebook, Amazon),[1] [2] governments,[3] [4] [5] or even religions.[6] [7] But there is not only an ongoing risk of abuse;[8] there has already been some concerning and unethical behaviour.[9] [10] [11] [12] [13] [14] [15]

Ultimately, this brings us back to a question of trust. But the question is not how Big Tech, governments, and religious organisations can make sure people still feel like they can be trusted. The question that we should really be asking is this: how can anyone, any group, any company, any government, any religious institution actually be and stay genuinely trustworthy with the immense powers of big data and AI? The fact that many already assume untrustworthiness only highlights how important this second question is.[16]

Is trustworthiness even relevant? 
Why would people use technology they do not trust? On the one hand, because they have little choice. The modern world is built around digital technologies,[17] [18] [19] and only more so since the pandemic has pushed businesses and churches alike towards digitising how they run and what they offer.[20] [21] [22] [23] [24] [25] [26] [27] [28] [29] [30] [31] [32] [33] [34] [35] On the other hand, because some of these technologies are in use in non-digital spaces.[36] Kuzzma, which sells facial recognition and AI analytics to churches, deploys its technology in physical venues.[37] Most people who go to church know that God is watching – and that other people are too – but many would be surprised to learn that AI tracking software was watching as well. Likewise, a group of nuns protested Amazon’s sale of its facial recognition technology to the US government.[38] In effect, regardless of trust, people have little choice about whether they use these technologies, and even less about whether some organisation uses these technologies on them.

Moreover, big data and AI are not going anywhere.[39] [40] They will likely only get more powerful with time.[41] The potential to use them for good – maybe even great good, like detecting and treating cancer[42] [43] – should not be overlooked, even if the risk of using them for evil – even egregious evil – is apparent, such as helping a totalitarian government find and eradicate protestors[44] or even ethnic or religious minorities.[45] So, even though these technologies carry both the power to harm and the power to help,[46] the expectation that their power will continue to grow means the question of trustworthiness around big data and AI will only become more important.

Can we make trust a part of how digital technologies work? 
What we are really asking is this: how can we build trust into big data and AI? We need to design these digital technologies so that trust is always part of how they work, not something that has to be earned after the fact, or that simply becomes irrelevant because there are no alternatives to untrustworthy technologies. Because we are still at the very beginning of this technological revolution, we can still change what it means to develop and use these technologies.

This is the kind of solution I think religions, as well as Big Tech and governments, need to consider and implement. After all, these technologies are only going to become more widespread. The dangers they pose to religions can come either from the external possibility of discrimination and/or oppression or from the internal temptation for religions to use them for their own purposes. Religions, tech firms, and governments can develop these technologies and intend to use them for good, but the road to hell is paved with good intentions. If trust is not built into the way digital technologies work, there will always be a risk, or at least an assumption, that those who hold these powers will abuse them. Unchecked power has never led to human flourishing for all. Just think of Hitler’s Nazi Germany, Franco’s Spain, Soviet Russia, Imperial Japan in WWII, or, more recently, IS and North Korea, to say nothing of the medieval Roman Catholic Church’s use of indulgences or the transatlantic slave trade. As they say, “power corrupts, and absolute power corrupts absolutely.” Now, I am not naive. There is no way to eliminate all risk of these technologies being used for evil. However, perhaps we can build a world where that risk is known and the abuse of power less likely, and especially a world where the risk to and from religions is minimised.

5 ways to build trust into digital technologies
Some of my solutions in the next few articles will be radical, more radical than many would be comfortable with, but some of them at least might be just radical enough to work. Yet if the issue is trust, and trust is by nature something that people share, then the solution requires sharing. Specifically, I am going to suggest that building trust into big data and AI will require sharing 5 things.

  1. Share access
  2. Share development
  3. Share profits
  4. Share accountability
  5. Share dialogue 

These may seem obvious. They may not even seem that original or that radical. But to know how sharing these 5 things will build trust into big data and AI, or why I suspect many will find them radical, you will have to keep reading. Until then, I would love to hear in the comments how you might suggest building trust into these technologies.

This article was written by Anthony Buck and reflects his personal analyses and opinions, rather than those of EARS.




[1] Algorithms drive online discrimination, academic warns

[2] Analysis | The Technology 202: Nuns are leading the charge to pressure Amazon to get religion on facial recognition

[3] Perspective | China’s alarming AI surveillance of Muslims should wake us up

[4] Opinion | China’s Orwellian War on Religion

[5] Your body is a passport – POLITICO

[6] Iglesias brasileñas adquieren tecnología de reconocimiento facial para controlar la asistencia y emociones de sus fieles

[7] Kuzzma Inteligência Artificial

[8] European Commission, White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, COM(2020) 65 final, Brussels, 19 February 2020.

[9] EU companies selling surveillance tools to China’s human rights abusers

[10] Sandra Wachter, ‘Affinity Profiling and Discrimination by Association in Online Behavioural Advertising’, Berkeley Technology Law Journal 35.2 (2020, forthcoming). Accessed 16 December 2020. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3388639

[11] Indonesian app to report ‘misguided’ religion ‘risks dividing neighbors’

[12] India’s founding values are threatened by sinister new forms of oppression | Madhav Khosla

[13] Margaret Mitchell: Google fires AI ethics founder

[14] Google fires second AI ethics researcher following internal investigation

[15] Google fires top AI ethicist

[16] Theme 3: Trust will not grow, but technology usage will continue to rise as a ‘new normal’ sets in

[17] Tomi Dufva and Mikko Dufva, “Grasping the Future of the Digital Society”, Futures 107 (2019): 17-28. https://doi.org/10.1016/j.futures.2018.11.001

[18] HBR – Digital Transformation Is Not About Technology

[19] If Our Future Is Digital, How Will It Change the World?

[20] COVID-19 digital transformation & technology

[21] Digital globalization: The new era of global flows

[22] Archbishop urges bishops to back decree compliance | in-cyprus.com

[23] Coronavirus: Religious studies teachers say churches should be opened

[24] Påsken ringes, synges og dyttes ind med drive-in gudstjenester i kirker landet over

[25] Voxpop: Hvordan fejrer jødiske familier i Danmark pesach under coronakrisen?

[26] La messe télévisée est-elle « valide » ?

[27] Kirche und Glaube in Coronazeiten – Der Gottesdienst “fehlt schon sehr”

[28] Gottesdienst mit Auflagen: Kein Gesang, Hostie mit der Zange

[29] Over 1.5m view religious service from Knock online

[30] UK Muslims prepare to take Ramadan online | World news

[31] From matzah deliveries to taking seders online, UK Jews adjust to lockdown

[32] Chief Rabbi: Zoom seders not allowed but tech can ‘enhance Pesach experience’

[33] British public turn to prayer as one in four tune in to religious services

[34] Churches turn to the internet to reach their flocks

[35] Religions show faith in power of technology

[36] Iglesias brasileñas adquieren tecnología de reconocimiento facial para controlar la asistencia y emociones de sus fieles

[37] Kuzzma Inteligência Artificial

[38] Analysis | The Technology 202: Nuns are leading the charge to pressure Amazon to get religion on facial recognition

[39] European Commission, White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, COM(2020) 65 final, Brussels, 19 February 2020.

[40] The Future of Big Data: Five Predictions From Experts for 2020 to 2025

[41] Future of AI according to top AI experts of 2021: In-Depth Guide

[42] Artificial intelligence in healthcare: past, present and future

[43] Use of AI in healthcare & medicine is booming – here’s how the medical field is benefiting from AI in 2021 and beyond

[44] India’s founding values are threatened by sinister new forms of oppression | Madhav Khosla

[45] Perspective | China’s alarming AI surveillance of Muslims should wake us up

[46] https://ec.europa.eu/digital-single-market/en/policies/big-data