How to build trust into big data and AI
You can’t put the sugar in a cake after it’s baked. What if the same is true of ethical and trustworthy big data and AI technologies? What if we have to bake trust into them from the start? Anthony Buck’s third weekly comment in his series on trust in a world of big data and AI explores these questions.
Can you rebuild trust after it has been broken? Can you have trust if only one side has all the power, and even seems to prefer you not know exactly how much or what kind of power they have? These are the questions that have arisen in light of the new age of digital technologies including big data and AI.
We have already asked ‘Can religions trust Big Tech and governments with Big Data and AI?’ and ‘Can religions trust themselves with big data and AI?’ We have seen that the powerful digital technologies of big data and AI are terrifying in their scope and in their potential for misuse, whether they are in the hands of Big Tech (e.g. Google, Facebook, Amazon), governments, or even religions. But the risk of abuse is not merely hypothetical; there has already been concerning and unethical behaviour.
Ultimately, this brings us back to a question of trust. But the question is not how Big Tech, governments, and religious organisations can make sure people still feel like they can be trusted. The question that we should really be asking is this: how can anyone, any group, any company, any government, any religious institution actually be and stay genuinely trustworthy with the immense powers of big data and AI? The fact that many already assume untrustworthiness only highlights how important this second question is.
Is trustworthiness even relevant?
Why would people use technology they do not trust? On the one hand, because they have little choice. The modern world is built around digital technologies, and only more so since the pandemic pushed businesses and churches alike to digitise how they run and what they offer. On the other hand, because some of these technologies are in use in non-digital spaces. Kuzzma, which sells facial recognition and AI analytics to churches, deploys its technology in physical venues. Most people who go to church know God is watching – and that other people are too – but many would be surprised to learn that AI tracking software was as well. Likewise, a group of nuns protested Amazon’s sale of its facial recognition technology to the US government. In effect, regardless of trust, people have little choice about whether they use these technologies, and even less about whether some organisation uses these technologies on them.
Moreover, big data and AI are not going anywhere. They will likely only get more powerful with time. The potential to use them for good – maybe even great good, like detecting and treating cancer – should not be overlooked, even if the risk of using them for evil – even egregious evil – is apparent, such as helping a totalitarian government find and eradicate protestors, or even ethnic or religious minorities. So, even though these amazing technologies carry the power both to harm and to help, the expectation that their power will continue to increase means the question of trustworthiness around big data and AI will only become more important.
Can we make trust a part of how digital technologies work?
What we are really asking is this: how can we build trust into big data and AI? We need to design these digital technologies so that trust is always a part of how they work – not something that has to be earned after the fact, or that simply becomes irrelevant because there are no alternatives to untrustworthy technologies. Because we are still at the very beginning of this technological revolution, we can change what it means to develop and use these technologies.
This is the kind of solution I think religions, as well as Big Tech and governments, need to consider and implement. After all, these technologies are only going to become more widespread. The dangers they pose to religions can come either from the external possibility of discrimination and/or oppression or from the internal temptation to use them for their own purposes. Religions, tech firms, and governments can develop these technologies intending to use them for good, but the road to hell is paved with good intentions. If trust is not built into the way digital technologies work, there will always be a risk, or an assumption, that those who hold them will abuse their newfound powers. Unchecked power has never led to human flourishing for all. Just think of Hitler’s Nazi Germany, Franco’s Spain, Soviet Russia, Imperial Japan in WWII, or more recently IS and North Korea, to say nothing of the medieval Roman Catholic Church’s use of indulgences or the transatlantic slave trade. As they say, “power corrupts, and absolute power corrupts absolutely.” Now, I am not naive. There is no way to eliminate all risk of these technologies being used for evil. But perhaps we can build a world where that risk is known and the abuse of power less likely – especially a world where the risk to, and from, religions is minimised.
5 ways to build trust into digital technologies
Some of my solutions in the next few articles will be radical – more radical than many would be comfortable with – but some of them at least might be just radical enough to work. If the issue is trust, and trust is by nature something people share, then the solution requires sharing. Specifically, I am going to suggest that to build trust into big data and AI, we will need to share 5 things.
- Share access
- Share development
- Share profits
- Share accountability
- Share dialogue
These may seem obvious. They may not even seem that original or that radical. But to know how sharing these 5 things will build trust into big data and AI, or why I suspect many will find them radical, you will have to keep reading. Until then, I would love to hear in the comments how you might suggest building trust into these technologies.
This article was written by Anthony Buck and reflects his personal analysis and opinion, rather than those of EARS.
Interested in similar topics? Go to our Dashboard and receive free updates.
Sandra Wachter, ‘Affinity Profiling and Discrimination by Association in Online Behavioural Advertising’, Berkeley Technology Law Journal 35.2 (2020, forthcoming). Accessed 16 Dec 2020. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3388639
Tomi Dufva and Mikko Dufva, ‘Grasping the Future of the Digital Society’, Futures 107 (2019): 17-28. https://doi.org/10.1016/j.futures.2018.11.001