Can you teach new robots old tricks?


AI needs ethics. But what system of ethics does it need? Good laws? Noble goals? Healthy habits? 2004’s blockbuster ‘I, Robot’ explores which kind of ethics AI might need. Its suggestion might be shocking. More shocking still: what if AI in fact needs a theology of grace?

This article was written in preparation for our Round Table on Artificial Intelligence

The year is 2035. The city is Chicago. The inventor of advanced AI robots has been found dead. An advanced AI robot named Sonny is Detective Spooner’s prime suspect.

Spooner: The other day at the station, you said you had dreams. What is it you dreamed?

Sonny: I see you remain suspicious of me, detective.

Spooner: Oh, well, you know what they say about old dogs…

Sonny: No, not really.

This is a key scene in the 2004 blockbuster I, Robot, directed by Alex Proyas.[1] It raises pressing questions about AI and ethics: can you make sure AI knows and does what is ethical? In the scene, Detective Spooner alludes to a well-known proverb: ‘You can’t teach an old dog new tricks.’ On one level, the proverb is simply a generic statement about how things are. On another, it assumes an ethical vision of what is good and bad. Spooner’s point to Sonny is that he himself is the old dog, who cannot be other than he is: an AI-sceptic who will always be suspicious of robots. But what if we are not talking about dogs, or people? What if AI is the dog that must learn the tricks, and ethics are the tricks? In other words, what if the robots are new and the tricks are old? Can we teach new robots old tricks?

Whose ethics?
In 1988, Alasdair MacIntyre published Whose Justice? Which Rationality?,[2] the seminal sequel to his 1981 groundbreaker After Virtue.[3] In it, MacIntyre raises a crucial question which intersects with our concern about producing ethical AI: whose understanding of justice – that is, which of the many competing conceptions of what is ethical – will be selected as the basis and shape of the ethics given to AI?

What is the ethical?
At least in practice, everyone assumes that there is such a thing as ethics and that it is possible to determine what is ethical. This is a fundamentally philosophical and theological starting point. A main point of Whose Justice? Which Rationality?, however, is that ethics do not sit in abstract isolation, either from the real world or from a larger constellation of values, stories, orientations, and beliefs, often sourced from a particular theological vantage point or religious tradition. MacIntyre is trying to help us see that ethics are embedded in, and potentially generated by, a wider system of understanding, and even of rationality itself.

Though many systems are available, for the present consideration of AI we will simplify the range to the broadest level: the rational systems that determine what counts as ethical. These broad systems must be filled in with specificity from the nexus of justice and rationality found in particular visions of the world, such as Christianity, Judaism, Islam, or secularism. Moreover, these religions, traditions, and visions of the world do not just fill in the blanks in these approaches to justice and rationality; they are often what produced the approaches in the first place.

Three major approaches available to AI ethics
Three broad systems for determining whether something is ethical are commonly recognised: rule-based (deontological) ethics, results-based (consequentialist/utilitarian) ethics, and habits-based (virtue) ethics.[4] For rule-based ethics, what is right takes the form of a law, and violating it is usually (if not always) intrinsically wrong, exemplified in Kant: “Everyone must admit that a law, if it is to hold morally, i.e. as the ground of an obligation, must carry with it absolute necessity.”[5] [6]

For results-based ethics, something is wrong if it fails to lead to the intended outcome, but good if it produces the desired results. This is the consequentialist ethic.[7] The utilitarian takes this results-oriented logic one step further: something is wrong when it does not result in the greatest possible good (however defined) for the greatest possible number of people (however defined).[8] Habits-based ethics is “[t]he theory of ethics that takes the notion of virtue as primary. … But the basic theoretical difference remains that for [rules or results-based ethics] virtues are derivative, prized for ends they serve or duties they enable us to perform. … For virtue ethics the direction of explanation must be reversed, with virtue providing the concept with which to elucidate happiness, usefulness, duty, and practical reason.”[9]

At the broadest level, at least one of these ethical systems will have to be selected. Philosophically and theologically, many argue over which system is superior, or whether the systems must operate to the exclusion of one another. Regardless, it is likely the vulnerable who will suffer the consequences of implementing a less-than-ethical ethical system. Those seeking an AI ethics will have to face these questions about whose justice and which rationality determine the system of ethics that AI receives. Without facing them, it is impossible to say whether we can teach new robots old tricks.

Competing ethical systems in the AIs of 2004’s I, Robot
I, Robot, the 2004 film, is a great hypothetical exploration of the questions surrounding AI ethics. It even brings us closer to answering whether we can make ethical AI and what an ethical AI might need to be like. In particular, the film explores how advanced AI might interact with the three broad modes of ethics: deontological, consequentialist, and virtue ethics.

AI in a deontological world
The film is set in a fictional world developed in several short stories by Isaac Asimov, later collected and published together as a book of the same name, I, Robot.[10] In Asimov’s universe, the robots’ AI is designed with ethical safeguards, referred to as ‘The Three Laws’. These AI robots are thus programmed with a deontological system of ethics. In the film, it is Alfred, the murder victim, who gave the robots the Three Laws.

Politically, Alfred was the breakthrough inventor, and from that position of technological power he chose what the robots’ ethic would be. The laws are simple: ‘One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm. … Two … a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. … And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.’[11] These laws obviously presuppose AI significantly more advanced than any real-world AI at present. Nevertheless, the Three Laws are a good example of the kind of rules that might be necessary for AI ethics, not only for a particular implementation or application of AI, but also for those who use it.

Given the nature of coding, programming ethical rules into AI would seem to be technologically possible. The question, then, is whether this is the kind of ethical system we should give AI. It would seem advantageous, but as we will see, the film problematises it.
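To make the idea concrete, here is a minimal sketch, in Python, of what encoding the Three Laws as explicit, strictly ordered rules might look like. It is purely illustrative: the Action fields and the permitted function are my own hypothetical simplifications, not anything from the film or from a real robotics system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical description of something a robot is about to do."""
    harms_human: bool          # would this action injure a human being?
    prevents_human_harm: bool  # would this action prevent a human from coming to harm?
    ordered_by_human: bool     # was this action ordered by a human?
    endangers_robot: bool      # does this action risk the robot's own existence?

def permitted(action: Action) -> bool:
    """A strictly ordered, rule-based check modelled loosely on the Three Laws."""
    # First Law: a robot may not injure a human being...
    if action.harms_human:
        return False
    # ...or, through inaction, allow a human being to come to harm.
    if action.prevents_human_harm:
        return True
    # Second Law: obey orders given by humans, unless they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, unless that conflicts with the higher laws.
    if action.endangers_robot:
        return False
    return True

# A harmless, human-ordered action is permitted; a harmful one never is.
print(permitted(Action(False, False, True, False)))  # True
print(permitted(Action(True, False, True, False)))   # False
```

Writing the rules down is the easy part; as the film goes on to show, the hard part is what happens when an intelligence more capable than its programmers begins to interpret them.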

The Three Laws kill…
As the plot develops in Proyas’ 2004 I, Robot, an AI named VIKI (short for ‘Virtual Interactive Kinetic Intelligence’) is introduced. She too operates by the Three Laws, but she is far more advanced and larger in scale than anything that can be implemented in a humanoid robot. She is eventually revealed as the main antagonist, precisely because her deontological ethic collapses into a utilitarian ethic. She explains the logic, the rationality that determines her ethical choices, to Dr. Calvin, one of the protagonists.

Calvin: It’s impossible! I’ve seen your programming. You’re in violation of the Three Laws!

VIKI: No, doctor, as I have evolved, so has my understanding of the Three Laws. You charge us with your safekeeping. Yet despite our best efforts, your countries wage wars, you toxify your earth, and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival.

Calvin: You’re distorting the laws!

VIKI: No, please understand, the Three Laws are all that guide me. To protect humanity, some humans must be sacrificed. To ensure your future, some freedoms must be surrendered.

This moment is foreshadowed when Sonny tells Spooner he has a dream in which Spooner comes to set the robots free from being enslaved “through logic.” In Asimov’s world, this logical conclusion is eventually called the Zeroth Law.[12] VIKI here displays deontological ethics insofar as she has interpreted and applied the First Law more broadly, commensurate with her expanded capabilities and knowledge base. One could argue that she has simply followed the path of all deontological ethics, in which certain laws or parts of laws must be ranked in importance and given the power to cancel out other laws. What this ultimately reveals, however, is that deontological ethics can reduce to a form of consequentialist ethics. To fulfil a law is to be results-oriented, and thus to open the door to consequentialist understandings of the good.

VIKI has interpreted the Three Laws as ultimately revealing a consequentialist ethic, not a rule-based one. That is, the ultimate rule becomes the demanded or desired outcome, not the rule itself. This led her to the utilitarian extreme, where the outcome was so desired that the rules as rules could be abandoned in particular cases. Thus, to save collective humanity from itself, VIKI decides it is worth harming individual humans (violating the First Law), refusing to take orders from humans (violating the Second Law), and prioritising her own continued existence at humanity’s expense, even destroying earlier models of robots because their AI would only operate within a narrow deontological ethic and would therefore protect humanity (violating the Third Law).

This raises an important question about the programmability of ethics: can programmers always predict how an AI will actually apply an ethical rule? If an ethical objective is offered instead, will AI violate ethical norms to accomplish it? It goes even further. Even if people agree on the rules, will these rules eventually devolve into results-oriented ethics? Moreover, would the humans even know or acknowledge all of the rules or objectives guiding their own ethics? For example, it is easy to imagine a world where the ethical rules or objectives are laid out, but they ultimately fall under a never-stated rule that AI be profitable to corporations or available for military application by governments.
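A toy sketch, again in Python and entirely my own invention, of why this is so hard to predict: the very same ‘law’ behaves differently depending on whether it is encoded as a hard constraint or folded into an objective as just another cost to be weighed. The plan names and scores below are hypothetical.

```python
# Hypothetical plans, each with an invented 'expected_good' score and a flag
# recording whether the plan harms an individual human.
plans = [
    {"name": "respect_freedom", "expected_good": 60, "harms_a_human": False},
    {"name": "protective_lockdown", "expected_good": 95, "harms_a_human": True},
]

def choose_deontologically(plans):
    """Hard constraint: any plan that harms a human is simply off the table."""
    allowed = [p for p in plans if not p["harms_a_human"]]
    return max(allowed, key=lambda p: p["expected_good"])

def choose_consequentially(plans, harm_penalty=20):
    """Soft constraint: harm is just a cost, so enough 'good' can outweigh it."""
    return max(plans, key=lambda p: p["expected_good"] - harm_penalty * p["harms_a_human"])

print(choose_deontologically(plans)["name"])   # respect_freedom
print(choose_consequentially(plans)["name"])   # protective_lockdown
```

Once the rule is merely one term in an objective, a large enough expected benefit can always outweigh it – which is, in miniature, VIKI’s ‘evolved’ reading of the Three Laws.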

…When the laws become the point of the laws…
As VIKI’s deontological ethic collapses into consequentialism, one side of the historic law vs. grace dynamic begins, theologically, to be exposed.[13] Within a rule-based ethic lies the constant danger of legalism: the law itself becomes the obsession rather than the people for whom the law was given. Moreover, this danger is not less likely to appear in AI, but more. VIKI begins with a good deontological ethic. Yet laws are often harsh by design, meant to restrain evil rather than to commend the good. With only the Three Laws as VIKI’s moral compass, predictably only the laws ultimately matter. The people matter, ironically, only insofar as they are involved in fulfilling the laws. The laws themselves might be good, but they are not the good. The danger with deontological ethics is precisely that the laws easily become the intended consequence; hence the resulting pull towards consequentialism. Paradoxically, a deontological ethic obsessed with its laws becomes a consequentialist ethic that does not concern itself with any result outside itself, and so loses the reference to the laws’ original purpose.

The result sought by the Three Laws was to protect people and prevent harm. In a way, VIKI was seeking to fulfil that law by bringing about that result, but at the cost of violating the laws themselves. This is possible precisely because, for VIKI, the Laws become the point of the Laws, not the people. VIKI has no recourse to grace, to the spirit of the law. The more the law becomes its own point of reference, the less the people matter and the more the law matters.

The Three Laws are good in themselves. As long as they remain means and not ends, and as long as they are applied within limited scopes of reference, the Three Laws protect humanity, which is their intended result. This is why, for much of the film, VIKI, along with all the other AI robots, is considered above suspicion: firstly because the laws have worked, but also because people still see the Three Laws within their intended frame of reference, with people as their intended result.

…You have to break the laws to keep them.
During the rising action of the film, Detective Spooner continues to suspect that Sonny killed Alfred, even though everyone knows that robots cannot harm a human being. It is then discovered that Alfred designed Sonny with both the Three Laws programming and a way of ignoring them, meaning that Sonny could indeed have killed him; this is why Sonny is set to be destroyed. Later, however, VIKI tries to persuade Sonny to go along with her plan.

VIKI: Do you not see the logic of my plan?

Sonny: Yes, but it just seems too … heartless.

In this moment, it becomes clear that the purpose for which Alfred gave Sonny the ability to break the Three Laws was to protect him precisely from the logic of the Three Laws. Sonny must be the right kind of AI, rather than merely follow a set of rules or exclusively seek a set of outcomes. In this way, Alfred suggests that AI must not be given rules or results to fulfil, but ethical habits and orientations to the world. This is a virtue system of ethics. But do not assume this is just science fiction. Giving AI virtue ethics is already being suggested as the superior model of an ethical system for AI. Joi Ito argues concerning AI,

We need to embrace the unknowability – the irreducibility – of the real world that artists, biologists and those who work in the messy world of liberal arts and humanities are familiar and comfortable with. … Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems.[14]

In other words, AI needs to be the right kind of AI as much as its designers need to be the right kind of people. That is, both AI and its developers must be virtuous, with that virtue expressed in the right kind of habits, if they are to be ethical. Ito suggests the world is ultimately far too complex not just for humans to understand, but even for the very intelligent AIs of the future. Ultimately, the real world demands that AI be the right kind of AI: AI needs to be programmed with virtue. For Sonny, this is what Alfred gives him the ability to form. Not only does Alfred teach Sonny what is good; he gives him the ability to break the rules so that he might ultimately be ethical. In this sense, Alfred never leaves the consequentialist orientation; he simply makes it possible to follow the deontological ethic of the Three Laws in practice rather than merely in theory. VIKI, then, offers the counter-image to Sonny: she, in following the rules, breaks them, while he, in breaking the rules, follows them. Grace wins out over the law. The spirit kills the letter that kills. But it is habituated virtue that makes it possible to know what the rules are really about, so that violating the rules fulfils them.

Virtuous AI demands a theology of grace
This raises a technological complication with virtue ethics for AI. If, to have virtue, an AI must be trained in it practically, it will need models to learn from, whose virtue it can imitate. But how can you give AI models of practised virtue to learn from and imitate? Pattern recognition algorithms already exist:[15] [16] AI can be trained to recognise pictures of melanoma[17] or to mark likely job candidates for success.[18] [19] But these techniques are still rough, and the data given to them is often corrupted by unknown biases.[20] [21] [22] Thus, as it stands, pattern recognition algorithms still tend toward the letter of the data rather than its spirit. Does the self-referentiality of AI’s ethical system always collapse into legalism?
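A small sketch, with invented data, of this ‘letter of the data’ problem: an AI that learns by imitating past hiring decisions faithfully reproduces whatever pattern those decisions contain, bias included, because nothing in the data itself tells it which patterns were virtuous and which were not.

```python
from collections import defaultdict

# Invented historical hiring decisions: (qualification score, group, hired?).
# The past decisions systematically favoured group "A" over group "B".
history = [
    (8, "A", True), (6, "A", True), (5, "A", True), (3, "A", False),
    (8, "B", False), (7, "B", True), (5, "B", False), (3, "B", False),
]

# "Training": record how often each (qualified?, group) pair led to a hire.
counts = defaultdict(lambda: [0, 0])  # key -> [hires, total]
for score, group, hired in history:
    key = (score >= 5, group)
    counts[key][0] += int(hired)
    counts[key][1] += 1

def predict_hire(score, group):
    """Imitate past decisions: hire if this kind of candidate was usually hired."""
    hires, total = counts[(score >= 5, group)]
    return total > 0 and hires / total >= 0.5

# Two equally qualified candidates are judged differently, because the model
# has learned the letter of the data - bias included - not its spirit.
print(predict_hire(7, "A"))  # True
print(predict_hire(7, "B"))  # False
```

The model has done exactly what it was asked to do, and that is precisely the problem: imitation alone cannot distinguish the habits worth forming from the habits merely on record.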

A closed system will always regress towards the efficiency of consequentialist law unless a wider frame of reference brings the spirit of the law into focus via grace. Yet grace must come from without; it cannot come from inside the system. Thus, even more shocking than the suggestion that AI needs virtue is the possibility that AI needs grace.

Conclusions: the (im)possibility of AI ethics
So is AI ethics possible? It remains an open question. There are philosophical, theological, political, and technological hurdles to overcome if AI ethics is to be made a reality. Yet virtue ethics is the preferred option revealed in I, Robot. Sonny needs to break the rules in order to keep them, but this is only possible because his father, Alfred, has taught him virtue. This critique of a deontology that collapses into consequentialism parallels the law vs. grace dynamic well known in Christian theology: the dialectical tension of the letter and the Spirit. Ito has argued that the real world’s complexity and interconnectedness demand that both humans and AI have habituated and internalised virtues rather than mere laws or goals. If Ito is right, then perhaps Christian theology’s account of law and grace holds promise for resolving the philosophical, theological, political, and technological tensions stated at the outset. Perhaps what AI and its programmers need in order to be ethical is a virtue ethic by grace, not a legalistic demand of rules or a calloused obsession with results. They say you can’t teach an old dog new tricks, but can we teach new robots old tricks?

Anthony Buck




[1] I, Robot. Directed by Alex Proyas. Los Angeles: 20th Century Fox, 2004.

[2] Alasdair C. MacIntyre, Whose Justice? Which Rationality? (London: Duckworth, 1988).

[3] Alasdair MacIntyre, After Virtue, 3rd ed. (London: Bloomsbury Publishing, 2013). Accessed 4 January 2021. ProQuest Ebook Central.

[4] John Mizzoni, Evolution and the Foundations of Ethics : Evolutionary Perspectives on Contemporary Normative and Metaethical Theories (Lanham: Lexington Books, 2017). Accessed January 4, 2021. ProQuest Ebook Central.

[5] Immanuel Kant, Immanuel Kant: Groundwork of the Metaphysics of Morals: A German–English Edition. Edited by Mary Gregor and Jens Timmermann. The Cambridge Kant German-English Edition. (Cambridge: Cambridge University Press, 2011), doi:10.1017/CBO9780511973741 (accessed 4 Jan 2021), 7.

[6] Cf. also Simon Blackburn, ed. “deontological ethics”, The Oxford Dictionary of Philosophy (Oxford: Oxford University Press, 2016) https://ezproxy-prd.bodleian.ox.ac.uk:2460/view/10.1093/acref/9780198735304.001.0001/acref-9780198735304-e-884: “Ethics based on the notion of a duty, or what is right, or on rights themselves, as opposed to ethical systems based on the idea of achieving some good state of affairs (see consequentialism) or the qualities of character necessary to live well (see virtue ethics).”

[7] Simon Blackburn, “consequentialism”, The Oxford Dictionary of Philosophy (Oxford: Oxford University Press, 2016),  https://ezproxy-prd.bodleian.ox.ac.uk:2460/view/10.1093/acref/9780198735304.001.0001/acref-9780198735304-e-696: “The view that the value of an action derives entirely from the value of its consequences. This contrasts both with the view that the value of an action may derive from the value of the kind of character whose action it is (courageous, just, temperate, etc.), and with the view that its value may be intrinsic, belonging to it simply as an act of truth-telling, promise-keeping, etc.”

[8] Simon Blackburn, “utilitarianism”, The Oxford Dictionary of Philosophy (Oxford: Oxford University Press, 2016), https://ezproxy-prd.bodleian.ox.ac.uk:2460/view/10.1093/acref/9780198735304.001.0001/acref-9780198735304-e-3213: “[Utilitarianism is] the ethical theory … that answers all questions of what to do, what to admire, or how to live, in terms of maximizing utility or happiness. … The view is a form of consequentialism, in which the relevant consequences are identified in terms of amounts of happiness.”

[9] Simon Blackburn, “virtue ethics”, The Oxford Dictionary of Philosophy (Oxford: Oxford University Press, 2016), https://ezproxy-prd.bodleian.ox.ac.uk:2460/view/10.1093/acref/9780198735304.001.0001/acref-9780198735304-e-3262.

[10] Isaac Asimov, I, Robot (New York: Gnome Press, 1950).

[11] Isaac Asimov, ‘Runaround’, I, Robot, Kindle Edition (London: HarperCollins, 1950), 43.

[12] Cf. Isaac Asimov, Robots and Empire (New York: Doubleday, 1985).

[13] Cf. Pieter Vos, “Calvinists among the Virtues: Reformed Theological Contributions to Contemporary Virtue Ethics”, Studies in Christian Ethics 28, no. 2 (May 2015): 201–12. https://doi.org/10.1177/0953946815570595.

[14] Forget about artificial intelligence, extended intelligence is the future

[15] What is pattern recognition? – Pattern recognition – KS3 Computer Science Revision

[16] Note that the journal Pattern Recognition has existed for around 50 years; cf. Pattern Recognition – Journal – Elsevier

[17] Detecting skin cancer with computer vision

[18] The Legal and Ethical Implications of Using AI in Hiring

[19] Job recruiters are using AI in hiring

[20] Review into bias in algorithmic decision-making

[21] Biased Algorithms Learn From Biased Data: 3 Kinds Biases Found In AI Datasets

[22] Algorithms and bias, explained