Article
AI - Artificial Intelligence
Culture
5 min read

What AI needs to learn about dying and why it will save it

Those programming truthfulness can learn a lot from mortality.

Andrew Steane has been Professor of Physics at the University of Oxford since 2002. He is the author of Faithful to Science: The Role of Science in Religion.

An angel of death lays a hand on a humanoid robot that has died amid a data centre
A digital memento mori.
Nick Jones/midjourney.ai

Google recently got itself into some unusual hot water when its Gemini generative AI software started putting out images that were not just implausible but downright unethical. CEO Sundar Pichai has taken the situation in hand, and I am sure it will improve. But even before this episode it was clear that currently available chat-bots, while impressive, are capable of generating misleading or fantastical responses, and in fact they do this a lot. How to manage this?

Let’s use the initials ‘AI’ for artificial intelligence, leaving open whether or not the term is entirely appropriate for the transformer and large language model (LLM) methods currently available. The problem is that the LLM approach leads chat-bots to generate both reasonable, well-supported statements and images, and also unsupported, fantastical (delusory and factually incorrect) ones, without giving the human user any guidance in telling which is which. The LLMs, as developed to date, have not been programmed in such a way as to pay attention to this issue. They are subject to the age-old problem of computer programming: garbage in, garbage out.

If, as a society, we advocate for greater attention to truthfulness in the outputs of AI, then software companies and programmers will try to bring it about. It might involve, for example, greater investment in electronic authentication methods. An image or document will have to have, embedded in its digital code, extra information serving to authenticate it by some agreed and hard-to-forge method. The 2002 science fiction film Minority Report includes an example of this: the name of a person accused of a ‘pre-crime’ (in the terminology of the film) is inscribed on a wooden ball, so that the unique cellular structure of a given piece of hardwood serves as a data substrate that is near impossible to duplicate.

It is clear that a major issue in the future use of AI by humans will be the issue of trust and reasonable belief. On what basis will we be able to trust what AI asserts? If we are unable to check the reasoning process in a result claimed to be rational, how will we be able to tell that it was in fact well-reasoned? If we only have an AI-generated output as evidence of something having happened in the past, how will we know whether it is factually correct?

Among the strategies that suggest themselves is the use of several independent AIs. If they are indeed independent and all propose the same answer to some matter of reasoning or of fact, then there is a prima facie case for increasing our degree of trust in the output. This will give rise to the meta-question: how can we tell that a given set of AIs are in fact independent? Perhaps they all were trained on a common faulty data set. Or perhaps they were able to communicate with each other and thus influence each other.  

The questions we face with AI thus come close to some of those we face when dealing with one another as humans. We know humans in general are capable of both ignorance and deliberate deception. We manage this by building up degrees of trust based on whether or not people show behaviours that suggest they are trustworthy. This also involves the ability to recognize unique individuals over time, so that a case for trustworthiness can be built up over a sequence of observations. We also need to get a sense of one another's character in more general ways, so that we can tell if someone is showing a change in behaviour that might signal a change in their degree of trustworthiness. 

Issues of trust and of reasonable belief are very much grist to the mill of theology. The existing theological literature may have much that can be drawn upon to help us in this area. An item which strikes me as particularly noteworthy is the connection between suffering, loss and the earning of trust, and its relation to mortality. In brief, a person you can trust is one who has ventured something of themselves on their pronouncements, such that they have something to lose if they prove to be untrustworthy. In a similar vein, a message which is costly to the messenger may be more valuable than one which costs the messenger nothing. They have already staked something on their message. This implies they are working all the harder to exert their influence on you, for good or ill. (You will need to know them in other ways in order to determine which of good or ill is their intention.)

Mortality brings this issue of cost to a point of considerable sharpness. A person willing to die on behalf of what they claim certainly invests a lot in their contribution. They earn attention. It is not a guarantee of rationality or factual correctness, but it is a demonstration of commitment to a message. It signals a sense of importance attached to whatever has demanded this ultimate cost. Death becomes a form of bearing witness.  

A thought-provoking implication of the above is that in order to earn our trust, an AI too will have to be able to suffer and, perhaps, to die. 

In the case of human life, even if making a specific claim does not itself lead directly to one's own death, the very fact that we die lends added weight to all the choices we make and all the actions we take. For, together, they are our message and our contribution to the world, and they cannot be endlessly taken back and replaced. Death will curtail our opportunity to add anything else or qualify what we said before. The things we said and did show what we cared about whether we intended them to or not. This effect of death on the weightiness of our messages to one another might be called the weight of mortality. 

In order for this kind of weight to become attached to the claims an AI may make, the coming death has to be clearly seen and understood beforehand by the AI, and the timescale must not be so long that the AI’s death is merely some nebulous idea in the far future. Also, although there may be some hope of new life beyond death it must not be a sure thing, or it must be such that it would be compromised if the AI were to knowingly lie, or fail to make an effort to be truthful. Only thus can the pronouncements of an AI earn the weight of mortality. 

For as long as AI is not imbued with mortality and the ability to understand the implications of its own death, it will remain a useful tool as opposed to a valued partner. The AI you can trust is the AI reconciled to its own mortality. 

Article
Books
Culture
Morality
5 min read

Never Let Me Go: 20 years on

Ishiguro’s brilliant novel is the perfect Frankenstein story for today.

Beatrice writes on literature, religion, the arts, and the family. Her published work can be found here.

Four young people peer through a window.
Carey Mulligan and Keira Knightley in the 2010 film adaptation.
Fox Searchlight Films.

This article contains spoilers. 

Human beings are creative. For good or for evil, making new things out of raw materials is something that we can’t help doing, whether that’s writing new books, creating new recipes, or building new houses. Why are we born this way? Christians would say it’s because of the imago Dei: because according to the book of Genesis, the first book in the Bible, we are made in the image of God. If God created the world and every one of us, and if we’re made in his image, then it follows that all of us have this creative impulse within us, too.  

But if creating is something natural to us, does it follow that it’s also core to our identity as human beings? In other words, is making something that we do, or something that we are? Are we different from all other living creatures in this world by being creators ourselves?  

Although he doesn’t call himself a Christian, the novelist Kazuo Ishiguro asks precisely these kinds of theological questions time and time again in his books. And nowhere does he ask them more powerfully than in Never Let Me Go, which was published 20 years ago.

Never Let Me Go starts off as the story of three children at a boarding school. Kathy, one of three friends, serves as our first-person narrator; it’s through her eyes that we slowly realise something sinister is taking place. As Kathy, Tommy, and Ruth grow into teenagers and then young adults, it’s finally revealed that they are clones, brought into being thanks to advancements in cloning technology in a dystopian post-World War II Britain. They are brought up for the sole purpose of being organ donors. Or, to put it more bluntly, they’ve been raised for slaughter.  

Kathy, Ruth, and Tommy have a happy childhood at their boarding school, Hailsham. Their future is hinted at by their teachers, but they’re largely shielded from the truth. All around the country, we later find out, clone children are being raised in horrific conditions. But Hailsham is different, because its Headteacher, Miss Emily, is part of a group that believes the clones deserve to be treated humanely – at least until someone needs a kidney transplant.  

But, though treated in a ‘humane’ way, society doesn’t see the Hailsham clones as ‘human’, and that’s precisely what Miss Emily is trying to prove: that they are not unlike real, normal people. So, she encourages the children to make art. ‘A lot of the time’, Kathy tells us, ‘how you were regarded at Hailsham, how much you were liked and respected, had to do with how good you were at “creating”’. The children don’t understand why they must always paint and draw, but they’re told that Madame Marie-Claude, a mysterious figure, will collect their best artworks for a seemingly important ‘gallery’.  

Years later, Tommy and Kathy have become a couple. Before dying – or ‘completing’, as they call it – after her second ‘donation’, Ruth tells them that she believes a deferral is possible for couples that are truly in love. Kathy and Tommy go to the house of Miss Emily, their former Headmistress, certain that, as children, they were encouraged to produce art precisely so that one day they could prove their true feelings.

They are quickly disappointed. Miss Emily reveals that Hailsham has now closed down, but that while it stood, the school was an experiment aimed at convincing the public to improve living conditions for the clones:

‘We took away your art because we thought it would reveal your souls. Or to put it more finely, we did it to prove you had souls at all…we demonstrated to the world that if students were reared in humane, cultivated environments, it was possible for them to grow to be as sensitive and intelligent as any human being.’ 

Equating creativity with human identity does make sense, to an extent at least. In The Mind of the Maker (1941), Christian novelist and critic Dorothy L. Sayers argued that the closest we can get to understanding God as our Creator is through engaging ourselves in creative acts: ‘the experience of the creative imagination in the common man or woman and in the artist is the only thing we have to go upon in entertaining and formulating the concept of creation’. In creative acts, from a Christian perspective, we partially grasp God’s creation of us.  

Ultimately, however, being creative in imitation of God is not enough to get to the very core of what defines a human being. There are all kinds of factors, from old age to mental or physical disability, that make any form of traditionally creative act highly unlikely for some people. By that definition, someone in a coma or a newborn baby is not fully human.

That’s exactly the definition of humanity that underpins the cruel society of Ishiguro’s Never Let Me Go. We need a better definition, and Christianity provides a unique tradition to help us on the way. A Christian concept of the human person is one that looks both at why we were made, and what we were made for. Christians believe that God made us out of love, and for the purpose of being in communion with him. He made each one of us as a special and irreplaceable individual, and for each of us our telos – the end or aim of our life – is to join him in heaven.  

If we embrace this definition of what it means to be human, then the extent to which we are able to express our intelligence or creativity while on earth doesn’t really matter anymore. If we believe that merely to exist is good – not to exist and fulfil our potential through this or that accomplishment, but just to exist – then we can’t deny that each member of the human family is, in fact, a ‘person’ in the fullest sense of the word.  

It is precisely this God-shaped hole that makes the concept of human dignity so fragile and slippery in Never Let Me Go. Ishiguro’s brilliant novel is, ultimately, the perfect Frankenstein story for the modern day. It warns us about the consequences of treating other human beings as things we have paid for, but even more powerfully it shows us the danger of valuing human life for its creativity, instead of loving it as the creation of God.
