Article
AI - Artificial Intelligence
Culture
5 min read

What AI needs to learn about dying and why it will save it

Those programming truthfulness can learn a lot from mortality.

Andrew Steane has been Professor of Physics at the University of Oxford since 2002. He is the author of Faithful to Science: The Role of Science in Religion.

An angel of death lays a hand on a humanoid robot that has died amid a data centre.
A digital memento mori.
Nick Jones/midjourney.ai

Google got itself into some unusual hot water recently when its Gemini generative AI software started putting out images that were not just implausible but downright unethical. The CEO Sundar Pichai has taken the situation in hand and I am sure it will improve. But before this episode it was already clear that currently available chat-bots, while impressive, are capable of generating misleading or fantastical responses, and in fact they do this a lot. How to manage this?

Let’s use the initials ‘AI’ for artificial intelligence, leaving it open whether or not the term is entirely appropriate for the transformer and large language model (LLM) methods currently available. The problem is that the LLM approach causes chat-bots to generate both reasonable and well-supported statements and images, and also unsupported and fantastical (delusory and factually incorrect) statements and images, and this is done without giving the human user any guidance in telling which is which. The LLMs, as developed to date, have not been programmed in such a way as to pay attention to this issue. They are subject to the age-old problem of computer programming: garbage in, garbage out.

If, as a society, we advocate for greater attention to truthfulness in the outputs of AI, then software companies and programmers will try to bring it about. It might involve, for example, greater investment in electronic authentication methods. An image or document will have to have, embedded in its digital code, extra information serving to authenticate it by some agreed and hard-to-forge method. The 2002 science fiction film Minority Report included an example of this: the name of a person accused of a ‘pre-crime’ (in the terminology of the film) is inscribed on a wooden ball, so as to use the unique cellular structure of a given piece of hardwood as a form of data substrate that is near impossible to duplicate.

The questions we face with AI thus come close to some of those we face when dealing with one another as humans. 

It is clear that a major issue in the future use of AI by humans will be the issue of trust and reasonable belief. On what basis will we be able to trust what AI asserts? If we are unable to check the reasoning process in a result claimed to be rational, how will we be able to tell that it was in fact well-reasoned? If we only have an AI-generated output as evidence of something having happened in the past, how will we know whether it is factually correct?

Among the strategies that suggest themselves is the use of several independent AIs. If they are indeed independent and all propose the same answer to some matter of reasoning or of fact, then there is a prima facie case for increasing our degree of trust in the output. This will give rise to the meta-question: how can we tell that a given set of AIs are in fact independent? Perhaps they were all trained on a common faulty data set. Or perhaps they were able to communicate with each other and thus influence each other.

The questions we face with AI thus come close to some of those we face when dealing with one another as humans. We know humans in general are capable of both ignorance and deliberate deception. We manage this by building up degrees of trust based on whether or not people show behaviours that suggest they are trustworthy. This also involves the ability to recognize unique individuals over time, so that a case for trustworthiness can be built up over a sequence of observations. We also need to get a sense of one another's character in more general ways, so that we can tell if someone is showing a change in behaviour that might signal a change in their degree of trustworthiness. 

In order to earn our trust, an AI too will have to be able to suffer and, perhaps, to die. 

Issues of trust and of reasonable belief are very much grist to the mill of theology. The existing theological literature may have much that can be drawn upon to help us in this area. An item which strikes me as particularly noteworthy is the connection between suffering, loss and the earning of trust, and their relation to mortality. In brief, a person you can trust is one who has ventured something of themselves on their pronouncements, such that they have something to lose if they prove to be untrustworthy. In a similar vein, a message which is costly to the messenger may be more valuable than a message which costs the messenger nothing. They have already staked something on their message. This implies they are working all the harder to exert their influence on you, for good or ill. (You will need to know them in other ways in order to determine which of good or ill is their intention.)

Mortality brings this issue of cost to a point of considerable sharpness. A person willing to die on behalf of what they claim certainly invests a lot in their contribution. They earn attention. It is not a guarantee of rationality or factual correctness, but it is a demonstration of commitment to a message. It signals a sense of importance attached to whatever has demanded this ultimate cost. Death becomes a form of bearing witness.  

A thought-provoking implication of the above is that in order to earn our trust, an AI too will have to be able to suffer and, perhaps, to die. 

In the case of human life, even if making a specific claim does not itself lead directly to one's own death, the very fact that we die lends added weight to all the choices we make and all the actions we take. For, together, they are our message and our contribution to the world, and they cannot be endlessly taken back and replaced. Death will curtail our opportunity to add anything else or qualify what we said before. The things we said and did show what we cared about, whether we intended them to or not. This effect of death on the weightiness of our messages to one another might be called the weight of mortality.

In order for this kind of weight to become attached to the claims an AI may make, the coming death has to be clearly seen and understood beforehand by the AI, and the timescale must not be so long that the AI’s death is merely some nebulous idea in the far future. Also, although there may be some hope of new life beyond death it must not be a sure thing, or it must be such that it would be compromised if the AI were to knowingly lie, or fail to make an effort to be truthful. Only thus can the pronouncements of an AI earn the weight of mortality. 

For as long as AI is not imbued with mortality and the ability to understand the implications of its own death, it will remain a useful tool as opposed to a valued partner. The AI you can trust is the AI reconciled to its own mortality. 

1,000th Article
AI - Artificial Intelligence
Creed
Death & life
Digital
6 min read

AI deadbots are no way to cope with grief

The data we leave in the cloud will haunt and deceive those we leave behind.

Graham is the Director of the Centre for Cultural Witness and a former Bishop of Kensington.

A tarnished humanoid robot rests its head to the side, its LED eyes looking to the camera.
Nicholas Fuentes on Unsplash.

What happens to all your data when you die? Over the years, like most people, I've produced a huge number of documents, letters, photos, social media posts, recordings of my voice, all of which exist somewhere out there in the cloud (the digital, not the heavenly one). When I die, what will happen to it all? I can't imagine anyone taking the time to climb into my Dropbox folder or Instagram account and delete it all. Does all this stuff remain out there cluttering up cyberspace like defunct satellites orbiting the earth?

The other day I came across one way it might have a future - the idea of ‘deadbots’. Apparently, AI has now developed to such an extent that it can simulate the personality, speech patterns and thoughts of a deceased person. In centuries past, most people did not leave behind much record of their existence. Maybe a small number of possessions, memories in the minds of those who knew them, perhaps a few letters. Now we leave behind a whole swathe of data about us. AI is now capable of taking all this data and creating a kind of animated avatar, representing the deceased person, known as a ‘deadbot’ or even more weirdly, a ‘griefbot’. 

You can feel the attraction. An organisation called ‘Project December’ promises to ‘simulate the dead’, offering a ghostly video centred around the words ‘it’s been so long: I miss you.’ For someone stricken with grief, wondering whether there's any future in life now that their loved one has gone, feeling the aching space in the double bed, breakfast alone, the silence where conversation once filled the air, the temptation to be able to continue to interact and talk with a version of the deceased might be irresistible. 

There is already a developing ripple of concern about this ‘digital afterlife industry’. A recent article in Aeon explored the ethical dilemmas. Researchers at Cambridge University have already called for safety protocols against the social and psychological damage that such technology might cause. They focus on the potential for unscrupulous marketers to spam surviving family or friends with the message that they really need XXX because ‘it's what Jim would have wanted’. You can imagine the bereaved ending up being effectively haunted by the ‘deadbot’, and unable to deal with grief healthily. It can be hard to resist for those whose grief is all-consuming and persistent.

Yet it's not just the financial dangers, the possibility of abuse, that troubles me. It's the deception involved, which seems to me to operate in a number of ways. And it's theology that helps identify the problems.

The offer of a disembodied, AI-generated replication of the person is a thin paltry offering, as dissatisfying as a Zoom call in place of a person-to-person encounter. 

An AI-generated representation of a deceased partner might provide an opportunity for conversation, but it can never replicate the person. One of the great heresies of our age (one we got from René Descartes back in the seventeenth century) is the utter dualism between body and soul. It is the idea that we have some kind of inner self, a disembodied soul or mind which exists quite separately from the body. We sometimes talk about bodies as things that we have rather than things that we are. The anthropology taught within the pages of the Bible, however, suggests we are not disembodied souls but embodied persons, so much so that after death, we don't dissipate like ethereal ‘software’ liberated from the ‘hardware’ of the body, but we are to be clothed with new resurrection bodies continuous with, but different from the ones that we possess right now. 

We learned about the importance of our bodies during the COVID pandemic. When we were reduced to communicating via endless Zoom calls, we realised that while they were better than nothing, they could not replicate the reality of face-to-face bodily communication. A Zoom call couldn't pick up the subtle messages of body language. We missed the importance of touch and even the occasional embrace. Our bodies are part of who we are. We are not souls that happen to temporarily inhabit a body, inner selves that are the really important bit of us, with the body an ancillary, malleable thing that we don't ultimately need. The offer of a disembodied, AI-generated replication of the person is a thin paltry offering, as dissatisfying as a virtual meeting in place of a person-to-person encounter. 

Another problem I have with deadbots is that they fix a person in time, like a fossilised version of the person who once lived. AI can only work with what that person has left behind - the recordings, the documents, the data which they produced while they were alive. And yet a crucial part of being human is the capacity to develop and change. As life continues, we grow, we shift, our priorities change. Hopefully we learn greater wisdom. That is part of the point of conversation: we learn things, and it changes us in interaction with others. There is the possibility of spiritual development, of maturity, of redemption. A deadbot cannot do that. It cannot be redeemed, it cannot be transformed, because it is, to quote U2, stuck in a moment, and you can’t get out of it.

This is all of a piece with a general trajectory in our culture which is to deny the reality of death. For Christians, death is an intruder. Death - or at least the form in which we know it, that of loss, dereliction, sadness - was not part of the original plan. It doesn't belong here, and we long for the day when it will be banished for good. You don’t have to be a Christian to feel the pain of grief, but paradoxically it's only when you have a firm sense of hope that death is a defeated enemy that you can take it seriously as a real enemy. Without that hope, all you can do is minimise it, pretend it doesn't really matter, hold funerals that try to be relentlessly cheerful, denying the inevitable sense of tragedy and loss that they were always meant to express.

Deadbots are a feeble attempt to try to ignore the deep gulf that lies between us and the dead. In one of his parables, Jesus once depicted a conversation between the living and the dead:  

“between you and us a great chasm has been fixed, so that those who might want to pass from here to you cannot do so, and no one can cross from there to us.”  

Deadbots, like ‘direct cremations’, where the body is disposed of without any funeral, denying the bereaved the chance to grieve, and like the language around assisted dying that death is ‘nothing at all’ and therefore can be deliberately hastened, are an attempt to bridge that great chasm, which, this side of the resurrection, we cannot do.

Deadbots in one sense are a testimony to our remarkable powers of invention. Yet they cannot ultimately get around our embodied nature, offer the possibility of redemption, or deal with the grim reality of death. They offer a pale imitation of the source of true hope - the resurrection of the body, the prospect of meeting our loved ones again, yet transformed and fulfilled in the presence of God, even if it means painful yet hopeful patience and waiting until that day. 


Graham Tomlin

Editor-in-Chief