
AI deadbots are no way to cope with grief

The data we leave in the cloud will haunt and deceive those we leave behind.

Graham is the Director of the Centre for Cultural Witness and a former Bishop of Kensington.


What happens to all your data when you die? Over the years, like most people, I've produced a huge number of documents, letters, photos, social media posts and recordings of my voice, all of which exist somewhere out there in the cloud (the digital one, not the heavenly one). When I die, what will happen to it all? I can't imagine anyone taking the time to climb into my Dropbox folder or Instagram account and delete it all. Does all this stuff remain out there, cluttering up cyberspace like defunct satellites orbiting the earth?

The other day I came across one way it might have a future - the idea of ‘deadbots’. Apparently, AI has now developed to such an extent that it can simulate the personality, speech patterns and thoughts of a deceased person. In centuries past, most people did not leave behind much record of their existence - perhaps a small number of possessions, memories in the minds of those who knew them, a few letters. Now we leave behind a whole swathe of data about ourselves. AI is capable of taking all this data and creating a kind of animated avatar representing the deceased person, known as a ‘deadbot’ or, even more weirdly, a ‘griefbot’.

You can feel the attraction. An organisation called ‘Project December’ promises to ‘simulate the dead’, offering a ghostly video centred around the words ‘it's been so long: I miss you.’ For someone stricken with grief - wondering whether there's any future in life now that their loved one has gone, feeling the aching space in the double bed, breakfast alone, the silence where conversation once filled the air - the temptation to continue to interact and talk with a version of the deceased might be irresistible.

There is already a developing ripple of concern about this ‘digital afterlife industry’. A recent article in Aeon explored the ethical dilemmas. Researchers at Cambridge University have already called for safety protocols against the social and psychological damage that such technology might cause. They focus on the potential for unscrupulous marketers to spam surviving family or friends with the message that they really need XXX because ‘it's what Jim would have wanted’. You can imagine the bereaved ending up effectively haunted by the ‘deadbot’, unable to deal with grief healthily. For those whose grief is all-consuming and persistent, such a presence can be hard to resist.

Yet it's not just the financial dangers, the possibility of abuse, that troubles me. It's the deception involved, which seems to me to operate in a number of ways. And it's theology that helps identify the problems.


An AI-generated representation of a deceased partner might provide an opportunity for conversation, but it can never replicate the person. One of the great heresies of our age (one we got from René Descartes back in the seventeenth century) is the utter dualism between body and soul. It is the idea that we have some kind of inner self, a disembodied soul or mind which exists quite separately from the body. We sometimes talk about bodies as things that we have rather than things that we are. The anthropology taught within the pages of the Bible, however, suggests we are not disembodied souls but embodied persons - so much so that after death we don't dissipate like ethereal ‘software’ liberated from the ‘hardware’ of the body, but are to be clothed with new resurrection bodies, continuous with, but different from, the ones we possess right now.

We learned about the importance of our bodies during the COVID pandemic. When we were reduced to communicating via endless Zoom calls, we realised that while they were better than nothing, they could not replicate the reality of face-to-face bodily communication. A Zoom call couldn't pick up the subtle messages of body language. We missed the importance of touch and even the occasional embrace. Our bodies are part of who we are. We are not souls that happen to inhabit a body temporarily, inner selves that are the really important bit of us, with the body an ancillary, malleable thing that we don't ultimately need. The offer of a disembodied, AI-generated replication of the person is a thin, paltry offering, as dissatisfying as a virtual meeting in place of a person-to-person encounter.

Another problem I have with deadbots is that they fix a person in time, like a fossilised version of the person who once lived. AI can only work with what that person has left behind - the recordings, the documents, the data which they produced while they were alive. And yet a crucial part of being human is the capacity to develop and change. As life continues, we grow, we shift, our priorities change. Hopefully we learn greater wisdom. That is part of the point of conversation: we learn things, and it changes us in interaction with others. There is the possibility of spiritual development, of maturity, of redemption. A deadbot cannot do that. It cannot be redeemed, it cannot be transformed, because it is, to quote U2, stuck in a moment, and you can't get out of it.

This is all of a piece with a general trajectory in our culture: the denial of the reality of death. For Christians, death is an intruder. Death - or at least the form in which we know it, that of loss, dereliction, sadness - was not part of the original plan. It doesn't belong here, and we long for the day when it will be banished for good. You don't have to be a Christian to feel the pain of grief, but paradoxically it's only when you have a firm sense of hope that death is a defeated enemy that you can take it seriously as a real enemy. Without that hope, all you can do is minimise it, pretend it doesn't really matter, hold funerals that try to be relentlessly cheerful, denying the inevitable sense of tragedy and loss that they were always meant to express.

Deadbots are a feeble attempt to ignore the deep gulf that lies between us and the dead. In one of his parables, Jesus depicted a conversation between the living and the dead:

“between you and us a great chasm has been fixed, so that those who might want to pass from here to you cannot do so, and no one can cross from there to us.”  

Deadbots - like ‘direct cremations’, where the body is disposed of without any funeral, denying the bereaved the chance to grieve, or like the language around assisted dying which suggests that death is ‘nothing at all’ and can therefore be deliberately hastened - are an attempt to bridge that great chasm, which, this side of the resurrection, we cannot do.

Deadbots are in one sense a testimony to our remarkable powers of invention. Yet they cannot ultimately get around our embodied nature, offer the possibility of redemption, or deal with the grim reality of death. They offer a pale imitation of the source of true hope - the resurrection of the body, the prospect of meeting our loved ones again, transformed and fulfilled in the presence of God - even if that means painful yet hopeful waiting until that day.



Whether it's AI or us, it's OK to be ignorant

Our search for answers begins by recognising that we don’t have them.

Simon Walters is Curate at Holy Trinity Huddersfield.


When was the last time you admitted you didn't know something? I don't say it as much as I ought to. I've certainly felt the consequences of admitting ignorance – of being ridiculed for being entirely unaware of a pop culture reference, of being found out for not paying as close attention to what my partner was saying as she expected. In a hyper-connected age, when the wealth of human knowledge is at our fingertips, ignorance can hardly be viewed as a virtue.

A recent study on the development of artificial intelligence holds out more hope for the value of admitting our ignorance than we might have previously imagined. Despite widespread hype and fearmongering about the perils of AI, our current models are in many ways developed much as an animal is trained. An AI system such as ChatGPT might have access to unimaginable amounts of information, but it requires training by humans on what information is valuable or not, whether it has appropriately understood the request it has received, and whether its answer is correct. The idea is that human feedback helps the AI to hone its model: positive feedback for correct answers and negative feedback for incorrect ones, so that it keeps whatever method led to positive feedback and changes whatever method led to negative feedback. It really isn't that far away from how animals are trained.
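To picture that loop, here is a deliberately crude sketch in Python. Everything in it - the answer styles, the scores, the numbers - is my own assumption for the sake of illustration, not how ChatGPT or any real system is built; it only shows the ‘keep whatever got rewarded’ dynamic.

```python
import random

# Toy sketch of training-by-feedback. The answer styles, scores and
# numbers below are invented for illustration; this is not how any
# real system such as ChatGPT is actually trained.
weights = {"cautious": 1.0, "detailed": 1.0, "rambling": 1.0}

def pick_style() -> str:
    # The "model" favours styles that have earned more reward so far.
    styles = list(weights)
    return random.choices(styles, weights=[weights[s] for s in styles])[0]

def human_feedback(style: str) -> float:
    # Hypothetical trainer preferences: reward detail, penalise rambling.
    return {"cautious": 0.0, "detailed": 0.5, "rambling": -0.5}[style]

for _ in range(500):
    style = pick_style()
    # Keep what earns positive feedback, move away from what doesn't.
    weights[style] = max(0.1, weights[style] + human_feedback(style))

print(weights)  # "detailed" dominates; "rambling" has been trained out
```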

However, a problem has emerged. AI systems have become adept at giving coherent, convincing-sounding answers that are entirely incorrect. How has this happened?


In digging into the training method for AI, the researchers found that the humans training the AI flagged answers of “I don't know” as unsatisfactory. On one level this makes sense. The whole purpose of these systems is to provide answers, after all. But rather than prompting the AI to go back and rethink its data, this instead led it to develop increasingly convincing answers that were not true whatsoever, to the point where the human supervisors didn't flag sufficiently convincing answers as wrong because they themselves didn't realise that they were wrong. The result is that “the more difficult the question and the more advanced model you use, the more likely you are to get well-packaged, plausible nonsense as your answer.”
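The same kind of toy sketch shows why that training choice backfires. The figures here are invented assumptions (the study gives no such numbers): if ‘I don't know’ is always penalised and only some wrong answers are caught, confident guessing simply earns more reward.

```python
import random

# Toy illustration of the flaw described above. The two answer types
# and the 30% catch rate are my own assumptions, not the study's data.
CHANCE_ERROR_IS_SPOTTED = 0.3

def trainer_score(answer: str) -> int:
    if answer == "I don't know":
        return -1              # honesty is always flagged unsatisfactory
    if random.random() < CHANCE_ERROR_IS_SPOTTED:
        return -1              # a wrong answer the trainer happened to catch
    return +1                  # plausible nonsense slips through, rewarded

totals = {"I don't know": 0, "confident guess": 0}
for _ in range(1000):
    answer = random.choice(list(totals))
    totals[answer] += trainer_score(answer)

print(totals)  # confident guessing ends up far better rewarded than honesty
```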

Uncovering some of what is going on in AI systems dispels both the fervent hype that artificial intelligence might be our saviour, and the deep fear that it might be our societal downfall. This is a tool; it is good at some tasks, and less good at others. And, like all tools, it does not have an intrinsic morality. Whether it is used for good or ill depends on the approach of the humans who use it.

But this study also uncovers our strained relationship with ignorance. Problems arise in the answers given by systems like ChatGPT because a convincing answer is valued more than admitting ignorance, even if the convincing answer is not at all correct. Because the AI has been trained to avoid admitting it doesn’t know something, all of its answers are less reliable, even the ones that are actually correct.  

This is not a problem limited to artificial intelligence. I had a friend who seemed incapable of admitting that he didn't know something; whenever he was corrected by someone else, he would make it sound as if his first answer had been the correct one all along. I don't know how aware he was that he did this, but the result was that I didn't particularly trust whatever he said to be correct. Paradoxically, had he admitted his ignorance more readily, I would have believed him to be less ignorant.

It is strange that admitting ignorance is so avoided. After all, it is in many ways our default state. No one faults a baby or a child for not knowing things. If anything, we expect ignorance to be a fuel for curiosity. Our search for answers begins in the recognition that we don’t have them. And in an age where approximately 500 hours of video is uploaded to YouTube every minute, the sum of what we don’t know must by necessity be vastly greater than all that we do know. What any one of us can know is only a small fraction of all there is to know. 


One of the gifts of Christian theology is an ability to recognise what it is that makes us human. One of those things is the fact that any created thing is, by definition, limited. God alone can be described by the ‘omnis’: he is omnipotent, omnipresent, and omniscient. There is no limit to his power, his presence, or his knowledge. The distinction between creator and creation means that created things have limits to their power, presence, and knowledge. We cannot do whatever we want. We cannot be everywhere at the same time. And we cannot know everything there is to be known.

Projecting infinite knowledge is essentially claiming to be God. Admitting our ignorance is therefore merely recognising our nature as created beings, acknowledging to one another that we are not God and therefore cannot know everything. But, crucially, admitting we do not know everything is not the same as saying that we do not know anything. Our God-given nature is one of discovery and learning. I sometimes like to imagine God's delight in our discovery of some previously unknown facet of his creation, as he gets to share with us in all that he has made. Perhaps what really matters is what we do with our ignorance. Will we simply remain satisfied not to know, or will it turn us outwards to delight in the new things that lie behind every corner?

For the developers of ChatGPT and the like, there is also a reminder here that we ought not to expect AI to take on the attributes of God. AI used well in the hands of humans may yet do extraordinary things for us, but it will not truly be able to do everything, be everywhere, or know everything. Perhaps if it were trained to say ‘I don't know’ a little more, we might all learn a little more about the nature of the world God has made.