1,000th Article

AI deadbots are no way to cope with grief

The data we leave in the cloud will haunt and deceive those we leave behind.

Graham is the Director of the Centre for Cultural Witness and a former Bishop of Kensington.

A tarnished humanoid robot rests its head to the side, its LED eyes looking to the camera.
Nicholas Fuentes on Unsplash.

What happens to all your data when you die? Over the years, like most people, I've produced a huge number of documents, letters, photos, social media posts and recordings of my voice, all of which exist somewhere out there in the cloud (the digital one, not the heavenly one). When I die, what will happen to it all? I can't imagine anyone taking the time to climb into my Dropbox folder or Instagram account and delete it all. Does all this stuff remain out there, cluttering up cyberspace like defunct satellites orbiting the earth?

The other day I came across one way it might have a future - the idea of ‘deadbots’. Apparently, AI has now developed to such an extent that it can simulate the personality, speech patterns and thoughts of a deceased person. In centuries past, most people did not leave behind much record of their existence: maybe a small number of possessions, memories in the minds of those who knew them, perhaps a few letters. Now we leave behind a whole swathe of data about ourselves. AI is capable of taking all this data and creating a kind of animated avatar representing the deceased person, known as a ‘deadbot’ or, even more weirdly, a ‘griefbot’.

You can feel the attraction. An organisation called ‘Project December’ promises to ‘simulate the dead’, offering a ghostly video centred around the words ‘it’s been so long: I miss you.’ For someone stricken with grief, wondering whether there's any future in life now that their loved one has gone, feeling the aching space in the double bed, breakfast alone, the silence where conversation once filled the air, the temptation to be able to continue to interact and talk with a version of the deceased might be irresistible. 

There is already a developing ripple of concern about this ‘digital afterlife industry’. A recent article in Aeon explored the ethical dilemmas. Researchers at Cambridge University have already called for safety protocols against the social and psychological damage such technology might cause. They focus on the potential for unscrupulous marketers to spam surviving family or friends with the message that they really need XXX because ‘it's what Jim would have wanted’. You can imagine the bereaved ending up effectively haunted by the ‘deadbot’, unable to deal with grief healthily. Such a lure can be hard to resist for those whose grief is all-consuming and persistent.

Yet it's not just the financial dangers, the possibility of abuse, that troubles me. It's the deception involved, which seems to me to operate at a number of levels. And it's theology that helps identify the problems.


An AI-generated representation of a deceased partner might provide an opportunity for conversation, but it can never replicate the person. One of the great heresies of our age (one we got from René Descartes back in the seventeenth century) is the utter dualism between body and soul. It is the idea that we have some kind of inner self, a disembodied soul or mind which exists quite separately from the body. We sometimes talk about bodies as things that we have rather than things that we are. The anthropology taught within the pages of the Bible, however, suggests we are not disembodied souls but embodied persons, so much so that after death, we don't dissipate like ethereal ‘software’ liberated from the ‘hardware’ of the body, but are to be clothed with new resurrection bodies continuous with, but different from, the ones that we possess right now.

We learned about the importance of our bodies during the COVID pandemic. When we were reduced to communicating via endless Zoom calls, we realised that while they were better than nothing, they could not replicate the reality of face-to-face bodily communication. A Zoom call couldn't pick up the subtle messages of body language. We missed the importance of touch and even the occasional embrace. Our bodies are part of who we are. We are not souls that happen to temporarily inhabit a body, inner selves that are the really important bit of us, with the body an ancillary, malleable thing that we don't ultimately need. The offer of a disembodied, AI-generated replication of the person is a thin paltry offering, as dissatisfying as a virtual meeting in place of a person-to-person encounter. 

Another problem I have with deadbots is that they fix a person in time, like a fossilised version of the person who once lived. AI can only work with what that person has left behind - the recordings, the documents, the data they produced while they were alive. And yet a crucial part of being human is the capacity to develop and change. As life continues, we grow, we shift, our priorities change. Hopefully we learn greater wisdom. That is part of the point of conversation: we learn things, and interaction with others changes us. There is the possibility of spiritual development, of maturity, of redemption. A deadbot can do none of that. It cannot be redeemed, it cannot be transformed, because it is, to quote U2, stuck in a moment, and you can’t get out of it.

This is all of a piece with a general trajectory in our culture which is to deny the reality of death. For Christians, death is an intruder. Death - or at least the form in which we know it, that of loss, dereliction, sadness - was not part of the original plan. It doesn't belong here, and we long for the day when it will be banished for good. You don’t have to be a Christian to feel the pain of grief, but paradoxically it's only when you have a firm sense of hope that death is a defeated enemy that you can take it seriously as a real enemy. Without that hope, all you can do is minimise it, pretend it doesn't really matter, hold funerals that try to be relentlessly cheerful, denying the inevitable sense of tragedy and loss that they were always meant to express.

Deadbots are a feeble attempt to ignore the deep gulf that lies between us and the dead. In one of his parables, Jesus depicted a conversation between the living and the dead:

“between you and us a great chasm has been fixed, so that those who might want to pass from here to you cannot do so, and no one can cross from there to us.”  

Deadbots - like ‘direct cremations’, where the body is disposed of without any funeral, denying the bereaved the chance to grieve, or like the language around assisted dying which suggests death is ‘nothing at all’ and can therefore be deliberately hastened - are an attempt to bridge that great chasm, which, this side of the resurrection, we cannot do.

Deadbots in one sense are a testimony to our remarkable powers of invention. Yet they cannot ultimately get around our embodied nature, offer the possibility of redemption, or deal with the grim reality of death. They offer a pale imitation of the source of true hope - the resurrection of the body, the prospect of meeting our loved ones again, yet transformed and fulfilled in the presence of God, even if it means painful yet hopeful patience and waiting until that day. 

Celebrate with us - we're 2!

Since March 2023, our readers have enjoyed over 1,000 articles. All for free. This is made possible through the generosity of our amazing community of supporters.

If you’re enjoying Seen & Unseen, would you consider making a gift towards our work?

Do so by joining Behind The Seen. Alongside other benefits, you’ll receive an extra fortnightly email from me sharing my reading and reflections on the ideas that are shaping our times.

Graham Tomlin

Editor-in-Chief


AI will never codify the unruly instructions that make us human

The many exceptions to the rules are what make us human.
A desperate man wearing 18th century clothes holds candlesticks
Jean Valjean and the candlesticks, in Les Misérables.

On average, students with surnames beginning in the letters A-E get higher grades than those who come later in the alphabet. Good-looking people get more favourable divorce settlements through the courts, and higher payouts for damages. Tall people are more likely to get promoted than their shorter colleagues, and judges give out harsher sentences just before lunch. It is clear that human judgement is problematically biased – sometimes with significant consequences.

But imagine you were on the receiving end of such treatment, and wanted to appeal your overly harsh sentence, your unfair court settlement or your punitive essay grade: is Artificial Intelligence the answer? Is AI intelligent enough to review the evidence, consider the rules, ignore human vagaries, and issue an impartial, more sophisticated outcome?  

In many cases, the short answer is yes. Conveniently, AI can review 50 CVs, conduct 50 “chatbot”-style interviews, and identify which candidates best fit the criteria for promotion. But is the short and convenient answer always what we want? In their recent publication, As If Human: Ethics and Artificial Intelligence, Nigel Shadbolt and Roger Hampson discuss research showing that, if wrongly condemned to be shot by a military court but given one last appeal, most people would prefer to appeal in person to a human judge than have the facts of their case reviewed by an AI computer. Likewise, terminally ill patients indicate a preference for doctors’ opinions over computer calculations on when to withdraw life-sustaining treatment, even though a computer has a higher predictive power to judge when someone’s life might be coming to an end. This preference may seem counterintuitive, but it suggests that the cold impartiality - and at times the impenetrability - of machine logic might work for promotions, yet fails to satisfy the desire for human dignity when it comes to matters of life and death.

In addition, Shadbolt and Hampson make the point that AI is actually much less intelligent than many of us tend to think. An AI machine can be instructed to apply certain rules to decision making and can apply those rules even in quite complex situations, but the determination of those rules can only happen in one of two ways: either the rules must be invented or predetermined by whoever programmes the machine, or the rules must be observable to a “Large Language Model” AI when it scrapes the internet to observe common and typical aspects of human behaviour.  

The former option, deciding the rules in advance, is by no means straightforward. Humans abide by a complex web of intersecting ethical codes, often slipping seamlessly between utilitarianism (what achieves the most good for the most people?), virtue ethics (what makes me a good person?), and theological or deontological ideas (what does God or wider society expect me to do?). This complexity, as Shadbolt and Hampson observe, means that:

“Contemporary intellectual discourse has not even the beginnings of an agreed universal basis for notions of good and evil, or right and wrong.”  

The solution might be option two – to ask AI to do a data scrape of human behaviour and use its superior processing power to determine if there actually is some sort of universal basis to our ethical codes, perhaps one that humanity hasn’t noticed yet. For example, you might instruct a large language model AI to find 1,000,000 instances of a particular pro-social act, such as generous giving, and from that to determine a universal set of rules for what counts as generosity. This is an experiment that has not yet been done, probably because it is unlikely to yield satisfactory results. After all, what is real generosity? Isn’t the truly generous person one who makes a generous gesture even when it is not socially appropriate to do so? The rule of real generosity is that it breaks the rules.  

Generosity is not the only human virtue which defies being codified – mercy falls at exactly the same hurdle. AI can never learn to be merciful, because showing mercy involves breaking a rule without having a different rule or sufficient cause to tell it to do so. Stealing is wrong - this is a rule we almost all learn from childhood. But in the famous opening to Les Misérables, Jean Valjean, a destitute convict, steals some silverware from Bishop Myriel, who has provided him with hospitality. Valjean is soon caught by the police and faces a lifetime of imprisonment and forced labour for his crime. Yet the Bishop shows him mercy, falsely informing the police that the silverware was a gift and even adding two further candlesticks to the swag. Stealing is, objectively, still wrong, but the rule is temporarily suspended, or superseded, by the bishop’s wholly unruly act of mercy.

Teaching his followers one day, Jesus stunned the crowd with a catalogue of unruly instructions. He said, “Give to everyone who asks of you,” and “Love your enemies” and “Do good to those who hate you.” The Gospel writers record that the crowd were amazed, astonished, even panicked! These were rules that challenged many assumptions about the “right” way to live – many of the social and religious “rules” of the day. And Jesus modelled this unruly way of life too – actively healing people on the designated day of rest, dining with social outcasts and having contact with those who had “unclean” illnesses such as leprosy. Overall, the message of Jesus was loud and clear: people matter more than rules.

AI will never understand this, because to an AI people don’t actually exist, only rules exist. Rules can be programmed in manually or extracted from a data scrape, and one rule can be superseded by another rule, but beyond that a rule can never just be illogically or irrationally broken by a machine. Put more simply, AI can show us in a simplistic way what fairness ought to look like and can protect a judge from being punitive just because they are a bit hungry. There are many positive applications to the use of AI in overcoming humanity’s unconscious and illogical biases. But at the end of the day, only a human can look Jean Valjean in the eye and say, “Here, take these candlesticks too.”   
