
It's our mistakes that make us human

What we learn distinguishes us from tech.

Silvianne Aspray is a theologian and postdoctoral fellow at the University of Cambridge.

A man staring at a laptop grimaces and holds his hands to his head.
Francisco De Legarreta C. on Unsplash.

The distinction between technology and human beings has become blurry: AI seems able to listen, answer our questions, even respond to our feelings. It becomes increasingly easy to confuse machines with humans. In this situation, it is all the more important to ask: What makes us human, in distinction from machines? There are many answers to this question, but for now I would like to focus on just one aspect of what I think is distinctively human: As human beings, we live and learn in time.  

To be human means to be intrinsically temporal. We live in time and are oriented towards a future good. We are learning animals, and our learning is bound up with the taking of time. When we learn to know or to do something, we necessarily make mistakes, and it takes practice. But keeping in view something we desire – a future good – we keep going.  

Let’s take the example of language. We acquire language in community over time. Toddlers make all sorts of hilarious mistakes when they first try to talk, and it takes them a long time even to get single words right, let alone to form sentences. But they keep trying, and they eventually learn. The same goes for love: Knowing how to love our family or our neighbours near and far is not something we are good at instantly. It is not the sort of learning where you absorb a piece of information and then you ‘get’ it. No, we learn it over time, we imitate others, we practice, and even when we have learned, in the abstract, what it is to be loving, we keep getting it wrong. 

This, too, is part of what it means to be human: to make mistakes. Not the sort of mistakes machines make, when they classify some information wrongly, for instance, but the very human mistake of falling short of your own ideal. Of striving towards something you desire – happiness, in the broadest of terms – and yet falling short, in your actions, of that very goal. But there’s another very human thing right here: Human beings can also change. They – we – can have a change of heart, be transformed, and at some point in time, actually start to do the right thing – even against all the odds. Statistics of past behaviours do not always correctly predict future outcomes. Part of being human means that we can be transformed.  

Transformation sometimes comes suddenly, when an overwhelming, awe-inspiring experience changes somebody’s life as by a bolt of lightning. Much more commonly, though, such transformation takes time. Through taking up small practices, we can form new habits, gradually acquire virtue, and do the right thing more often than not. This is so human: We are anything but perfect. As Christians would say: We have a tendency to entangle ourselves in the mess of sin and guilt. But we also bear the image of the Holy One who made us, and by the grace and favour of that One, we are not forever stuck in the mess. We are redeemed: we are given the strength to keep trying, despite the mistakes we make, and the grace to acquire virtue and become better people over time. All of this is to say that being human means to live in time, and to learn in time. 


Now compare this to the most complex of machines. We say that AI is able to “learn”. But what does it mean to learn, for AI? Machine learning is usually categorized into supervised, unsupervised, and self-supervised learning. Supervised learning means that a model is trained for a specific task based on correctly labelled data. For instance, if a model is to predict whether a mammogram image contains a cancerous tumour, it is given many example images which are correctly classed as ‘contains cancer’ or ‘does not contain cancer’. That way, it is “taught” to recognise cancer in unlabelled mammograms. Unsupervised learning is different. Here, the system looks for patterns in the dataset it is given. It clusters and groups data without relying on predefined labels. Self-supervised learning sits between the two: Here, the system uses parts of the data itself as a kind of label – predicting, for instance, the upper half of an image from its lower half, or the next word in a given text. This is the predominant paradigm for how contemporary large-scale AI models “learn”.  
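The self-supervised idea can be made concrete with a deliberately tiny sketch – a hypothetical toy, nothing like a real large-scale model. The text itself supplies the “labels”: each word is paired with the word that follows it, and the “model” can only ever reproduce patterns already present in its past data.

```python
from collections import Counter, defaultdict

# Toy "training data": a short text. No human labelling is needed --
# the next word in the sequence serves as the label for the current word.
corpus = "we live and learn in time and we learn to love in time".split()

# "Training": count, for each word, which words have followed it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Predict the follower seen most often in the past data."""
    counts = following.get(word)
    if not counts:
        return None  # the model has no past to draw on
    return counts.most_common(1)[0][0]

# Having only ever seen "in time", the model predicts "time" after "in".
print(predict_next("in"))
```

The point of the toy is the limitation it exposes: whatever the model “predicts” about the future is, by construction, a repetition of frequencies in its past data.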

In each case, AI’s learning is necessarily based on data sets. Learning happens with reference to pre-given data, and in that sense with reference to the past. It may look like such models can consider the future, and have future goals, but only insofar as they have picked up patterns in past data, which they use to predict future patterns – as if the future were nothing but a repetition of the past.  

So this is a real difference between human beings and machines: Human beings can, and do, strive toward a future good. Machines, by contrast, are always oriented towards the past of the data that was fed to them. Human beings are intrinsically temporal beings, whereas machines are defined by temporality only in a very limited sense: it takes time to upload data, and for the data to be processed, for instance. Time, for machines, is nothing but an extension of the past, whereas for human beings, it is an invitation to and the possibility for being transformed for the sake of a future good. We, human beings, are intrinsically temporal, living in time towards a future good – something machines do not do.  

In the face of new technologies we need a sharpened sense of the strange and awe-inspiring species that is the human race, and we must cultivate a new sense of wonder about humanity itself.  


AI deadbots are no way to cope with grief

The data we leave in the cloud will haunt and deceive those we leave behind.

Graham is the Director of the Centre for Cultural Witness and a former Bishop of Kensington.

A tarnished humanoid robot rests its head to the side, its LED eyes looking to the camera.
Nicholas Fuentes on Unsplash.

What happens to all your data when you die? Over the years, like most people, I've produced a huge number of documents, letters, photos, social media posts, recordings of my voice, all of which exist somewhere out there in the cloud (the digital, not the heavenly one). When I die, what will happen to it all? I can't imagine anyone taking the time to climb into my Dropbox folder or Instagram account and delete it all. Does all this stuff remain out there, cluttering up cyberspace like defunct satellites orbiting the earth?  

The other day I came across one way it might have a future - the idea of ‘deadbots’. Apparently, AI has now developed to such an extent that it can simulate the personality, speech patterns and thoughts of a deceased person. In centuries past, most people did not leave behind much record of their existence. Maybe a small number of possessions, memories in the minds of those who knew them, perhaps a few letters. Now we leave behind a whole swathe of data about us. AI is now capable of taking all this data and creating a kind of animated avatar, representing the deceased person, known as a ‘deadbot’ or even more weirdly, a ‘griefbot’. 

You can feel the attraction. An organisation called ‘Project December’ promises to ‘simulate the dead’, offering a ghostly video centred around the words ‘it’s been so long: I miss you.’ For someone stricken with grief, wondering whether there's any future in life now that their loved one has gone, feeling the aching space in the double bed, breakfast alone, the silence where conversation once filled the air, the temptation to be able to continue to interact and talk with a version of the deceased might be irresistible. 

There is already a developing ripple of concern about this ‘digital afterlife industry’. A recent article in Aeon explored the ethical dilemmas. Researchers at Cambridge University have already called for safety protocols against the social and psychological damage that such technology might cause. They focus on the potential for unscrupulous marketers to spam surviving family or friends with the message that they really need XXX because ‘it's what Jim would have wanted’. You can imagine the bereaved ending up being effectively haunted by the ‘deadbot’, and unable to deal with grief healthily. It can be hard to resist for those whose grief is all-consuming and persistent. 

Yet it's not just the financial dangers, the possibility of abuse, that troubles me. It's the deception involved, which seems to me to operate in a number of ways. And it's theology that helps identify the problems.  


An AI-generated representation of a deceased partner might provide an opportunity for conversation, but it can never replicate the person. One of the great heresies of our age (one we got from René Descartes back in the seventeenth century) is the utter dualism between body and soul. It is the idea that we have some kind of inner self, a disembodied soul or mind which exists quite separately from the body. We sometimes talk about bodies as things that we have rather than things that we are. The anthropology taught within the pages of the Bible, however, suggests we are not disembodied souls but embodied persons, so much so that after death, we don't dissipate like ethereal ‘software’ liberated from the ‘hardware’ of the body, but we are to be clothed with new resurrection bodies continuous with, but different from the ones that we possess right now. 

We learned about the importance of our bodies during the COVID pandemic. When we were reduced to communicating via endless Zoom calls, we realised that while they were better than nothing, they could not replicate the reality of face-to-face bodily communication. A Zoom call couldn't pick up the subtle messages of body language. We missed the importance of touch and even the occasional embrace. Our bodies are part of who we are. We are not souls that happen to temporarily inhabit a body, inner selves that are the really important bit of us, with the body an ancillary, malleable thing that we don't ultimately need. The offer of a disembodied, AI-generated replication of the person is a thin paltry offering, as dissatisfying as a virtual meeting in place of a person-to-person encounter. 

Another problem I have with deadbots is that they fix a person in time, like a fossilised version of the person who once lived. AI can only work with what that person has left behind – the recordings, the documents, the data which they produced while they were alive. And yet a crucial part of being human is the capacity to develop and change. As life continues, we grow, we shift, our priorities change. Hopefully we learn greater wisdom. That is part of the point of conversation: we learn things, and it changes us in interaction with others. There is the possibility of spiritual development, of maturity, of redemption. A deadbot cannot do that. It cannot be redeemed, it cannot be transformed, because it is, to quote U2, stuck in a moment, and you can’t get out of it.  

This is all of a piece with a general trajectory in our culture which is to deny the reality of death. For Christians, death is an intruder. Death - or at least the form in which we know it, that of loss, dereliction, sadness - was not part of the original plan. It doesn't belong here, and we long for the day when one day it will be banished for good. You don’t have to be a Christian to feel the pain of grief, but paradoxically it's only when you have a firm sense of hope that death is a defeated enemy, that you can take it seriously as a real enemy. Without that hope, all you can do is minimise it, pretend it doesn't really matter, hold funerals that try to be relentlessly cheerful, denying the inevitable sense of tragedy and loss that they were always meant to express.  

Deadbots are a feeble attempt to try to ignore the deep gulf that lies between us and the dead. In one of his parables, Jesus once depicted a conversation between the living and the dead:  

“between you and us a great chasm has been fixed, so that those who might want to pass from here to you cannot do so, and no one can cross from there to us.”  

Deadbots – like ‘direct cremations’, where the body is disposed of without any funeral, denying the bereaved the chance to grieve, and like the language around assisted dying that says death is ‘nothing at all’ and can therefore be deliberately hastened – are an attempt to bridge that great chasm, which, this side of the resurrection, we cannot do. 

Deadbots in one sense are a testimony to our remarkable powers of invention. Yet they cannot ultimately get around our embodied nature, offer the possibility of redemption, or deal with the grim reality of death. They offer a pale imitation of the source of true hope - the resurrection of the body, the prospect of meeting our loved ones again, yet transformed and fulfilled in the presence of God, even if it means painful yet hopeful patience and waiting until that day. 
