
It's our mistakes that make us human

What we learn distinguishes us from tech.

Silvianne Aspray is a theologian and postdoctoral fellow at the University of Cambridge.

A man staring at a laptop grimaces and holds his hands to his head.
Francisco De Legarreta C. on Unsplash.

The distinction between technology and human beings has become blurry: AI seems to be able to listen, answer our questions, even respond to our feelings. It becomes ever easier to confuse machines with humans, which makes it all the more important to ask: what makes us human, in distinction from machines? There are many answers to this question, but for now I would like to focus on just one aspect of what I think is distinctively human: as human beings, we live and learn in time.

To be human means to be intrinsically temporal. We live in time and are oriented towards a future good. We are learning animals, and our learning is bound up with the taking of time. When we learn to know or to do something, we necessarily make mistakes, and we need practice. But keeping in view something we desire – a future good – we keep going.

Let’s take the example of language. We acquire language in community over time. Toddlers make all sorts of hilarious mistakes when they first try to talk, and it takes them a long time even to get single words right, let alone to form sentences. But they keep trying, and they eventually learn. The same goes for love: knowing how to love our family or our neighbours near and far is not something we are good at instantly. It is not the sort of learning where you absorb a piece of information and then you ‘get’ it. No, we learn it over time, we imitate others, we practise, and even when we have learned, in the abstract, what it is to be loving, we keep getting it wrong.

This, too, is part of what it means to be human: to make mistakes. Not the sort of mistakes machines make, when they classify some information wrongly, for instance, but the very human mistake of falling short of your own ideal. Of striving towards something you desire – happiness, in the broadest of terms – and yet falling short, in your actions, of that very goal. But there’s another very human thing right here: human beings can also change. They – we – can have a change of heart, be transformed, and at some point in time, actually start to do the right thing – even against all the odds. Statistics of past behaviours do not always correctly predict future outcomes. Part of being human means that we can be transformed.

Transformation sometimes comes suddenly, when an overwhelming, awe-inspiring experience changes somebody’s life as by a bolt of lightning. Much more commonly, though, such transformation takes time. Through taking up small practices, we can form new habits, gradually acquire virtue, and do the right thing more often than not. This is so human: we are anything but perfect. As Christians would say: we have a tendency to entangle ourselves in the mess of sin and guilt. But we also bear the image of the Holy One who made us, and by the grace and favour of that One, we are not forever stuck in the mess. We are redeemed: given the strength to keep trying despite the mistakes we make, and given the grace to acquire virtue and become better people over time. All of this to say that being human means to live in time, and to learn in time.

Now compare this to the most complex of machines. We say that AI is able to “learn”. But what does it mean to learn, for AI? Machine learning is usually categorised into supervised, unsupervised, and self-supervised learning. Supervised learning means that a model is trained for a specific task on correctly labelled data. For instance, if a model is to predict whether a mammogram image contains a cancerous tumour, it is given many example images which are correctly labelled as ‘contains cancer’ or ‘does not contain cancer’. That way, it is “taught” to recognise cancer in unlabelled mammograms. Unsupervised learning is different. Here, the system looks for patterns in the dataset it is given. It clusters and groups data without relying on predefined labels. Self-supervised learning combines elements of both: the system uses parts of the data itself as a kind of label – predicting, for instance, the upper half of an image from its lower half, or the next word in a given text. This is the predominant paradigm for how contemporary large-scale AI models “learn”.
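To make the next-word idea concrete, here is a minimal sketch (my illustration, not the author’s) of self-supervised learning in Python: a toy bigram model in which each word of a text serves as the label for the word before it. The corpus and the predict_next helper are invented for illustration; real large language models operate at vastly greater scale, but on the same principle.

```python
from collections import Counter, defaultdict

# A toy "self-supervised" learner: the data supplies its own labels.
# Each word in the corpus is the training label for the word preceding it.
corpus = "we live and learn in time and we learn to love in time".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation most frequently seen in the past data."""
    if word not in following:
        return None  # the model has nothing to say beyond its past
    return following[word].most_common(1)[0][0]

print(predict_next("in"))  # -> 'time', the only continuation ever observed
print(predict_next("we"))  # -> 'live' or 'learn', both seen once before
```

Notice what the sketch makes visible: the model’s ‘future’ is whatever its past data most often contained, which is precisely the limitation drawn out below.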

In each case, AI’s learning is necessarily based on datasets. Learning happens with reference to pre-given data, and in that sense with reference to the past. It may look as though such models can consider the future, and have future goals, but they do so only insofar as they have picked up patterns in past data, which they use to predict future patterns – as if the future were nothing but a repetition of the past.

So this is a real difference between human beings and machines: human beings can, and do, strive toward a future good. Machines, by contrast, are always oriented towards the past of the data that was fed to them. Human beings are intrinsically temporal beings, whereas machines are defined by temporality only in a very limited sense: it takes time to upload data and for the data to be processed, for instance. Time, for machines, is nothing but an extension of the past, whereas for human beings, it is an invitation to and the possibility of being transformed for the sake of a future good. We, human beings, are intrinsically temporal, living in time towards a future good – something machines do not do.

In the face of new technologies, we need to sharpen our sense of the strange and awe-inspiring species that is the human race, and to cultivate a new sense of wonder about humanity itself.


Assisted dying’s problems are unsolvable

The rhetoric about keeping people safe from coercion rings hollow.

Jamie Gillies is a commentator on politics and culture.

Members of a parliamentary committee sit at a curving table, in front of which a video screen shows other participants.
A parliamentary committee scrutinises the bill.
Parliament TV.

One in five people given six months to live by an NHS doctor are still alive three years later, data from the Department for Work and Pensions shows. This is good news for these individuals, and bad news for ‘assisted dying’ campaigners. Two ‘assisted dying’ Bills are being considered by UK parliamentarians at present, one at Westminster and the other at the Scottish Parliament. And both rely on accurate prognosis as a ‘safeguard’ – they seek to cover people with terminal illnesses who are not expected to recover.

An obvious problem with this approach is the fact, evidenced above, that doctors cannot be sure how a patient’s condition is going to develop. Doctors try their best to gauge how much time a person has left, but they often get prognosis wrong. People can go on to live months and even years longer than estimated. They can even make a complete recovery. This happened to a man I knew who was diagnosed with terminal cancer and told he had six months left but went on to live a further twelve years. Prognosis is far from an exact science. 

All of this raises the disturbing thought that if the UK ‘assisted dying’ Bills become law, people will inevitably end their lives due to well-meaning but incorrect advice from doctors. Patients who believe their condition is going to deteriorate rapidly — that they may soon face very difficult experiences — will choose suicide with the help of a doctor, when in fact they would have gone on to a very different season of life. Perhaps years of invaluable time with loved ones, new births and marriages in their families, and restored relationships. 

Prognostic uncertainty is far from the only problem inherent to ‘assisted dying’, however, as critics of this practice made clear at the – now concluded – oral evidence sessions held by the committees scrutinising the UK Bills. Proponents of Kim Leadbeater’s Terminally Ill Adults (End of Life) Bill and Liam McArthur’s Assisted Dying for Terminally Ill Adults (Scotland) Bill have claimed that their proposals will usher in ‘safe’ laws, but statements by experts show this rhetoric to be hollow. These Bills, like others before them, are beset by unsolvable problems.

Coercion 

Take, for example, the issue of coercion. People who understand coercive control know that it is an insidious crime that’s hard to detect. Consequently, there are few prosecutions. Doctors are not trained to identify foul play, and even if they were, these busy professionals with dozens if not hundreds of patients could hardly be counted on to spot every case. People would fall through the cracks. The CEO of Hourglass, a charity that works to prevent the abuse of older people, told MPs on the committee overseeing Kim Leadbeater’s Bill that “coercion is underplayed significantly” in cases, and stressed that it takes place behind closed doors.

There is also nothing in either UK Bill that would rule out people acting on internal pressure to opt for assisted death. In evidence to the Scottish Parliament’s Health, Social Care and Sport Committee last month, Dr Gordon MacDonald, CEO of Care Not Killing, said: “You also have to consider the autonomy of other people who might feel pressured into assisted dying or feel burdensome. Having the option available would add to that burden and pressure.” 

What legal clause could possibly remove this threat? Some people would feel an obligation to ‘make way’ in order to avoid inheritance money being spent on personal care. Some would die due to the emotional strain they feel they are putting on their loved ones. Should our society really legislate for this situation? As campaigners have noted, it is likely that a ‘right to die’ will be seen as a ‘duty to die’ by some. Paving the way for this would surely be a moral failure. 

Inequality 

Even parliamentarians who support assisted suicide in principle ought to recognise that people will not approach the option of an ‘assisted death’ on an equal footing. This is another unsolvable problem. A middle-class citizen who has a strong family support network and enough savings to pay for care may view assisted death as needless, or a ‘last resort’. A person grappling with poverty, social isolation, and insufficient healthcare or disability support would approach it very differently. This person’s ‘choice’ would be shaped by a dearth of support.

As disability studies scholar Dr Miro Griffiths told the Scottish Parliament committee last month, “many communities facing injustice will be presented with this as a choice, but it will seem like a path they have to go down due to the inequalities they face”. Assisted suicide will compound existing disparities in the worst way: people will remove themselves from society after losing hope that society will remove the inequalities they face.

Politicians should also assess the claim that assisted deaths are “compassionate”. The rhetoric of campaigners vying for a change in the law has led many to believe that it is a “good death” – a “gentle goodnight” compared to the agony of a prolonged natural death from terminal illness. However, senior palliative medics underline the fact that assisted deaths can be accompanied by distressing complications. They can also take wildly different amounts of time: one hour; several hours; even days. Many people would not consider a prolonged death by drug overdose, watched by anguished family members, to be compassionate.

Suicide prevention 

It is very important to consider the moral danger involved in changing our societal approach to suicide. Assisted suicide violates the fundamental principle behind suicide prevention – that every life is inherently valuable, equal in value, and deserving of protection. It creates a two-tier society where some lives are seen as not worth living, and the value of human life is seen as merely extrinsic and conditional. This approach offers a much lower view of human dignity than the one we have historically held, which has benefited our society so much.

Professor Allan House, a psychiatrist who appeared before the Westminster committee that’s considering Kim Leadbeater’s Bill, aptly described the danger of taking this step: “We’d have to change our national suicide prevention strategy, because at the moment it includes identifying suicidal thoughts in people with severe physical illness as something that merits intervention – and that intervention is not an intervention to help people proceed to suicide.”

Professor House expressed concern that this would “change both the medical and societal approach to suicide prevention in general”, adding: “There is no evidence that introducing this sort of legislation reduces what we might call ‘unassisted suicide’.” He also noted that in the last ten years in the State of Oregon – a jurisdiction often held up as a model by ‘assisted dying’ campaigners – “the number of people going through the assisted dying programme has gone up five hundred percent, and the number of suicides have gone up twenty per cent”.

The evidence of various experts demonstrates that problems associated with assisted suicide are unsolvable. And this practice does not provide a true recognition of human dignity. Instead of changing the law, UK politicians must double down on existing, life-affirming responses to the suffering that accompanies serious illness. The progress we have made in areas like palliative medicine, and the talent and technology available to us in 2025, make another path forwards available to leaders if they choose to take it. I pray they will.
