
We're pretty useless really

We all fail. Not just Southgate, Biden and Sunak.

George is a visiting fellow at the London School of Economics and an Anglican priest.

Southgate contemplates.

The Book of Heroic Failures, published by Stephen Pile in 1979, records a story of the Welsh Dean of St Asaph, Daniel Price, in the late 17th century. Contemporary biographer John Aubrey noted that Price was a “mighty Pontificall proud man.” 

So proud that he declined to parade on foot outside his cathedral, but rather rode a mare in full vestments, reading from the Book of Common Prayer. Aubrey, with precise economy, describes what happened next: “A stallion happened to break loose, and smelled the mare, and ran and leapt her, and held the reverend dean all the time so hard in his embraces, that he could not get off till the horse had done his business.”

Unsurprisingly, Aubrey records that the good Dean “would never ride in procession afterwards.” He had clearly learned a lesson in humility. And one that would not have been taught had his ride passed with pompous dignity. 

A question arises, pertinent for events today, as to whether we learn more from the indignity of failure than from the fruits of success. I’d like to suggest that we do, especially about the nature of our human condition. 


No one doubts that had England won the European Football Championship it would have been the crowning adornment to manager Gareth Southgate’s career. England failed to do that, though we failed less than any other team (Spain doesn’t count because they didn’t fail at all). Now that Southgate has resigned and has time to reflect at leisure, perhaps he will learn at least as much about himself, and possibly very much more, than if he had raised the trophy.

US President Joe Biden would have had an altogether greater reckoning to face had he lost the election to Donald Trump than had he won it. Now that he has quit the race, arguably he has much more to learn from reflecting on his life and achievements. The Conservative Party has many lessons to learn about its 14 years in power from its abject defeat at the polls. Indeed, many parliamentary Tories believe that defeat was a requisite event for its reformation to proceed.

None of this is to suggest that failure of itself is a virtue. Nor is it just a morality tale that enjoins us to meet triumph and disaster and “treat those two impostors just the same”. A failed marriage, or failing health, or moral failures of a wider variety, cause destructive pain and trauma. 

But it is to acknowledge that failure is part of the natural human condition. We’re in the territory of a flawed, fallen humanity here, one that theologians call postlapsarian, that is, fallen from an ideal of perfection, as dramatically portrayed in the Garden of Eden. Humans are pretty useless really and our default position is error and falling short.


This isn’t, or should not be, depressing. At least not for people of faith, because it reflects the nature of humanity. Failure, if you will, is a gift of God in a fallen creation. We learn more from our failures than our successes, which is either a biological determinism in evolution or a means through which we strive for a new perfection. There’s a version of that they may be reciting to the England football team right now. 

Christian faith sometimes concentrates too much on triumph over death and the idea of a heavenly kingdom where all is well, at the expense of recognising the reality of our world, in which most things are very far indeed from well.

We might recognise it in a congregational tendency to skip over Good Friday to Easter morning. If we do so, we neglect to notice what an abject failure the insurgent Jesus movement was on its short journey of break-up from Jerusalem to Calvary. It, literally, dies. 

Yes, we know what happens next. Or do we? The first witnesses to it certainly struggle to explain it in a manner that we might comprehend. But, in any event, loss of innocence, injustice and failure meet in unholy alliance at Golgotha. 

The theologian John Macquarrie asks what happens if we feel compelled to draw the bottom line under the cross: “Would that destroy the whole fabric of faith in Christ? I do not think so, for the two great distinctive Christian affirmations would remain untouched – God is love, and God is revealed in Jesus Christ. These two affirmations would stand even if there were no mysteries beyond Calvary.” 

No, our story doesn’t end there. But we can acknowledge that this is where we live in this world, at the foot of that cross. As the 17th-century French philosopher Blaise Pascal put it, the Christ “will be in agony until the end of the world.” 

Let’s not be too miserable, because we do have the “mysteries beyond Calvary”. And let’s celebrate our earthly successes. But let’s also learn to embrace our failures and receive them as a gift, from football to politics. 


AI will never codify the unruly instructions that make us human

The many exceptions to the rules are what make us human.
Jean Valjean and the candlesticks, in Les Misérables.

On average, students with surnames beginning with the letters A-E get higher grades than those who come later in the alphabet. Good-looking people get more favourable divorce settlements through the courts, and higher payouts for damages. Tall people are more likely to get promoted than their shorter colleagues, and judges give out harsher sentences just before lunch. It is clear that human judgement is problematically biased – sometimes with significant consequences.

But imagine you were on the receiving end of such treatment, and wanted to appeal your overly harsh sentence, your unfair court settlement or your punitive essay grade: is Artificial Intelligence the answer? Is AI intelligent enough to review the evidence, consider the rules, ignore human vagaries, and issue an impartial, more sophisticated outcome?  

In many cases, the short answer is yes. Conveniently, AI can review 50 CVs, conduct 50 “chatbot”-style interviews, and identify which candidates best fit the criteria for promotion. But is the short and convenient answer always what we want? In their recent publication, As If Human: Ethics and Artificial Intelligence, Nigel Shadbolt and Roger Hampson discuss research which shows that, if wrongly condemned to be shot by a military court but given one last appeal, most people would rather appeal in person to a human judge than have the facts of their case reviewed by an AI computer. Likewise, terminally ill patients indicate a preference for doctors’ opinions over computer calculations on when to withdraw life-sustaining treatment, even though a computer has greater predictive power to judge when someone’s life might be coming to an end. This preference may seem counterintuitive, but apparently the cold impartiality, and at times the impenetrability, of machine logic may work for promotions yet fails to satisfy the desire for human dignity when it comes to matters of life and death.

In addition, Shadbolt and Hampson make the point that AI is actually much less intelligent than many of us tend to think. An AI machine can be instructed to apply certain rules to decision-making and can apply those rules even in quite complex situations, but the determination of those rules can only happen in one of two ways: either the rules must be invented or predetermined by whoever programs the machine, or the rules must be observable to a “Large Language Model” AI when it scrapes the internet to observe common and typical aspects of human behaviour.

The former option, deciding the rules in advance, is by no means straightforward. Humans abide by a complex web of intersecting ethical codes, often slipping seamlessly between utilitarianism (what achieves the greatest good for the greatest number of people?), virtue ethics (what makes me a good person?), and theological or deontological ideas (what does God or wider society expect me to do?). This complexity, as Shadbolt and Hampson observe, means that:

“Contemporary intellectual discourse has not even the beginnings of an agreed universal basis for notions of good and evil, or right and wrong.”  

The solution might be option two – to ask AI to do a data scrape of human behaviour and use its superior processing power to determine if there actually is some sort of universal basis to our ethical codes, perhaps one that humanity hasn’t noticed yet. For example, you might instruct a large language model AI to find 1,000,000 instances of a particular pro-social act, such as generous giving, and from that to determine a universal set of rules for what counts as generosity. This is an experiment that has not yet been done, probably because it is unlikely to yield satisfactory results. After all, what is real generosity? Isn’t the truly generous person one who makes a generous gesture even when it is not socially appropriate to do so? The rule of real generosity is that it breaks the rules.  

Generosity is not the only human virtue which defies being codified – mercy falls at exactly the same hurdle. AI can never learn to be merciful, because showing mercy involves breaking a rule without having a different rule or sufficient cause to tell it to do so. Stealing is wrong: this is a rule almost all of us learn from childhood. But in the famous opening to Les Misérables, Jean Valjean, a destitute convict, steals some silverware from Bishop Myriel, who has provided him with hospitality. Valjean is soon caught by the police and faces a lifetime of imprisonment and forced labour for his crime. Yet the Bishop shows him mercy, falsely informing the police that the silverware was a gift and even adding two further candlesticks to the swag. Stealing is, objectively, still wrong, but the rule is temporarily suspended, or superseded, by the bishop’s wholly unruly act of mercy.

Teaching his followers one day, Jesus stunned the crowd with a catalogue of unruly instructions. He said, “Give to everyone who asks of you,” and “Love your enemies” and “Do good to those who hate you.” The Gospel writers record that the crowd were amazed, astonished, even panicked! These were rules that challenged many assumptions about the “right” way to live – many of the social and religious “rules” of the day. And Jesus modelled this unruly way of life too – actively healing people on the designated day of rest, dining with social outcasts and having contact with those who had “unclean” illnesses such as leprosy. Overall, the message of Jesus was loud and clear: people matter more than rules.

AI will never understand this, because to an AI people don’t actually exist; only rules exist. Rules can be programmed in manually or extracted from a data scrape, and one rule can be superseded by another rule, but beyond that a rule can never just be illogically or irrationally broken by a machine. Put more simply, AI can show us in a simplistic way what fairness ought to look like and can protect a judge from being punitive just because they are a bit hungry. There are many positive applications to the use of AI in overcoming humanity’s unconscious and illogical biases. But at the end of the day, only a human can look Jean Valjean in the eye and say, “Here, take these candlesticks too.”
