Article
Character
Comment
Friendship
Virtues
4 min read

As algorithms divide us, who should we be loyal to?

An ethicist’s answer shows we need courage and wisdom too.

Isaac is a PhD candidate in Theology at Durham University and is preparing for priesthood in the Church of England.

Three people sitting looking out over a viewpoint are silhouetted against the sky.
Priscilla Du Preez on Unsplash.

What is loyalty? As we plunge into this new year of 2025 it seems as pressing a question as ever. The war in Ukraine rumbles on, a fresh Labour government continues to struggle with public opinion, and America returns to the unpredictable rule of the first president in its history to be a convicted felon. The algorithms of social media continue to segregate and amplify different audiences into ever more closed feedback loops and echo chambers. This may bolster loyalty to a point of view, but estrange us further from our friends and neighbours whose loyalties lie elsewhere. All of these and many other cases highlight the conflict of loyalties in our society and wider world. What is even clearer is that if we are to make peace, cultivate love for enemies, and pursue the common good, then perhaps the most in-demand virtue of 2025, at the top of every wish list, might just be loyalty.

But what really is loyalty?  

I was struck by a persuasive answer given by Dr Tony Milligan, research fellow in philosophical ethics at King’s College London, during his appearance on a recent episode of The Moral Maze on BBC Radio 4 that asked ‘is loyalty a virtue or a vice?’ He said loyalty is, “Sharing another person’s commitments and the willingness to go through various kinds of adversity in order to pursue those commitments and to further them.” Under cross-examination and asked if loyalty is then an absolute virtue he responded, “I think that it’s absolute in the sense that we absolutely need to have it, that it’s basic to the human condition and not optional.” His second interrogator, Giles Fraser, then suggested a ‘high doctrine of mates’. In this doctrine you are loyal to your mates in all circumstances, even if they are ‘wrong ’uns’. Dr Milligan’s response, when asked how he would characterise this ‘doctrine of mates’ position, was fascinating: “Addiction.” Fraser then asked if that addiction could be love. “It’s a case of love, and we don’t get to choose the people that we love. We find ourselves in the predicament and then try to make the best of it…I love my wife Susanne, I’ve been with her 31 years, and it’s love, and it’s also addiction. I just can’t envisage a world in which I would be without her.” This framed Dr Milligan’s final powerful point: love, and the loyalty which love entails, gives us our sense of value.

I can bear witness to the truth of Dr Milligan’s intertwining of love and loyalty. Last autumn I became a father for the second time. My love for my eldest is so great that there was a real question: ‘if my love for my eldest is so total, so all-encompassing, how can I possibly love a second as much?’ This question melted away as I gazed into her screwed-up face, moments after she entered the world. I am completely dedicated to ensuring that she flourishes and I would “go through various kinds of adversity in order to pursue” her flourishing. As Dr Milligan also said, loyalty “is basic to the human condition and not optional.” Of course, how this total and non-zero-sum loyalty of love to both of my children actually works in practice requires thoughtful negotiation on my part. If one wants to go to the park and the other wants to go to the swimming pool I cannot split in two and do both things at once. For finite human beings, loyalty requires wisdom to live in the middle of a messy network of demands and desires, of the preferences and needs of others.

If loyalty is then one thing, it is the willingness to recognise that we are tied to other people, whether we like it or not. Cain’s question to God, when God came looking for Abel, is still pertinent: “Am I my brother’s keeper?” Perhaps the greatest disloyalty is the implied ‘no’ in Cain’s rhetorical question. In denying that he is bound to his brother he is disloyal not only to Abel, but to himself because he denies his own humanity and isolates himself from the humanity of other people. If we isolate ourselves, having loyalty only to ourselves, we lose the joy of being fully human. If we simply kill those we dislike, whether literally (in war or murder) or metaphorically (‘unfriending’, cancelling, pretending they do not exist), then we follow Cain. Loyalty, as the tie that binds us to the messiness of the real world where people vehemently disagree all the time, requires not only wisdom then but courage also. It takes courage to commit to one person in marriage. It takes courage to raise a child. It takes courage to continue to talk with and to love those with whom you deeply disagree.  

When practising our 2025 New Year’s resolutions let us make sure that, amongst the commitments to get back to the gym and take up that new hobby, we remember to practise loyalty. Loyalty not only to those we love, but to those we might come to love. Let us be wise enough and brave enough to be fettered to those with whom we disagree, loyal to the humanity that binds us together.

Join with us - Behind the Seen

Seen & Unseen is free for everyone and is made possible through the generosity of our amazing community of supporters.

If you’re enjoying Seen & Unseen, would you consider making a gift towards our work?

Alongside other benefits (book discounts etc.), you’ll receive an extra fortnightly email from me sharing what I’m reading and my reflections on the ideas that are shaping our times.

Graham Tomlin

Editor-in-Chief

Article
AI
Culture
Generosity
Psychology
Virtues
5 min read

AI will never codify the unruly instructions that make us human

The many exceptions to the rules are what make us human.
A desperate man wearing 18th-century clothes holds candlesticks.
Jean Valjean and the candlesticks, in Les Misérables.

On average, students with surnames beginning with the letters A-E get higher grades than those who come later in the alphabet. Good-looking people get more favourable divorce settlements through the courts, and higher payouts for damages. Tall people are more likely to get promoted than their shorter colleagues, and judges give out harsher sentences just before lunch. It is clear that human judgement is problematically biased – sometimes with significant consequences. 

But imagine you were on the receiving end of such treatment, and wanted to appeal your overly harsh sentence, your unfair court settlement or your punitive essay grade: is Artificial Intelligence the answer? Is AI intelligent enough to review the evidence, consider the rules, ignore human vagaries, and issue an impartial, more sophisticated outcome?  

In many cases, the short answer is yes. Conveniently, AI can review 50 CVs, conduct 50 “chatbot” style interviews, and identify which candidates best fit the criteria for promotion. But is the short and convenient answer always what we want? In their recent publication, As If Human: Ethics and Artificial Intelligence, Nigel Shadbolt and Roger Hampson discuss research which shows that, if wrongly condemned to be shot by a military court but given one last appeal, most people would prefer to appeal in person to a human judge than have the facts of their case reviewed by an AI computer. Likewise, terminally ill patients indicate a preference for their doctors’ opinions over computer calculations on when to withdraw life-sustaining treatment, even though a computer has a higher predictive power to judge when someone’s life might be coming to an end. This preference may seem counterintuitive, but it suggests that the cold impartiality—and at times, the impenetrability—of machine logic might work for promotions, yet fails to satisfy the desire for human dignity when it comes to matters of life and death.  

In addition, Shadbolt and Hampson make the point that AI is actually much less intelligent than many of us tend to think. An AI machine can be instructed to apply certain rules to decision making and can apply those rules even in quite complex situations, but the determination of those rules can only happen in one of two ways: either the rules must be invented or predetermined by whoever programmes the machine, or the rules must be observable to a “Large Language Model” AI when it scrapes the internet to observe common and typical aspects of human behaviour.  

The former option, deciding the rules in advance, is by no means straightforward. Humans abide by a complex web of intersecting ethical codes, often slipping seamlessly between utilitarianism (what achieves the greatest good for the greatest number of people?), virtue ethics (what makes me a good person?), and theological or deontological ideas (what does God or wider society expect me to do?). This complexity, as Shadbolt and Hampson observe, means that: 

“Contemporary intellectual discourse has not even the beginnings of an agreed universal basis for notions of good and evil, or right and wrong.”  

The solution might be option two – to ask AI to do a data scrape of human behaviour and use its superior processing power to determine if there actually is some sort of universal basis to our ethical codes, perhaps one that humanity hasn’t noticed yet. For example, you might instruct a large language model AI to find 1,000,000 instances of a particular pro-social act, such as generous giving, and from that to determine a universal set of rules for what counts as generosity. This is an experiment that has not yet been done, probably because it is unlikely to yield satisfactory results. After all, what is real generosity? Isn’t the truly generous person one who makes a generous gesture even when it is not socially appropriate to do so? The rule of real generosity is that it breaks the rules.  

Generosity is not the only human virtue which defies being codified – mercy falls at exactly the same hurdle. AI can never learn to be merciful, because showing mercy involves breaking a rule without having a different rule or sufficient cause to tell it to do so. Stealing is wrong: this is a rule we almost all learn from childhood. But in the famous opening to Les Misérables, Jean Valjean, a destitute convict, steals some silverware from Bishop Myriel, who has provided him with hospitality. Valjean is soon caught by the police and faces a lifetime of imprisonment and forced labour for his crime. Yet the Bishop shows him mercy, falsely informing the police that the silverware was a gift and even adding two further candlesticks to the swag. Stealing is, objectively, still wrong, but the rule is temporarily suspended, or superseded, by the bishop’s wholly unruly act of mercy.   

Teaching his followers one day, Jesus stunned the crowd with a catalogue of unruly instructions. He said, “Give to everyone who asks of you,” and “Love your enemies” and “Do good to those who hate you.” The Gospel writers record that the crowd were amazed, astonished, even panicked! These were rules that challenged many assumptions about the “right” way to live – many of the social and religious “rules” of the day. And Jesus modelled this unruly way of life too – actively healing people on the designated day of rest, dining with social outcasts and having contact with those who had “unclean” illnesses such as leprosy. Overall, the message of Jesus was loud and clear: people matter more than rules.  

AI will never understand this, because to an AI people don’t actually exist, only rules exist. Rules can be programmed in manually or extracted from a data scrape, and one rule can be superseded by another rule, but beyond that a rule can never just be illogically or irrationally broken by a machine. Put more simply, AI can show us in a simplistic way what fairness ought to look like and can protect a judge from being punitive just because they are a bit hungry. There are many positive applications to the use of AI in overcoming humanity’s unconscious and illogical biases. But at the end of the day, only a human can look Jean Valjean in the eye and say, “Here, take these candlesticks too.”   
