
We’ll learn to live with AI: here’s how

AI might just help us with life’s dilemmas, if we are responsible.

Andrew is Emeritus Professor of Nanomaterials at the University of Oxford. 

Two construction workers stand and talk with a humanoid AI colleague.
Nick Jones/Midjourney.ai

Anxiety about algorithms is nothing new. The summer of 2020 was a bad one for the public image of algorithms. ‘I am afraid your grades were almost derailed by a mutant algorithm’, the then Prime Minister told pupils at a school. No topic in higher education is more sensitive than who gets a place at which university, and the thought that unfair decisions might rest on an errant algorithm caused understandable consternation. That algorithms have been used for decades, with widespread acceptance, to cope with examination issues ranging from individual ill health to a whole school studying the wrong set text seems quietly to have slipped under the radar.

Algorithmic decision-making is not new. Go back thousands of years to Hebrew Deuteronomic law: if a man had sex with a woman who was engaged to be married to another man, then this was unconditionally a capital offence for the man. But for the woman it depended on the circumstances. If it occurred in a city, then she would be regarded as culpable, on the grounds that she should have screamed for help. But if it occurred in the open country, then she was presumed innocent, since however loudly she might have cried out there would have been no one to hear her. This is a kind of algorithmic justice: IF in city THEN woman guilty ELSE woman not guilty.  
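That branching rule can be written out literally. The function below is my own framing of the condition described above, not anything in the ancient text itself; it is only meant to show that the verdict follows mechanically from a single test:

```python
def verdict(location: str) -> dict:
    """The Deuteronomic rule as a branch: the woman's culpability
    hinges on whether a cry for help could have been heard."""
    man_guilty = True                    # the man is culpable unconditionally
    woman_guilty = location == "city"    # in the city she is presumed to have stayed silent
    return {"man": man_guilty, "woman": woman_guilty}
```

The point is simply that the outcome is determined by the condition, exactly as in a modern decision rule.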

Artificial intelligence is undergoing a transition from classification to decision-making. Broad artificial intelligence, or artificial general intelligence (AGI), in which the machines set their own goals, is the subject of gripping movies and philosophical analysis. Experts disagree about whether or when AGI will be achieved. Narrow artificial intelligence (AI) is with us now, in the form of machine learning. Where previously computers were programmed to perform a task, now they are programmed to learn to perform a task.  

We use machine learning in my laboratory in Oxford. We undertake research on solid state devices for quantum technologies such as quantum computing. We cool a device to 1/50 of a degree above absolute zero, which is colder than anywhere in the universe that we know of outside a laboratory, and put one electron into each region, which may be only 1/1000 the diameter of a hair on your head. We then have to tune up the very delicate quantum states. Even for an experienced researcher this can take several hours. Our ‘machine’ has learned how to tune our quantum devices in less than 10 minutes.  

Students in the laboratory are now very reluctant to tune devices by hand. It is as if all your life you have been washing your shirts in the bathtub with a bar of soap. It may be tedious, but it is the only way to get your shirts clean, and you do it as cheerfully as you can … until one day you acquire a washing machine, so that all you have to do is put in the shirts and some detergent, shut the door and press the switch. You come back two hours later, and your shirts are clean. You never want to go back to washing them in the bathtub with a bar of soap. And no one wants to go back to doing experiments without the machine. In my laboratory the machine decides what the next measurement will be.  

Suppose that a machine came to know my preferences better than I can articulate them myself. The best professionals can already do this in their areas of expertise, and good friends sometimes seem to know us better than we know ourselves. 

Many tasks previously reserved for humans are now done by machine learning. Passport control at international airports uses machine learning for passport recognition. An experienced immigration officer who examines one passport per minute might have seen four million faces by the end of their career. The machines were trained on fifty million faces before they were put into service. No wonder they do well.  

Extraordinary benefits are being seen in health care. There is now a growing number of diagnostic studies in which the machines outperform humans, for example, in screening ultrasound scans or radiographs. Which would you rather be diagnosed by? An established human radiologist, or a machine with demonstrated superior performance? To put it another way, would you want to be diagnosed by a machine that knew less than your doctor? Answer: ‘No!’ Well then, would you want to be diagnosed by a doctor who knew less than the machine? That’s more difficult. Perhaps the question needs to be changed. Would you prefer to be treated by a doctor without machine learning or by a doctor making wise use of machine learning?  

If we want humans to be involved in decisions involving our health, how much more in decisions involving our liberty. But are humans completely reliable and consistent? A peer-reviewed study suggested that the probability of a favourable parole decision depended on whether the judges had had their lunch. The very fact that appeals are sometimes successful provides empirical evidence that law, like any other human endeavour, involves uncertainty and fallibility. When it became apparent that in the UK there was inconsistency in sentencing for similar offences, in what the press called a postcode lottery, the Sentencing Council for England and Wales was established to promote greater transparency and consistency in sentencing. Its guidelines set out factors which judges must consider in passing sentence, and ranges of tariffs for different kinds of crime. If you like, it is another step in algorithmic sentencing. Would you want a machine that is less consistent than a judge to pass sentence? See the sequence of questions above about a doctor.

We may consider that judicial sentencing presents a special case for human involvement because it restricts an individual’s freedom. What about democracy? How should citizens decide how to vote when given the opportunity? Voter A may prioritise public services, and she may seek to identify the party (if the choices are between well identified parties) which will best promote education, health, law and order, and other services which she values. She may also have a concern for the poor and favour redistributive taxation. Voter B may have different priorities and seek simply to vote for the party which in his judgement will leave him best off. Other factors may come into play, such as the perceived trustworthiness of an individual candidate, or their ability to evoke empathy from fellow citizens.

This kind of dilemma is something machines can help with, because they are good at multi-objective optimisation. A semiconductor manufacturer might want chips that are as small as possible, and as fast as possible, and consume as little power as possible, and are as reliable as possible, and as cheap to manufacture as possible, but these requirements are in tension with one another. Techniques are becoming available to enable machines to make optimal decisions in such situations, and they may be better at them than humans. Suppose that a machine came to know my preferences better than I can articulate them myself. The best professionals can already do this in their areas of expertise, and good friends sometimes seem to know us better than we know ourselves. Suppose also that the machine was better than me at analysing which candidate if elected would be more likely to deliver the optimal combination of my preferences. Might there be something to be said for benefitting from that guidance?
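Multi-objective optimisation of this kind can be made concrete with a toy sketch. One of the simplest techniques (far from the only one, and much cruder than what industry uses) is scalarisation: normalise each competing objective, weight it by how much you care about it, and pick the candidate with the best combined score. Every design, number, and weight below is invented for illustration:

```python
# Toy weighted-sum scalarisation over competing chip-design objectives.
# All figures, weights, and design names are invented for illustration;
# real multi-objective methods (e.g. Pareto-front search) are far more
# sophisticated than this sketch.

designs = {
    "A": {"size_mm2": 80, "speed_ghz": 3.0, "power_w": 15, "cost_usd": 20},
    "B": {"size_mm2": 60, "speed_ghz": 2.5, "power_w": 10, "cost_usd": 25},
    "C": {"size_mm2": 70, "speed_ghz": 3.5, "power_w": 20, "cost_usd": 18},
}

# (metric, weight, direction): "min" means smaller is better.
objectives = [
    ("size_mm2", 1.0, "min"),
    ("speed_ghz", 2.0, "max"),
    ("power_w", 1.0, "min"),
    ("cost_usd", 0.5, "min"),
]

def score(name):
    """Normalise each metric against the best value seen, then sum with weights."""
    total = 0.0
    for key, weight, direction in objectives:
        values = [d[key] for d in designs.values()]
        if direction == "min":
            total += weight * (min(values) / designs[name][key])
        else:
            total += weight * (designs[name][key] / max(values))
    return total

best = max(designs, key=score)  # the design with the highest weighted score
```

Shift the weights (say, caring more about power than speed) and a different design wins: the machine’s ‘decision’ is only as good as the preferences encoded in the weights, which is precisely why knowing a voter’s preferences matters.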

If we get it right, the technologies of the machine learning age will provide new opportunities for Homo fidelis to promote human flourishing at its best.

By this point you may be sucking air through your intellectual teeth. You may be increasingly alarmed about machines taking decisions that should be reserved for humans. What are the sources of such unease? One may be that, at least in deep neural networks, the decisions that machines make may be only as good as the data on which they have been trained. If a machine has learned from data in which black people have an above average rate of recidivism, then black people may be disadvantaged in parole decisions taken by the machine. But this is not an area in which humans are perfect; that is why we have unconscious bias training. In the era of Black Lives Matter we scarcely need reminding that humans are not immune to prejudice.

Another source of unease may be the use to which machine learning is put for commercial and political ends. If you think that machine learning is not already being applied to you, you are probably mistaken. Almost every time you do an online search or use social media, the big data companies are harvesting your data exhaust for their own ends. Even if your phone calls and emails are secure, they still generate metadata. European legislation is better than most, and the Online Safety Act 2023 will make the use of Internet services safer for individuals in the United Kingdom. But there is a limit to what regulation can protect, and 2024 is likely to see machine learning powerfully deployed to sway voters in elections in half the world. Targeted persuasion predates AI, as Othello’s Iago knew, but machine learning has brought it to an unprecedented level of industrialisation, with some of the best minds in the world paid some of the highest salaries in the world to maximise the user’s screen time and the personalisation of commercial and political influence.  

Need it be so? In some ways advances in machine learning are acting as the canary in the mine, alerting us to fundamental questions about what humans are for, and what it means to be human. The old model of Homo economicus—rational, selfish, greedy, lazy man—has passed its sell-by date. It is being replaced by what I like to call Homo fidelis—ethical, caring, generous, energetic woman and man. For as long as AGI remains science fiction, it is up to humans to determine what values the machines are to implement. If we get it right, the technologies of the machine learning age will provide new opportunities for Homo fidelis to promote human flourishing at its best.  

Whatever the future capabilities of machines, they cannot be morally load-bearing because humans are self-aware and mortal, whereas machines are not.

Paul Collier and John Kay

Christians have been thinking about what it means to be human for two millennia, building on what came before, and so they ought to have something to contribute to how humans flourish. In It Keeps Me Seeking, my co-authors and I ask our readers to imagine that they were writing about three thousand years ago for people who knew nothing of modern genetics or psychological science about what it means to be human. ‘You are writing for a storytelling culture, and so you would probably put it in the form of a story. Let’s say you set it in a garden. The garden is pleasant, but it is also designed for character formation, and so there is work to do, and also the possibility for a hard moral choice. You want to convey that humans need social interactions (for the same reason that solitary confinement is a severe punishment), and so you try the literary thought experiment of having one solitary man and letting him encounter animals and name them. Animals can be useful and they can be good company. But ultimately no animals, not even a dog, are fully satisfactory as partners in work and companions in life. Humans need humans. An enriching component of human relationships is sex. So, the supreme gift to the solitary man in our story is companionship with an equal who is both like and unlike; a woman. It is hardly a complete account, but it is a good start. Oh, and there is one other aspect. They should be free of the shame which lies at the root of so much psychological disorder.’  

As far as it goes, would you regard such an account as complete? If not, what would you add next? You can see where this is going. To be human you need to be responsible. So, you let the humans face the moral choice. You can even include an element of disinformation to make the choice harder. And then when it goes horribly wrong you let them discover that they are responsible for their actions, and that blaming one another does not help. If you have God in your story, then (uniquely for the humans) responsibility consists of accountability to God. This is how human distinctiveness was addressed in early Jewish thought. As an early articulation that to be human means to be responsible, the story of Adam and Eve is unsurpassed.  

In Greed is Dead, Paul Collier and John Kay reference Citizenship in a Networked Age as brilliantly elucidating the issue of morally pertinent decision-taking. They write, ‘Whatever the future capabilities of machines, they cannot be morally load-bearing because humans are self-aware and mortal, whereas machines are not. Machines can be used not only to complement and enhance human decision-making, but for bad: search optimisation has already morphed into influence-optimisation. We must keep morally pertinent decision-taking firmly in the domain of humanity.’  

The nature of humanity includes responsibility—for wise use of machine learning and much more besides. Accountability is part of life for people with widely differing philosophical, ethical, and religious world views. If we are willing to concede that accountability follows responsibility, then we should next ask, ‘Accountable to whom?’ 


After the Salt Path revelations I’m liking it even more

We edit our own reality by the stories we tell ourselves

Roger is a Baptist minister, author and Senior Research Fellow at Spurgeon’s College in London. 

A newspaper front page shows its title and a falling sea bird
How The Observer broke the story.

The Observer held nothing back in its exposé headline:

“The real Salt Path: how a blockbuster book and film were spun from lies, deceit and desperation”

The truth behind the summer’s feel-good movie and the reputation of author Raynor Winn lie in tatters, shredded by the revelations unearthed by relentless investigative journalism.

The uplifting story of how a couple face financial ruin, homelessness and a terminal illness by walking the South West Coast Path has been an inspiration for many who’ve either read the book or seen the film, or both. The story works because it reflects back to us the life we know, the lives we live. And when you add the seaside of Somerset, Devon, Cornwall and Dorset, what’s not to love?

But now it needs to be seen in an altogether different light.

The article beneath the headline was thoroughly researched, carefully constructed and uncompromising in the allegations implied by the discoveries, observations and commentary of its narrative.

“… not her real name”

“… she was a thief … embezzled the money”

“… arrested and interviewed by the police”

“… five county court judgements”

“… they owned land in France”

“… nine neurologists … were sceptical”

Point by point the back story of the Salt Path is pulled apart.

First, Raynor and Moth Winn are not the “real”, “legal” names of Sally and Tim Walker.

Second, The Observer uncovered that the couple had money troubles for reasons other than the failed business investment they had claimed. Rather, as a part-time bookkeeper for an estate agent and property surveyor, Sally was accused of syphoning off £64,000 from the company’s accounts. Concerning which, it was reported that she was arrested and interviewed by the police.

Third, it was mounting debts from settling the matter with her former employer, alongside other debts, that actually led to the repossession of their home and their resulting homelessness. Not the failed business venture.

Fourth, they weren’t actually homeless as they owned a property in France, near Bordeaux. While it was in a state of disrepair and not habitable, they had previously stayed on site in a caravan.

And then finally, in a revelation that undermined the very heart of the story of their journey together, medical experts observed that it was extremely doubtful that Moth had suffered from corticobasal degeneration (CBD) for 18 years. The journalist had consulted nine neurologists, and this was the reported consensus. Not only were Moth’s presenting symptoms not what were expected, the normal life expectancy with the condition was tragically short at six to eight years.

Pulling the various strands of its investigation together, The Observer thumps the tub about the importance of ‘truth’. It is not acceptable to be mis-sold an idea of truth where important passages of the book are invented. There are both “… sins of omission and commission”:

“The story, no doubt, has elements of truth, but it also misrepresents who they were, how they started out on their journey and the financial circumstances that provided the backdrop.”

However, life is complicated and there are always two sides to a story.

In a response posted to her website Raynor Winn answers each of the accusations in turn. Amid the storm of vitriol and threat unleashed online by the article, she protests that, “… [it] is grotesquely unfair, highly misleading and seeks to systematically pick apart my life.”

Most distressing has been how Moth has been traumatised by the suggestion his diagnosis was made up. Along with her online statement Winn has posted appropriately redacted letters from the neurologists treating Moth that confirm his diagnosis and the narrative of the book.

As for the charges of embezzlement, she does concede that there were difficulties with a former employer. Allegations were made to the police, and she was questioned about them. However, no charges were brought, and a settlement was reached that included her paying back money on a “non-admissions basis”.

“Any mistakes I made during the years in that office, I deeply regret, and I am truly sorry.” Raynor Winn

This, however, was not the failed business deal that lay behind their financial difficulties and which triggered their homelessness and the Salt Path story.

Winn reports that the property in France is an “uninhabitable ruin in a bramble patch” with its own, unrelated, back story. When they did explore selling it at the height of their difficulties, a local French agent valued it as virtually worthless and saw marketing it as pointless.

Ultimately, they chose not to declare themselves bankrupt and simply wipe out their debts. Rather, they made an agreement with their creditors for minimal repayments. The success of the book has enabled all their debts to be cleared.

Which leaves the implicit accusation of not being who they said they were, of hiding behind pseudonyms and not owning their “real”, “legal” names. She explains that the reason Sally Ann and Tim Walker are Raynor and Moth Winn is really quite straightforward.

In the early years of their relationship she told Moth how much she disliked being called Sally Ann and would have preferred the family name, Raynor. Moth called her Ray from that point on. Winn is her maiden name. As for Moth, well his name is Timothy, get it? Friends and family use the names interchangeably, Sal/Ray, Tim/Moth.

Having read the book and seen the film earlier this summer I was particularly taken with The Salt Path: the humanity of their story, the journey they’d been on, and the insights into a life well-lived that it offered.

Goodness, which one of us has never made a mistake, a bad call, or a wrong choice, “through weakness, through ignorance or through our own deliberate fault”?

When The Observer’s bombshell broke, my heart fell. Moral high horses were being mounted and outrage expressed. Raynor Winn was being cancelled, literally cancelled.

She pulled out of her forthcoming Saltlines tour, which would have seen her perform readings from her books alongside the music of the Gigspanner Big Band during a string of UK dates. There were also calls for Penguin to cancel her next book, On Winter Hill, set for publication in October.

But do you know what? On reflection, after the revelations about the Salt Path story I’m liking it even more. And for exactly the same reasons I liked it before. Because it reflects back to us the life we know, the lives we live.

For a start, life is messy. Sometimes it’s even murky, full of misunderstanding, misinterpretation and constructed narratives. Goodness, which one of us has never made a mistake, a bad call, or a wrong choice, “through weakness, through ignorance or through our own deliberate fault”? Skeletons and cupboards come to mind.

Then, on the back of that, we all fashion the story of our lives. Whether it’s curating our online presence with the images we post to social media, or the anecdotes we share and the face we present to those who are part of our day-to-day lives. The pull is always towards a version that shows us in the best light.

In fact, it can even go right down to the stories we tell about ourselves, to ourselves. The interpretation of what has happened to us and why. Interpreting how much of our experience is down to what has been done to us or is the fruit of our own responsibility.

Now, I may not want to go as far as University of Sussex Professor of Neuroscience, Anil Seth, whose books, articles and TED Talks see us living in a kind of ‘controlled hallucination’: an interpreted version of reality constructed and calibrated by our brains out of our experience. But there is no doubt in my mind that we edit our own version of reality by the stories we tell ourselves and each other.

This is how things are. This is what it means to be human. Some bits are edited in, others edited out. Some experiences we can interpret in one way, while others might view them very differently from where they stand.

When we feel the temptation to write someone off because of what they’ve done, we do well to reflect on our own experience. Then we may well be grateful that we haven’t been cancelled because of our past indiscretions. As the old saying goes, “There, but for the grace of God, go I.”

I’m reminded of how Jesus handled himself in such circumstances. When a self-righteous crowd was eager to rush to judgement on a woman’s flawed sexual choices, Jesus encouraged those who were without fault to be the first to act. Slowly they all realised what he was saying and backed down.

For myself, I have always found the prayer of confession to be profoundly helpful. It keeps us grounded in the reality of our own experience and should caution us about cancelling others and writing them off.

Almighty God, our heavenly Father,

we have sinned against you

and against our neighbour

in thought and word and deed,

through negligence, through weakness,

through our own deliberate fault.

We are truly sorry

and repent of all our sins.

For the sake of your Son Jesus Christ,

who died for us,

forgive us all that is past

and grant that we may serve you in newness of life

to the glory of your name.

Amen.

For our skeletons there is forgiveness.

For what lies ahead, we have the possibilities of starting over.
