What AI needs to learn about dying and why it will save it

Those programming truthfulness into AI can learn a lot from mortality.

Andrew Steane has been Professor of Physics at the University of Oxford since 2002. He is the author of Faithful to Science: The Role of Science in Religion.

An angel of death lays a hand on a humanoid robot that has died amid a data centre.
A digital memento mori.
Nick Jones/midjourney.ai

Google got itself into some unusual hot water recently when its Gemini generative AI software started putting out images that were not just implausible but downright unethical. The CEO Sundar Pichai has taken the situation in hand and I am sure it will improve. But before this episode it was already clear that currently available chat-bots, while impressive, are capable of generating misleading or fantastical responses, and in fact they do this a lot. How to manage this?

Let’s use the initials ‘AI’ for artificial intelligence, leaving it open whether or not the term is entirely appropriate for the transformer and large language model (LLM) methods currently available. The problem is that the LLM approach causes chat-bots to generate both reasonable and well-supported statements and images and also unsupported and fantastical (delusory and factually incorrect) ones, without giving the human user any guidance as to which is which. The LLMs, as developed to date, have not been programmed in such a way as to pay attention to this issue. They are subject to the age-old problem of computer programming: garbage in, garbage out.

If, as a society, we advocate for greater attention to truthfulness in the outputs of AI, then software companies and programmers will try to bring it about. It might involve, for example, greater investment in electronic authentication methods. An image or document will have to have, embedded in its digital code, extra information serving to authenticate it by some agreed and hard-to-forge method. The 2002 science fiction film Minority Report includes an example: the name of a person accused of a ‘pre-crime’ (in the terminology of the film) is inscribed on a wooden ball, using the unique cellular structure of a given piece of hardwood as a form of data substrate that is near impossible to duplicate.

It is clear that a major issue in the future use of AI by humans will be the issue of trust and reasonable belief. On what basis will we be able to trust what AI asserts? If we are unable to check the reasoning process in a result claimed to be rational, how will we be able to tell that it was in fact well-reasoned? If we only have an AI-generated output as evidence of something having happened in the past, how will we know whether it is factually correct?

Among the strategies that suggest themselves is the use of several independent AIs. If they are indeed independent and all propose the same answer to some matter of reasoning or of fact, then there is a prima facie case for increasing our degree of trust in the output. This will give rise to the meta-question: how can we tell that a given set of AIs is in fact independent? Perhaps they were all trained on a common faulty data set. Or perhaps they were able to communicate with each other and thus influence each other.

The questions we face with AI thus come close to some of those we face when dealing with one another as humans. We know humans in general are capable of both ignorance and deliberate deception. We manage this by building up degrees of trust based on whether or not people show behaviours that suggest they are trustworthy. This also involves the ability to recognize unique individuals over time, so that a case for trustworthiness can be built up over a sequence of observations. We also need to get a sense of one another's character in more general ways, so that we can tell if someone is showing a change in behaviour that might signal a change in their degree of trustworthiness. 

Issues of trust and of reasonable belief are very much grist to the mill of theology. The existing theological literature may have much that can be drawn upon to help us in this area. An item which strikes me as particularly noteworthy is the connection between suffering, loss, and the earning of trust, and its relation to mortality. In brief, a person you can trust is one who has ventured something of themselves on their pronouncements, such that they have something to lose if they prove to be untrustworthy. In a similar vein, a message which is costly to the messenger may be more valuable than a message which costs the messenger nothing. They have already staked something on their message. This implies they are working all the harder to exert their influence on you, for good or ill. (You will need to know them in other ways in order to determine which of good or ill is their intention.)

Mortality brings this issue of cost to a point of considerable sharpness. A person willing to die on behalf of what they claim certainly invests a lot in their contribution. They earn attention. It is not a guarantee of rationality or factual correctness, but it is a demonstration of commitment to a message. It signals a sense of importance attached to whatever has demanded this ultimate cost. Death becomes a form of bearing witness.  

A thought-provoking implication of the above is that in order to earn our trust, an AI too will have to be able to suffer and, perhaps, to die. 

In the case of human life, even if making a specific claim does not itself lead directly to one’s own death, the very fact that we die lends added weight to all the choices we make and all the actions we take. For, together, they are our message and our contribution to the world, and they cannot be endlessly taken back and replaced. Death will curtail our opportunity to add anything else or qualify what we said before. The things we said and did show what we cared about, whether we intended them to or not. This effect of death on the weightiness of our messages to one another might be called the weight of mortality.

In order for this kind of weight to become attached to the claims an AI may make, the coming death has to be clearly seen and understood beforehand by the AI, and the timescale must not be so long that the AI’s death is merely some nebulous idea in the far future. Also, although there may be some hope of new life beyond death, it must not be a sure thing, or it must be such that it would be compromised if the AI were to knowingly lie, or fail to make an effort to be truthful. Only thus can the pronouncements of an AI earn the weight of mortality.

For as long as AI is not imbued with mortality and the ability to understand the implications of its own death, it will remain a useful tool as opposed to a valued partner. The AI you can trust is the AI reconciled to its own mortality. 


Deceit is integral to success in Destination X

Travel and trickery make for a miserable journey
A composite image shows a map of Europe with the Destination X contestants’ pictures above.
BBC.

Like me, you may have recently been watching Destination X, where 13 contestants compete to win £100,000 by guessing where the coach they are travelling on has stopped. Blocked from seeing out of the windows and given just a few clues to their locations, the contestants have to work out where they are. Similar to The Traitors, it tries to give reality TV a certain respectability while also providing the gossipy drama that underpins the format.

There are opportunities to earn extra clues, with contestants competing against each other to receive them. Only some of the competitors are allowed to view the extra clues. This secret knowledge quickly causes thirteen pretty nice contestants to mistrust, lie, suspect, accuse, and keep secrets. After three new players are added, there is a clear divide between the ‘OGs’ and the rest. It reminded me of Lord of the Flies, with alliances, rivalries, and judgements of players’ usefulness taking scarily little time to flourish.

The breaking of societal expectations to be truthful and reliable and to work for the common good is perhaps the appeal of these shows. The Judeo-Christian Ten Commandments still underpin the Western world, and lying, greed, and selfishness are all still denounced as wrong by mainstream ethics. There is an enormous amount of talk in Destination X, as there is in The Traitors, about ‘playing the game’: legitimising the breaking of normal behaviour in order to win the competition. We watch on, enjoying the chance to wonder how we would manage in a world where lying, cheating, and manipulating are expected and encouraged by the rules of the game.

The thing is, breaking these rules seems to make everybody so miserable. In the first episode, Deborah won a big clue, chose to share it with only one teammate, and was so burdened by the guilty secret that she lost the first location test and left the game immediately. In another episode, some OGs win a challenge and choose to deliberately misinform the others, including the rest of their gang. When the disinformation is revealed, and directly causes the exit of another OG, the sense of guilt is plain to the viewer as the others realise the deception. Time after time, players begrudge ‘the game’ for the lies they are telling, but it is their own decision to keep the secrets to themselves.

Perhaps the most striking thing is how quickly people lose track of the artifice of the game, and how integral to their reality their deceit becomes. Towards the end of the series, as the money gets closer, the contestants harden further towards each other, and deception seems to come more easily. Perhaps this is why the guilt makes them miserable: with a little encouragement, their sense of right and wrong has disintegrated into an instinct for survival.

The people who seem to be having the best time on Destination X are Daren and Claire, perhaps the two players most willing to trust their colleagues and least inclined to lie to them. Both do better in the competition than other contestants who embrace a selfish and cynical approach.

Obviously these shows are games, and the contestants exit to their normal lives and resume being nice people. But they reveal a deeper truth: living cynically does not make a person happy. Although lying, cheating, and making the most of advantages might bring wealth, success, power, fame, and so on, living selfishly only makes a person miserable.

This reveals our design as humans to be communal, selfless beings. Describing the state of humanity before evil entered the world, the first verses of the book of Genesis portray a generous care between the first humans and their world. The very first books of the law in the Old Testament continually exhort God’s people to show love to their neighbour and compassion for foreigners and the poor.

Jesus had this great phrase for those who would follow his teaching for a selfless life. He said that they would inherit ‘life to the full,’ or ‘life that is truly living.’ It was his conviction that simple acts like telling the truth, desiring others to prosper, and being generous were the way to a content and satisfied life.

But the kicker in Jesus’ teaching was not just that the person would receive a more satisfied life, but that each act would make the person more godly. These acts stack together, not only to make a life of generosity rather than selfishness that nourishes our humanity, but also to form us into better humans. It creates a virtuous circle. A good act leads to a purer heart, which leads to another good act. St Paul terms this ‘going from glory to glory’ in one of his letters, encouraging a congregation to do just that. This circle deepens the contentment in the ‘life that is truly living’ that Jesus promises: living as God created humans to live reaps the relational, communal satisfaction that God intended the human experience to contain.

It works the other way too. People who lie or cheat may seem to get ahead, but it only poisons their heart. As they become desensitised to their acts, further selfishness follows. Each act separates them further from the human experience they were designed to enjoy, and dissatisfaction follows. Often this is exacerbated by more attempts to cover the feeling with selfish ambition.

People who treat the real world as competitors treat Destination X, as a game to be won, with prizes that come at the cost of disinheriting others, may find wealth or power. But they will not find the contentment of life to the full that the way of Jesus offers and their humanity craves.

Whilst we sit at home enjoying the players’ ability to break cultural taboos and suffer the emotional consequences, we might reflect that it is better to be content than victorious yet miserable.
