
What AI needs to learn about dying and why it will save it

Those programming truthfulness can learn a lot from mortality.

Andrew Steane has been Professor of Physics at the University of Oxford since 2002. He is the author of Faithful to Science: The Role of Science in Religion.

An angel of death lays a hand on a humanoid robot that has died amid a data centre.
A digital memento mori.
Nick Jones/midjourney.ai

Google got itself into some unusual hot water recently when its Gemini generative AI software started putting out images that were not just implausible but downright unethical. The CEO Sundar Pichai has taken the situation in hand and I am sure it will improve. But before this episode it was already clear that currently available chat-bots, while impressive, are capable of generating misleading or fantastical responses, and in fact they do this a lot. How to manage this?

Let’s use the initials ‘AI’ for artificial intelligence, leaving it open whether or not the term is entirely appropriate for the transformer and large language model (LLM) methods currently available. The problem is that the LLM approach causes chat-bots to generate both reasonable and well-supported statements and images, and also unsupported and fantastical (delusory and factually incorrect) statements and images, without giving the human user any guidance as to which is which. The LLMs, as developed to date, have not been programmed in such a way as to pay attention to this issue. They are subject to the age-old problem of computer programming: garbage in, garbage out.

If, as a society, we advocate for greater attention to truthfulness in the outputs of AI, then software companies and programmers will try to bring it about. It might involve, for example, greater investment in electronic authentication methods. An image or document will have to have, embedded in its digital code, extra information serving to authenticate it by some agreed and hard-to-forge method. The 2002 science fiction film Minority Report included an example of this: the name of a person accused of a ‘pre-crime’ (in the terminology of the film) is inscribed on a wooden ball, so as to use the unique cellular structure of a given piece of hardwood as a data substrate that is near-impossible to duplicate.


It is clear that a major issue in the future use of AI by humans will be the issue of trust and reasonable belief. On what basis will we be able to trust what AI asserts? If we are unable to check the reasoning process in a result claimed to be rational, how will we be able to tell that it was in fact well-reasoned? If we only have an AI-generated output as evidence of something having happened in the past, how will we know whether it is factually correct?

Among the strategies that suggest themselves is the use of several independent AIs. If they are indeed independent and all propose the same answer to some matter of reasoning or of fact, then there is a prima facie case for increasing our degree of trust in the output. This will give rise to the meta-question: how can we tell that a given set of AIs are in fact independent? Perhaps they all were trained on a common faulty data set. Or perhaps they were able to communicate with each other and thus influence each other.  

The questions we face with AI thus come close to some of those we face when dealing with one another as humans. We know humans in general are capable of both ignorance and deliberate deception. We manage this by building up degrees of trust based on whether or not people show behaviours that suggest they are trustworthy. This also involves the ability to recognize unique individuals over time, so that a case for trustworthiness can be built up over a sequence of observations. We also need to get a sense of one another's character in more general ways, so that we can tell if someone is showing a change in behaviour that might signal a change in their degree of trustworthiness. 


Issues of trust and of reasonable belief are very much grist to the mill of theology. The existing theological literature may have much that can be drawn upon to help us in this area. An item which strikes me as particularly noteworthy is the connection between suffering and loss and the earning of trust, and its relation to mortality. In brief, a person you can trust is one who has ventured something of themselves on their pronouncements, such that they have something to lose if they prove to be untrustworthy. In a similar vein, a message which is costly to the messenger may be more valuable than a message which costs the messenger nothing. They have already staked something on their message. This implies they are working all the harder to exert their influence on you, for good or ill. (You will need to know them in other ways in order to determine which of good or ill is their intention.)

Mortality brings this issue of cost to a point of considerable sharpness. A person willing to die on behalf of what they claim certainly invests a lot in their contribution. They earn attention. It is not a guarantee of rationality or factual correctness, but it is a demonstration of commitment to a message. It signals a sense of importance attached to whatever has demanded this ultimate cost. Death becomes a form of bearing witness.  

A thought-provoking implication of the above is that in order to earn our trust, an AI too will have to be able to suffer and, perhaps, to die. 

In the case of human life, even if making a specific claim does not itself lead directly to one's own death, the very fact that we die lends added weight to all the choices we make and all the actions we take. For, together, they are our message and our contribution to the world, and they cannot be endlessly taken back and replaced. Death will curtail our opportunity to add anything else or qualify what we said before. The things we said and did show what we cared about, whether we intended them to or not. This effect of death on the weightiness of our messages to one another might be called the weight of mortality.

In order for this kind of weight to become attached to the claims an AI may make, the coming death has to be clearly seen and understood beforehand by the AI, and the timescale must not be so long that the AI’s death is merely some nebulous idea in the far future. Also, although there may be some hope of new life beyond death, it must not be a sure thing, or it must be such that it would be compromised if the AI were to knowingly lie, or fail to make an effort to be truthful. Only thus can the pronouncements of an AI earn the weight of mortality.

For as long as AI is not imbued with mortality and the ability to understand the implications of its own death, it will remain a useful tool as opposed to a valued partner. The AI you can trust is the AI reconciled to its own mortality. 


Taylor Swift’s new album is fine, and that might be the problem

Ego, art, and the quiet tragedy of getting everything you ever wanted

Belle is the staff writer at Seen & Unseen and co-host of its Re-enchanting podcast.

Taylor Swift, dressed as a showgirl, sips from a glass.
Taylor Swift, showgirl.
Taylorswift.com

Taylor Swift released an album last week and, from what I can see, the world seems to hate it.  

Life of a Showgirl was written and recorded while Taylor was on her two-year-long Eras Tour, hence the album’s title. She would fly to Sweden between tour dates to record with the famed producers Max Martin and Shellback. This matters. Why? Well, because this means that each song on this album has grown out of the soil of unfathomable success: record-breaking numbers and history-making impact. It’s not an exaggeration to say that the Eras Tour shifted the landscape of popular culture. Many critics have reflected on this context, citing ‘burnout’ and ‘frazzle’ as reasons why this album sits far below Taylor’s usual standard.

They implore Taylor to take a day off: put her feet up, recuperate, and re-gather her musical senses.  

Then there are the critics who seem to be directing blame toward Taylor’s obvious happiness. If you didn’t know, she’s engaged to American footballer Travis Kelce – and they, as a couple, are sickly sweet. Honestly, they’re defiantly mushy. They’re cheesy to the point of protest. They’re just happy – and, apparently, therein lies the problem. I’ve heard more than one critic quote Oscar Wilde in their takedown of Swift’s latest offering:

 ‘In this world there are only two tragedies: one is not getting what one wants, and the other is getting it’. 

This album, they say, is proof that Taylor Swift is victim to the latter kind of tragedy. She’s got everything one could ever want, and the world seems pretty agreed that her music is suffering because of it. We like to keep our artists tortured, thank you.  

For the record, I don’t hate the album. But I don’t love it either. I resonate with The Guardian’s Alexis Petridis, who writes that it simply ‘floats in one ear and out the other’. There’s nothing to hate about it, which, I guess, also means there’s very little to love about it. I’m not outraged, nor am I enamoured – and I say that gingerly, because I fear that’s the worst review of all.

So, in some ways I’m agreeing with the general consensus – Life of a Showgirl is not Taylor Swift’s best work. I don’t, however, think that either her success or her happiness is quite to blame for it. I think those are slightly lazy critiques; they’re shallow scapegoats.

I think, rather, the problem with this album is that Taylor has made herself the biggest thing within it.  

When introducing the album on Instagram, she thanked her collaborators for helping her to ‘paint this self-portrait’ – the strange thing is that this ‘self-portrait’ feels considerably less honest or authentic than her previous, more conceptual, albums.  

I’ve spent a couple of days wondering why this is and have come up with two theories.  

Firstly, we tend to be far more honest to and about ourselves when we’re able to kid ourselves into thinking that it’s not actually our own selves that we’re talking about. For example, I think of Billie Eilish’s Grammy and Academy Award-winning song – What Was I Made For? – which she wrote to accompany Greta Gerwig’s Barbie movie. In an interview, Billie explained how writing a song about a Barbie somehow allowed her the space and freedom to create the most honest, raw, and revealing song she’d ever written.  

We’re self-preserving creatures, you see.  

If we’re knowingly speaking of, writing about, painting, or in any way presenting ourselves, our ego gets in the way, preferring us to offer the world a shiny, carefully constructed façade.

Taylor, in intentionally painting a ‘self-portrait’, has unknowingly offered us less than herself.  

And now for my second theory. Every good self-portrait is actually about something bigger than its subject; it points toward something more universal than the individual reflected. I think of Frida Kahlo’s self-portraits: the way she used her hair to communicate societal expectations, or how she framed herself with wildlife, or the time she painted a necklace of thorns around her own neck – leaving an uncomfortable feeling in the pit of the beholder’s stomach as they think about the nature of pain and liberty. She painted herself, endlessly. Kahlo pointed to herself in order to point through herself – she was never the subject that she was most interested in; she was never the biggest thing in her own self-portrait.

Like I say, the problem with Taylor Swift’s okay-ish album is simply that she is the biggest thing within it. The key ingredient it’s lacking is awe; it leaves nothing to marvel at.  

And that’s rare for Taylor.  

I’ve often written that she is a Romantic in every sense of the word; concerned with the feelings and experiences that are powerful enough to knock us off our feet: big feelings, big thoughts, big truths, big questions, big mysteries, big language. These things have always been baked into her lyrics. 

This album, in comparison, feels small. It doesn’t transcend Taylor Swift’s feelings about – well, Taylor Swift. She hasn’t quite managed to point through herself; she is the sole subject of her own self-portrait.

And therein lies its OK-ness.  

Honestly? Therein lies all of our OK-ness. Taylor Swift may be anomalous in many things, but not in this: the presence of ego means that we’re all prone to self-portrait-ise ourselves. Left unchecked, we are (or at least can be) what Charles Taylor calls ‘buffered selves’: thinking of ourselves as the maker and subject of all meaning, shielded from awe and wonder.

But the best art will never flow from those who think themselves the biggest and deepest subject. Because, quite simply, we’re not.  
