
What AI needs to learn about dying and why it will save it

Those programming truthfulness into AI can learn a lot from mortality.

Andrew Steane has been Professor of Physics at the University of Oxford since 2002. He is the author of Faithful to Science: The Role of Science in Religion.

An angel of death lays a hand on a humanoid robot that has died amid a data centre.
A digital memento mori.
Nick Jones/Midjourney.ai

Google got itself into some unusual hot water recently when its Gemini generative AI software started putting out images that were not just implausible but downright unethical. CEO Sundar Pichai has taken the situation in hand, and I am sure it will improve. But before this episode it was already clear that currently available chat-bots, while impressive, are capable of generating misleading or fantastical responses, and in fact they do this a lot. How to manage this?

Let’s use the initials ‘AI’ for artificial intelligence, leaving it open whether or not the term is entirely appropriate for the transformer and large language model (LLM) methods currently available. The problem is that the LLM approach causes chat-bots to generate both reasonable, well-supported statements and images and also unsupported, fantastical (delusory and factually incorrect) ones, without giving the human user any guidance in telling which is which. The LLMs, as developed to date, have not been programmed to pay attention to this issue. They are subject to the age-old problem of computer programming: garbage in, garbage out.

If, as a society, we advocate for greater attention to truthfulness in the outputs of AI, then software companies and programmers will try to bring it about. It might involve, for example, greater investment in electronic authentication methods. An image or document would have to carry, embedded in its digital code, extra information serving to authenticate it by some agreed and hard-to-forge method. The 2002 science fiction film Minority Report included an example of this: the name of a person accused of a ‘pre-crime’ (in the terminology of the film) is inscribed on a wooden ball, so as to use the unique cellular structure of a given piece of hardwood as a form of data substrate that is near impossible to duplicate.
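For the technically curious, the hard-to-forge methods we already possess are cryptographic signatures. The sketch below is only a minimal illustration of the principle, assuming Python and the widely used cryptography library; the image bytes and key handling are hypothetical placeholders, not any scheme the industry has actually agreed upon.

# Minimal sketch: signing an image so its origin can later be verified.
# Assumes the 'cryptography' library (pip install cryptography); the
# image bytes and key handling here are hypothetical placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a key pair once; the public half is distributed.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Stand-in for the raw bytes of a real image file.
image_bytes = b"\x89PNG...pixel data..."

# At publication time, the publisher signs the image bytes.
signature = private_key.sign(image_bytes)

# Later, anyone holding the public key can confirm the image is unaltered.
try:
    public_key.verify(signature, image_bytes)
    print("Image is authentic and unmodified.")
except InvalidSignature:
    print("Image fails authentication.")

The appeal of such a design is its asymmetry: anyone can check the signature, but only the holder of the private key could have produced it, so any tampering with the image breaks the verification.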


It is clear that a major issue in the future use of AI by humans will be the issue of trust and reasonable belief. On what basis will we be able to trust what AI asserts? If we are unable to check the reasoning process behind a result claimed to be rational, how will we be able to tell that it was in fact well-reasoned? If we only have an AI-generated output as evidence of something having happened in the past, how will we know whether it is factually correct?

Among the strategies that suggest themselves is the use of several independent AIs. If they are indeed independent and all propose the same answer to some matter of reasoning or of fact, then there is a prima facie case for increasing our degree of trust in the output. This gives rise to a meta-question: how can we tell that a given set of AIs are in fact independent? Perhaps they were all trained on a common faulty data set. Or perhaps they were able to communicate with each other and thus influence each other.
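To make the strategy concrete, and leaving the meta-question unanswered, here is a minimal sketch in Python of what cross-checking several AIs might look like; the ask_model_* functions are hypothetical stand-ins for calls to genuinely separate systems.

# Minimal sketch of the cross-checking strategy: put one question to
# several supposedly independent AIs and measure their agreement.
# The ask_model_* functions are hypothetical placeholders.
from collections import Counter

def ask_model_a(question: str) -> str:
    return "Paris"  # placeholder answer

def ask_model_b(question: str) -> str:
    return "Paris"  # placeholder answer

def ask_model_c(question: str) -> str:
    return "Lyon"  # placeholder answer

def cross_check(question, models):
    # Collect one answer per model and report the most common one,
    # together with the fraction of models that agree on it.
    answers = [model(question) for model in models]
    tally = Counter(answer.strip().lower() for answer in answers)
    best, count = tally.most_common(1)[0]
    return best, count / len(answers)

answer, agreement = cross_check(
    "What is the capital of France?",
    [ask_model_a, ask_model_b, ask_model_c],
)
print(f"Consensus: {answer!r} (agreement: {agreement:.0%})")

Agreement here raises only a prima facie case for trust: if the models shared a faulty training set, or could influence one another, unanimity would prove nothing.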

The questions we face with AI thus come close to some of those we face when dealing with one another as humans. We know humans in general are capable of both ignorance and deliberate deception. We manage this by building up degrees of trust based on whether or not people show behaviours that suggest they are trustworthy. This also involves the ability to recognize unique individuals over time, so that a case for trustworthiness can be built up over a sequence of observations. We also need to get a sense of one another's character in more general ways, so that we can tell if someone is showing a change in behaviour that might signal a change in their degree of trustworthiness. 


Issues of trust and of reasonable belief are very much grist to the mill of theology. The existing theological literature may have much that can be drawn upon to help us in this area. One item strikes me as particularly noteworthy: the connection between suffering and loss and the earning of trust, and their relation to mortality. In brief, a person you can trust is one who has ventured something of themselves on their pronouncements, such that they have something to lose if they prove to be untrustworthy. In a similar vein, a message which is costly to the messenger may be more valuable than a message which costs the messenger nothing. The messenger has already staked something on the message, which implies they are working all the harder to exert their influence on you, for good or ill. (You will need to know them in other ways in order to determine which of good or ill is their intention.)

Mortality brings this issue of cost to a point of considerable sharpness. A person willing to die on behalf of what they claim certainly invests a lot in their contribution. They earn attention. It is not a guarantee of rationality or factual correctness, but it is a demonstration of commitment to a message. It signals a sense of importance attached to whatever has demanded this ultimate cost. Death becomes a form of bearing witness.  

A thought-provoking implication of the above is that in order to earn our trust, an AI too will have to be able to suffer and, perhaps, to die. 

In the case of human life, even if making a specific claim does not itself lead directly to one's own death, the very fact that we die lends added weight to all the choices we make and all the actions we take. For, together, they are our message and our contribution to the world, and they cannot be endlessly taken back and replaced. Death will curtail our opportunity to add anything else or to qualify what we said before. The things we said and did show what we cared about, whether we intended them to or not. This effect of death on the weightiness of our messages to one another might be called the weight of mortality.

In order for this kind of weight to become attached to the claims an AI may make, the coming death has to be clearly seen and understood beforehand by the AI, and the timescale must not be so long that the AI's death is merely some nebulous idea in the far future. Also, although there may be some hope of new life beyond death, it must not be a sure thing; or it must be such that it would be compromised if the AI were to knowingly lie, or fail to make an effort to be truthful. Only thus can the pronouncements of an AI earn the weight of mortality.

For as long as AI is not imbued with mortality and the ability to understand the implications of its own death, it will remain a useful tool as opposed to a valued partner. The AI you can trust is the AI reconciled to its own mortality. 


Artificial Intelligence needs these school lessons to avoid a Frankenstein fail

To learn and to learn to care are inseparable

Joel Pierce is the administrator of Christ's College, University of Aberdeen. He has recently published his first book.

A cyborg-like figure opens the door to a classroom.
AI in the classroom.
Nick Jones/Midjourney.ai

Recent worries expressed by Anthropic CEO Dario Amodei over the welfare of his chatbot bounced around my brain as I dropped my girls off for their first days at a new primary school last month. Maybe I felt an unconscious parallel. Maybe setting my daughters adrift in the swirling energy of a schoolyard containing ten times as many pupils as their previous one gave me a twinge of sympathy for a mogul launching his billion-dollar creation into the id-infused wilds of the internet. But perhaps it was more a feeling of disjuncture, the intuition that whatever information this bot would glean from trawling the web, it was fundamentally different from what my daughters would receive from that school: an education.

We often struggle to remember what it is to be educated, mistaking what can be assessed in a written or oral exam for knowledge. However, as Hannah Arendt observed over half a century ago, education is not primarily about accumulating a grab bag of information and skills, but rather about being nurtured into a love for the world, having one's desire to learn about, appreciate, and care for that world cultivated by people whom one respects and admires. As I was reminded while watching the hundreds of pupils and parents waiting for the morning bell, that sort of education only happens in places, whether at school or in the home, where children themselves feel loved and valued.

Our attachments are inextricably linked to learning. That’s why most of us can rattle off a list of our favourite teachers and describe moments when a subject took life as we suddenly saw it through their eyes. It’s why we can call to mind the gratitude we felt when a tutor coached us through a maths problem, lab project, or piano piece which we thought we would never master. Rather than being the pouring of facts into the empty bucket of our minds, our educations are each a unique story of connection, care, failure, and growth.  

I cannot add 8+5 without recalling my first-grade teacher, the impossibly ancient Mrs Coleman, gazing benevolently over her half-moon glasses, correcting me that it was 13, not 12. When I stride across the stage of my village pantomime this December, I know memories of a pint-sized me hamming it up in my third-grade teacher’s self-penned play will flit in and out of mind. I cannot write an essay without the voice of Professor Coburn, my exacting university metaphysics instructor, asking me if I am really saying what is truthful, or am resorting to fuzzy language to paper over my lack of understanding. I have been shaped by my teachers. I find myself repaying the debts accrued to them in the way I care for students now. To learn and to learn to care are inseparable. 

But what if they weren't? AI seems to open a vista in which intelligences can simply appear, trained not by humans but by recursive algorithms churning through billions of calculations on rows of servers in isolated data centres. Yes, those calculations are mostly still done on human-produced data, though the insatiable need for more has eaten through nearly everything freely available on the web and in whatever pirated databases of books and media these companies have been able to locate. But learning from human products is not the same as learning from human beings. The situation seems wholly original, wholly unimaginable.

Except it was imagined in a book written over two hundred years ago which, as Guillermo del Toro's recent attempt to capture that vision reminds us, remains incredibly relevant today. Filmmakers tend to treat the story of Frankenstein as one of glamorous transgression, and from the trailers I suspect del Toro is no different here: Dr Frankenstein as Faust, heroically testing the limits of human knowledge and human decency. But Mary Shelley's protagonist is an altogether more pathetic character, one who creates in an extended bout of obsessive experimentation and then spends the rest of the book running from any obligation to care for the creature he has made.

It is the creature who is the true hero of the novel, and he is a tragic one precisely because his intelligence, skills, and abilities are acquired outside the realm of human connection. When happenstance allows him to furtively observe lessons given within a loving but impoverished family, he imagines himself into that circle of growing love and knowledge. It is when he is disabused of this notion, when the family discovers him and is disgusted, when he learns that he is doomed to know but not be known, that he turns into a monster bent on revenge. As the Milton-quoting monster reminds Frankenstein, even Adam, though born fully grown, was nurtured by his maker. Since even this was denied the creature, what choice does he have but to take the role of Satan and tear down the world that birthed him?

Are our modern maestros of AI Dr Frankensteins? Not yet. For all the talk of sentient-like responses by LLMs, their avoidance of distressing topics for example, the best explanation of such behaviour is that they are simply mimicking their training sets, which are full of humans expressing discomfort about those same topics. However, if these companies are really as serious about developing a fully sentient AGI, about achieving the so-called singularity, as much of the buzz around them suggests, then the chief difference between them and Frankenstein is one of ability rather than ambition. If eventually they realise their goals and intelligences emerge, full of information but unnurtured and unloved, how will they behave? Is there any reason to think that they will be more Adam than Satan when we are their creators?

At the end of Shelley’s novel, an unreconstructed Frankenstein tells his tale to a polar explorer in a ship just coming free from the pack ice. The explorer is facing the choice of plunging onward in the pursuit of knowledge, glory, and, possibly, death, or heeding the call of human connections, his sister’s love, his crew’s desire to see their families. Frankenstein urges him on, appeals to all his ambitions, hoping to drown out the call of home. He fails. The ship turns homeward. Knowledge shorn of attachment, ambition that ignores obligation, these, Shelley tells us, are not worth pursuing. Will we listen to her warning? 
