Review
Addiction
Culture
Film & TV
6 min read

Who’s by your side?

It’s tough to watch A Good Person. Its laser focus and tenderness prompt Lauren Windle to recall her experience of addiction and recovery.

Lauren Windle is an author, journalist, presenter and public speaker.

An old man accompanies a young woman into a wood-panelled hall; both look apprehensive.
Morgan Freeman and Florence Pugh in A Good Person
Metro-Goldwyn-Mayer.

I don’t watch films about addiction. When I first got clean and sober almost nine years ago, I soaked in any piece of content I could find on drugs, drug use and recovery. At the time it was just YouTube clips of Russell Brand and the occasional memoir of a starlet who turned to cocaine before discovering yoga. After going to a 10:30am showing of the Amy Winehouse documentary Amy and bawling through the entire film, I decided to call it quits. I don’t need to see horrific stories of desperation – I’ve lived one. I am not a casual observer of addiction narratives; I’ve got skin in the game.

In 2018 I went to see A Star Is Born thinking I was watching a rags-to-riches tale of an unlikely popstar. I quickly realised we weren’t there to witness the female protagonist’s ascent, so much as the male protagonist’s descent. I got back in my car and had to wait a quarter of an hour for the fit of hysterical tears to pass before I drove home. I had the same realisation watching A Good Person.

Going in, I knew that I had signed up to a film with Morgan Freeman and Florence Pugh. I knew that Pugh’s character Allison “had it all” before a “dramatic accident changed everything”. The ground here sounded so well-trodden that I thought I might need my wellies to navigate it. I knew that there was some element of addiction, but I envisaged a reasonably light-touch depiction of a few too many nights on the sauce.

I knew I was wrong when, about half an hour in, Allison lay on the cold bathroom floor to soothe her withdrawal from prescription opioids. She was sweating, shaking and breathless, and from then on it all felt distressingly familiar. The trajectory of her decline was too quick, too obvious, too accurate. As Allison bargained, manipulated and begged for drugs, I saw myself. As Allison looked directly into the mirror and said: ‘I hate you’ to her own glazed reflection, I saw myself. As Allison was dragged out of a stranger’s house party unable to stand up straight, I saw myself.

The hopelessness, the false starts, the empty promises and the rare moments of lucidity rang so true that I would find it hard to believe writer Zach Braff hadn’t experienced his own similar hardship. Either that, or the recovering addicts they hired to consult on the project deserve a bonus of investment-banker proportions.

When Allison eventually reached out for help and asked a woman to sponsor her, the loving directness that came back was reminiscent of what I was given by my first sponsor. It was virtually word for word what I remember being told when I, nine days sober, made the same terrifying request. The experienced mentor told her: “Some beat it, some die.” And she’s right.

Any of my friends who went to an in-patient treatment centre were told to look around, because in five years a decent number of their cohort would be dead. And they were always right. Some people give up and let the tide of addiction pull them under. They feel exactly as Allison did when she told Daniel (played by Morgan Freeman): “I’m not sure I have the will.” And when she confessed in a Narcotics Anonymous meeting: “Without [the pills] I want to die.”

In the 2015 film Amy, the one that convinced me to stick to rom-coms, there’s a scene that stuck with me. Amy had been invited to perform at the Grammys but was denied a visa because of her well-documented drug use. It was arranged for her to perform live in London, with the performance broadcast on big screens at the event. When the date came around she was in a stint of sobriety. She performed beautifully and won five Grammys. One of her friends burst into her dressing room to celebrate the momentous achievement, but all Amy said was that it wasn’t as good without the drugs.

 

You learn to love the cage you built around yourself and stop dreaming of more, because you are blind to anything beyond the walls you’ve created.

Getting into addiction means silencing that feeling in your spirit that says that something isn’t right and you should go home. It’s consistently pushing through when you get a pit-of-your-stomach urge to cut and run. Because you want the drugs, you know you’ll have to take the chaos they’re packaged in. At some point you stop remembering that you ever felt uncomfortable, and you start to think you enjoy where you are, what you’re doing and the people you’re doing it with. You get Stockholm syndrome and life before your captor is a distant memory. You learn to love the cage you built around yourself and stop dreaming of more, because you are blind to anything beyond the walls you’ve created. You’re not happy, but what other options do you have? You could trade the misery of addiction for the misery of abstinence, but either way you’ll be miserable so you might as well do it with the drugs.

Except, that’s not true. When we’re living our lives right, we’re living them in complete freedom. Slaves to no substance or behaviour with the freedom to say yes to what we want and, crucially, the freedom to say no. It’s the present Jesus gave us in the resurrection but so many of us, myself included, hand it back like it came with a gift receipt. 

I wish I’d known the dreams that would be realised, the friendships forged and the profound moments I would experience on the other side of those first, excruciating months of sobriety.

What I wish I could have told Amy at the Grammys, Allison in that NA meeting and myself when I first said the words: “I think I’m addicted”, is that there’s so much more than what you can currently see. I wish I’d known the dreams that would be realised, the friendships forged and the profound moments I would experience on the other side of those first, excruciating months of sobriety. I would have wanted to know that in time my grip would loosen, my knuckles would go from white back to their fleshy hue and I would be able to breathe again. It wouldn’t feel like a compromise or half a life or as though something was missing, but I would feel more fulfilled and alive than any drug would ever allow.

A Good Person demonstrates the chronic and repetitive condition of addiction with a laser-sharp accuracy that, for someone with lived experience, could burn. But it’s also a tender reminder of the power of unlikely friendships forged from a mutual understanding of adversity. It made me think of the woman who scooped me up as I backed away from my first ever support group meeting and said: “You can sit next to me.” It made me grateful for the woman who mouthed “it’s going to be OK” at me across the table as I sat there listening with tears rolling down my face. It reminded me of the awe I felt the first time I heard someone speak about the insomnia, shame and self-hatred of drug addiction, and I realised I wasn’t the only one. The film showed the transformative effect of consistent community in a way that I hope encourages people to turn up to one of those meetings like Allison and I did. I pray that it is the turning point in many people’s lives.

Should you go and watch it? Absolutely. Just don’t ask me to go with you. 

Article
AI
Culture
5 min read

What AI needs to learn about dying and why it will save it

Those programming truthfulness can learn a lot from mortality.

Andrew Steane has been Professor of Physics at the University of Oxford since 2002. He is the author of Faithful to Science: The Role of Science in Religion.

An angel of death lays a hand on a humanoid robot that has died amid a data centre.
A digital memento mori.
Nick Jones/midjourney.ai

Google got itself into some unusual hot water recently when its Gemini generative AI software started putting out images that were not just implausible but downright unethical. The CEO Sundar Pichai has taken the situation in hand and I am sure it will improve. But before this episode it was already clear that currently available chat-bots, while impressive, are capable of generating misleading or fantastical responses, and in fact they do this a lot. How to manage this?

Let’s use the initials ‘AI’ for artificial intelligence, leaving it open whether or not the term is entirely appropriate for the transformer and large language model (LLM) methods currently available. The problem is that the LLM approach causes chat-bots to generate both reasonable, well-supported statements and images, and also unsupported and fantastical (delusory and factually incorrect) ones, without giving the human user any signal as to which is which. The LLMs, as developed to date, have not been programmed in such a way as to pay attention to this issue. They are subject to the age-old problem of computer programming: garbage in, garbage out.

If, as a society, we advocate for greater attention to truthfulness in the outputs of AI, then software companies and programmers will try to bring it about. It might involve, for example, greater investment in electronic authentication methods. An image or document will have to have, embedded in its digital code, extra information serving to authenticate it by some agreed and hard-to-forge method. The 2002 science fiction film Minority Report included an example of this: the name of a person accused of a ‘pre-crime’ (in the terminology of the film) is inscribed on a wooden ball, so as to use the unique cellular structure of a given piece of hardwood as a data substrate that is near-impossible to duplicate.
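To make the principle concrete, here is a minimal sketch of my own (not anything drawn from the film or from any particular company): attach to a piece of content a small tag derived from it by a method that is easy to verify but hard to forge, so that tampering can be detected. The key and function names below are purely hypothetical, and real provenance schemes would use public-key signatures rather than a shared secret.

```python
# A minimal sketch of content authentication using a shared secret (HMAC).
# Real systems would use public-key signatures, but the principle is the same:
# extra embedded data that is easy to check and hard to forge.
import hmac
import hashlib

SECRET_KEY = b"example-key-held-by-the-publisher"  # hypothetical key

def sign(document: bytes) -> str:
    """Return an authentication tag derived from the document's content."""
    return hmac.new(SECRET_KEY, document, hashlib.sha256).hexdigest()

def verify(document: bytes, tag: str) -> bool:
    """Check that the tag still matches the document, i.e. it has not been altered."""
    return hmac.compare_digest(sign(document), tag)

original = b"An image or report as it left its trusted source."
tag = sign(original)

print(verify(original, tag))                # True: content is intact
print(verify(b"A doctored version.", tag))  # False: tampering detected
```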

The questions we face with AI thus come close to some of those we face when dealing with one another as humans. 

It is clear that a major issue in the future use of AI by humans will be the issue of trust and reasonable belief. On what basis will we be able to trust what AI asserts? If we are unable to check the reasoning process in a result claimed to be rational, how will we be able to tell that it was in fact well-reasoned? If we only have an AI-generated output as evidence of something having happened in the past, how will we know whether it is factually correct?

Among the strategies that suggest themselves is the use of several independent AIs. If they are indeed independent and all propose the same answer to some matter of reasoning or of fact, then there is a prima facie case for increasing our degree of trust in the output. This will give rise to the meta-question: how can we tell that a given set of AIs are in fact independent? Perhaps they all were trained on a common faulty data set. Or perhaps they were able to communicate with each other and thus influence each other.  
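By way of illustration only, the strategy amounts to something like the sketch below. The ‘models’ here are placeholder functions standing in for genuinely independent systems, not real services, and agreement between them is treated as evidence rather than proof.

```python
# A rough sketch of the "several independent AIs" strategy: ask each model the
# same question and treat the level of agreement as a crude trust signal.
from collections import Counter

# Placeholder models; in practice these would be calls to separate, independent systems.
def model_a(question: str) -> str:
    return "Paris"

def model_b(question: str) -> str:
    return "Paris"

def model_c(question: str) -> str:
    return "Lyon"

def consensus(question, models):
    """Return the most common answer and the fraction of models that gave it."""
    answers = [ask(question) for ask in models]
    best, count = Counter(answers).most_common(1)[0]
    return best, round(count / len(answers), 2)

answer, agreement = consensus("What is the capital of France?",
                              [model_a, model_b, model_c])
print(answer, agreement)  # "Paris" 0.67 -- agreement is evidence, not proof
```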

The questions we face with AI thus come close to some of those we face when dealing with one another as humans. We know humans in general are capable of both ignorance and deliberate deception. We manage this by building up degrees of trust based on whether or not people show behaviours that suggest they are trustworthy. This also involves the ability to recognize unique individuals over time, so that a case for trustworthiness can be built up over a sequence of observations. We also need to get a sense of one another's character in more general ways, so that we can tell if someone is showing a change in behaviour that might signal a change in their degree of trustworthiness. 

In order to earn our trust, an AI too will have to be able to suffer and, perhaps, to die. 

Issues of trust and of reasonable belief are very much grist to the mill of theology. The existing theological literature may have much that can be drawn upon to help us in this area. An item which strikes me as particularly noteworthy is the connection between suffering, loss and the earning of trust, and their relation to mortality. In brief, a person you can trust is one who has ventured something of themselves on their pronouncements, such that they have something to lose if they prove to be untrustworthy. In a similar vein, a message which is costly to the messenger may be more valuable than one which costs the messenger nothing. The messenger has already staked something on it. This implies they are working all the harder to exert their influence on you, for good or ill. (You will need to know them in other ways in order to determine which of good or ill is their intention.)

Mortality brings this issue of cost to a point of considerable sharpness. A person willing to die on behalf of what they claim certainly invests a lot in their contribution. They earn attention. It is not a guarantee of rationality or factual correctness, but it is a demonstration of commitment to a message. It signals a sense of importance attached to whatever has demanded this ultimate cost. Death becomes a form of bearing witness.  

A thought-provoking implication of the above is that in order to earn our trust, an AI too will have to be able to suffer and, perhaps, to die. 

In the case of human life, even if making a specific claim does not itself lead directly to one’s own death, the very fact that we die lends added weight to all the choices we make and all the actions we take. For, together, they are our message and our contribution to the world, and they cannot be endlessly taken back and replaced. Death will curtail our opportunity to add anything else or qualify what we said before. The things we said and did show what we cared about, whether we intended them to or not. This effect of death on the weightiness of our messages to one another might be called the weight of mortality.

In order for this kind of weight to become attached to the claims an AI may make, the coming death has to be clearly seen and understood beforehand by the AI, and the timescale must not be so long that the AI’s death is merely some nebulous idea in the far future. Also, although there may be some hope of new life beyond death, it must not be a sure thing, or it must be such that it would be compromised if the AI were to knowingly lie, or fail to make an effort to be truthful. Only thus can the pronouncements of an AI earn the weight of mortality.

For as long as AI is not imbued with mortality and the ability to understand the implications of its own death, it will remain a useful tool as opposed to a valued partner. The AI you can trust is the AI reconciled to its own mortality.