
My conversation with... Molly Worthen

Belle Tindall is captivated by the intellectual fascination that drove Molly Worthen’s inquiry into faith.

Can you think your way into Christianity?  

Can your mind lead the way into something that transcends understanding?  

Is it possible to ‘fake it until you make it’ when it comes to belief in God? 

These are the questions that hold our conversation with Molly Worthen together. Molly, for those of you who aren’t yet acquainted with her work, is a journalist and associate professor of American history at the University of North Carolina at Chapel Hill. For the past decade, her intellectual sweet spot has been the religious and intellectual history of North America. Flowing from her fascinating research are books such as Apostles of Reason: The Crisis of Authority in American Evangelicalism, as well as pieces for the New York Times, The Atlantic and The New Yorker.


In this episode of Re-Enchanting, Molly very generously walks us through her own story: from a child who would cover her ears when being read Bible stories, to a young adult who could relish the oddity of religious experience from a distance, to a journalist investigating various Christian communities, to a baptised Christian attending a mega-church. It’s quite the journey, but I shall leave it to Molly to unpack the full story, seeing as she tells it with the vigour and detail of a historian.

I find Molly’s story captivating for many reasons, the primary one being that her intellectual fascination was her gateway into faith. She used homework, deadlines, schedules and challenges as tools with which she worked out and fine-tuned her beliefs. She says herself, ‘I needed the process to be rigorous’. How interesting is that?

Reflecting on the conversation that Justin and I had with Molly, I realise that there are three rather distinct, yet wholly common, misconceptions about faith that she shatters. I don’t think she intended to; I’m not even sure she was aware that she was doing it. But her fascinating crossing from agnostic to Christian has some interesting philosophical by-products.


Firstly, the focused methodology with which Molly approached theism in general, and Christianity in particular, simply dispels the notion that a belief in God must render logic and reason redundant. On the contrary, Molly took step after considered step into her new-found set of Christian beliefs. Her story is one of measured assurance, of ‘not being 99.9 per cent’, but being ‘far north of 51 per cent’.  

Secondly, Molly challenges the assumption that faith is sought out as a method of opting-out of the harshest parts of reality. That it’s held as some kind of cosmic ‘Get Out of Jail Free’ card – the ‘jail’ being whatever un-graspable, un-controllable, un-bearable aspect of reality sits most heavily upon us. There’s a common notion that religious people have found a coping mechanism, that they’ve institutionalised their denial and spiritualised their escapism. I’ve often found that notion an interesting one, mostly because I wish that it were true. But it doesn’t quite work that way. Believing in an all-seeing, all-knowing, all-loving God does not mean that one can avoid looking directly at suffering, pretend that it isn’t there, or that it somehow doesn’t ultimately matter. On the contrary, it often requires one to look at it, and wrestle with it, for longer. Nick Cave and Sean O’Hagan’s masterful Faith, Hope and Carnage is an ode to a belief system that resides in the midst of Nick Cave’s pain, as opposed to pulling him out of it. Molly, perhaps from all of her years of research, seemed to know this. She asserted that she didn’t want to ‘convert out of cowardice’ nor was she interested in succumbing to ‘a bribe’. Surely you are convinced by now that Molly Worthen is about as fascinating as it gets? 

And finally, it was interesting to hear Molly speak of the choices, both micro and macro, that have led her to where she now finds herself. After all, faith is a choice. It reminds me of the philosopher William James, who proposed that there are certain beliefs that can’t be evidenced until they are believed. For example, you cannot determine whether a chair will hold your weight until you sit on it believing (at least to a reasonable extent) that it can. This is partly (but profoundly) true of God; while one can ponder the empirical evidence for the existence of God for a lifetime, experiential evidence for God often only becomes available once you believe. This doesn’t mean that belief must be a wholly blind choice (that would only negate my first point), but it is a choice. Again, Molly wonderfully encapsulated the tension of this notion in recalling that,

“what was really preventing me from engaging with this evidence is my own commitment to materialism and my own deep epistemological groove. But if I’m willing to suspend that, what happens?... You can walk right up to it and get to the point where you’re still faced with a leap of faith, but it’s no longer a ten-mile leap into the dark, it’s a leap based on a pretty reasonable body of evidence. And it turns out that to reject that leap is itself an act of faith.”

This episode of Re-Enchanting is a personal, and therefore profoundly interesting, one. We speak to Molly not of how her field of work has been re-enchanted by the mystery and wonder of the Christian story, but of how she herself has been. And that makes this episode incredibly worth your time.


AI will never codify the unruly instructions that make us human

The many exceptions to the rules are what make us human.
Jean Valjean and the candlesticks, in Les Misérables.

On average, students with surnames beginning with the letters A-E get higher grades than those who come later in the alphabet. Good-looking people get more favourable divorce settlements through the courts, and higher payouts for damages. Tall people are more likely to get promoted than their shorter colleagues, and judges give out harsher sentences just before lunch. It is clear that human judgement is problematically biased – sometimes with significant consequences.

But imagine you were on the receiving end of such treatment, and wanted to appeal your overly harsh sentence, your unfair court settlement or your punitive essay grade: is Artificial Intelligence the answer? Is AI intelligent enough to review the evidence, consider the rules, ignore human vagaries, and issue an impartial, more sophisticated outcome?  

In many cases, the short answer is yes. Conveniently, AI can review 50 CVs, conduct 50 “chatbot” style interviews, and identify which candidates best fit the criteria for promotion. But is the short and convenient answer always what we want? In their recent publication, As If Human: Ethics and Artificial Intelligence, Nigel Shadbolt and Roger Hampson discuss research which shows that, if wrongly condemned to be shot by a military court but given one last appeal, most people would prefer to appeal in person to a human judge than have the facts of their case reviewed by an AI computer. Likewise, terminally ill patients indicate a preference for doctors’ opinions over computer calculations on when to withdraw life-sustaining treatment, even though a computer has a higher predictive power to judge when someone’s life might be coming to an end. This preference may seem counterintuitive, but the cold impartiality, and at times the impenetrability, of machine logic might work for promotions, yet it fails to satisfy the desire for human dignity when it comes to matters of life and death.

In addition, Shadbolt and Hampson make the point that AI is actually much less intelligent than many of us tend to think. An AI machine can be instructed to apply certain rules to decision making and can apply those rules even in quite complex situations, but the determination of those rules can only happen in one of two ways: either the rules must be invented or predetermined by whoever programmes the machine, or the rules must be observable to a “Large Language Model” AI when it scrapes the internet to observe common and typical aspects of human behaviour.  

The former option, deciding the rules in advance, is by no means straightforward. Humans abide by a complex web of intersecting ethical codes, often slipping seamlessly between utilitarianism (what achieves the greatest good for the greatest number of people?), virtue ethics (what makes me a good person?) and theological or deontological ideas (what does God or wider society expect me to do?). This complexity, as Shadbolt and Hampson observe, means that:

“Contemporary intellectual discourse has not even the beginnings of an agreed universal basis for notions of good and evil, or right and wrong.”  

The solution might be option two – to ask AI to do a data scrape of human behaviour and use its superior processing power to determine if there actually is some sort of universal basis to our ethical codes, perhaps one that humanity hasn’t noticed yet. For example, you might instruct a large language model AI to find 1,000,000 instances of a particular pro-social act, such as generous giving, and from that to determine a universal set of rules for what counts as generosity. This is an experiment that has not yet been done, probably because it is unlikely to yield satisfactory results. After all, what is real generosity? Isn’t the truly generous person one who makes a generous gesture even when it is not socially appropriate to do so? The rule of real generosity is that it breaks the rules.  

Generosity is not the only human virtue which defies being codified – mercy falls at exactly the same hurdle. AI can never learn to be merciful, because showing mercy involves breaking a rule without having a different rule or sufficient cause to tell it to do so. Stealing is wrong: this is a rule we almost all learn from childhood. But in the famous opening to Les Misérables, Jean Valjean, a destitute convict, steals some silverware from Bishop Myriel, who has provided him with hospitality. Valjean is soon caught by the police and faces a lifetime of imprisonment and forced labour for his crime. Yet the bishop shows him mercy, falsely informing the police that the silverware was a gift and even adding two further candlesticks to the swag. Stealing is, objectively, still wrong, but the rule is temporarily suspended, or superseded, by the bishop’s wholly unruly act of mercy.

Teaching his followers one day, Jesus stunned the crowd with a catalogue of unruly instructions. He said, “Give to everyone who asks of you,” and “Love your enemies” and “Do good to those who hate you.” The Gospel writers record that the crowd were amazed, astonished, even panicked! These were rules that challenged many assumptions about the “right” way to live – many of the social and religious “rules” of the day. And Jesus modelled this unruly way of life too – actively healing people on the designated day of rest, dining with social outcasts and having contact with those who had “unclean” illnesses such as leprosy. Overall, the message of Jesus was loud and clear: people matter more than rules.

AI will never understand this, because to an AI people don’t actually exist; only rules exist. Rules can be programmed in manually or extracted from a data scrape, and one rule can be superseded by another rule, but beyond that a rule can never just be illogically or irrationally broken by a machine. Put more simply, AI can show us in a simplistic way what fairness ought to look like, and it can protect a judge from being punitive just because they are a bit hungry. There are many positive applications of AI in overcoming humanity’s unconscious and illogical biases. But at the end of the day, only a human can look Jean Valjean in the eye and say, “Here, take these candlesticks too.”
