Essay
AI - Artificial Intelligence
Culture
Identity
8 min read

Roll on AI, you'll make us more human

I’m not necessarily stupidly optimistic about AI, but there’s a tentative case to be so.

Daniel is an advertising strategist turned vicar-in-training.

An AI-generated image of a man folding a paper plane while reclining in a lounger; around him, creative tools and screens showing status updates are visible.
Nick Jones/Midjourney.ai.

I still come across people who insist that there are simply things that AI (artificial intelligence) can’t and will never be able to do. Humans will always have an edge. They tend to be journalists or editors who will insist that ChatGPT’s got nothing on their persuasive intentionality and honed command of nuance, wit, and wordplay. Of course, machines can replace the humans at supermarket check-out tills, but not them. What they do is far too complex and requires such emotional precision and incisive insight into the audience psyche. Okay then. I nod, rolling my eyes into the back of my head.

At this point, it’s just naive to put a limit on what AI can do. It’s not even been two years since ChatGPT was released into the wild and started this whole furore. In just 18 months, OpenAI has gone on to launch GPT-4o, which can produce a whole persona that listens, looks, and talks back in a voice so natural and convincing that it may as well be a scene from the 2013 film Her. A future where Joaquin Phoenix falls in love with the sultry AI voice of Scarlett Johansson doesn’t seem too far off. We have been terrible at predicting the speed at which generative AI would develop. AI video generation was one of the clearest examples of that in the last year. In 2023, we were lording it over the AI models for generating that surreal, nightmarish scene of Will Smith eating spaghetti. “Silly AI! Aren’t you cute,” we said. We swallowed our words earlier this year, when OpenAI came out with Sora, their video generation model, which spat out photorealistic film trailers that would feel at home on the screens of Cannes.

There might be limits, but that ‘might’ gets smaller and smaller every single month, and we’re probably better off presuming that there is no ‘might’. We’ll be in for fewer surprises if we live from the presumption that there will be AIs that make better newspaper editors, diagnostic radiologists, children’s book writers, and art directors than most, if not all, humans.

With the mass reproduction and generation capabilities of AI, we may recognise that we crave the human touch not because it’s better but because it’s human

I promised you a “stupidly optimistic” take on this. So far, I’ve given you nothing but the bleak dystopian future where the labour market collapses and humans are dispossessed of all our technical, editorial, and creative skills. Where’s the good news?

Well, the stupidly optimistic take is this: the dispossession of all our human faculties by AI will force us to embrace the truest and most fundamental core of what makes us valuable - nothing other than simply our humanity. The value of humanity goes up if we presume that everything can be done better by AI.  

In 1936, the German art critic Walter Benjamin prophesied the apocalyptic collapse of the art market in his essay The Work of Art in the Age of Mechanical Reproduction. It was a time when photographic reproduction of paintings was becoming a mainstream technique and visitors to a gallery could buy a print of their favourite painting. He argued that the mass reproduction of paintings would devalue the original by stripping away the aura of the work - its unique presence in space and cultural heritage; the je ne sais quoi of art that draws us to a place of encounter with it. Benjamin would gawp at a digital age in which masterpieces are reduced to default iPhone background screens, but he would also be surprised by the exponentially greater value the art market has placed on the original piece. The aura of the original is sought after all the more, precisely because mechanical reproduction has become so cheap. Why? Because in a world of mass reproduction, we crave human authenticity and connection. With the mass reproduction and generation capabilities of AI, we may recognise that we crave the human touch not because it’s better but because it’s human. And for no other reason.

We continually place our identities in whatever talents we think make us uniquely worthwhile and value-creating for the world. 

What are we to make of the AI trials happening in the NHS, which spot cancer at rates significantly higher than any human doctor? The Royal College of Radiologists insists that “There is no question that real-life clinical radiologists are essential and irreplaceable”. But really? Apart from checking the AI’s work, what’s the “essential” and “irreplaceable” part? Well, it’s the human part. Somebody must deliver the bad news to the patient, and that sure as hell shouldn’t be an AI. Even if an AI could emulate the trembling voice and calming tone of the most empathic consultant, it is the human-to-human interpersonal exchange that creates the space for grief, sorrow, and shock.

Think utopian with me for a moment. (I know, very counter-intuitive for us.) In a society where all our technical skills are superseded, the most valuable skills a human could possess might be the interpersonal ones. Empathy, compassion, intentionality, love even! The midwife who can hold the hand of a suffering first-time mother could be a more respected member of society than the editor of an edgy magazine or newspaper. As they should be! That’s a tantalising and stupidly optimistic vision of an AI future, but it’s a vision that aligns with what we know to be true about ourselves. In our personal and spiritual lives, we already recognise that the most valuable aspects of our lives are our human relationships and the state of our inner selves. People on their death beds reflect on what kind of person they’ve been and reach out for the hands of their loved ones - not for their Q4 2011 balance sheet. Our identities are shaped most deeply by our relationships and our character, and yet we continually place our identities in whatever talents we think make us uniquely worthwhile and value-creating for the world. It’s good to create value, it’s nice to be good at something, and it’s meaningful to leave a lasting impact, but it is delusional to think that those things make us valuable. Our dispossession by AI might be the dispelling of these delusions!

In a few decades, there may be nothing that humans can do better than AI, other than simply being human in the world

At least on a philosophical and spiritual level, being stripped of our human exceptionalism might be the most liberating experience for a society that has devalued and instrumentalised humanity into glorified calculators. Being dispossessed is the truest thing about all of us. We are being dispossessed daily by the slow march of time, and the truest thing about us is that we will, one day, be wholly dispossessed by death itself. That was Heidegger’s fundamental insight into the human condition, and this feeling of dispossession is the root of our anxiety and fear in the world. It might also be part of the anxiety and ick we feel towards AI. Being dispossessed of our creativity and technical ability is a kind of violence and death against ourselves, and we rage against it. We can rage against it politically, socially, and economically, but there might be something helpful, psychologically and spiritually, about resisting that rage. Experiencing this dispossession might be the key to unlocking an authentic human existence in a world that we can’t control.

I believe in human creativity. I believe that what we make is valuable. I believe in the mesmerising aura of art, cinema, music, and every other beautiful thing that we get up to in the world. I believe in the unique connection between artist and audience and the power of blood, sweat, and tears. I believe in the beautiful and torturous self-violence of creativity to make something that will make my heart tremble and transport me to places never imagined. I believe in the intuitions of an editor to make the cut at precisely the right moment that suspends the tension and has me gripping the seat. I believe in the bedroom teenagers recording their first demos on GarageBand, or the gospel choir taking their congregations to heaven and back. Now, more than ever, I believe in these miracles.

But my belief is not anchored in any unique technical excellence, or some hubris about our exceptionalist mastery of craft. It is rooted in the profound humanity of it all, which radiates, however dimly, with the image of the divine. Writing poetry, humming a new melody, baking a cake, or even discovering a new mathematical conjecture can feel like “divine inspiration”, as the leading mathematician Thomas Fink asserts. Or as the Romantic German theologian Schleiermacher so rhapsodically expressed, it can feel like the soul being “ignited from an ethereal fire, and the magic thunder of a charmed speech” from above. This transcendent human experience is something that AI can’t usurp or supersede.

In a few decades, there may be nothing that humans can do better than AI, other than simply being human in the world. But once we are stripped of everything, we won’t find ourselves naked in the dark - or at least, we don’t have to. We can stand before the world and God with the works of our hands - finite, flawed, and dispossessed - and yet inestimably valuable and worthwhile for the simple fact of our mere humanity.

 

*This article was something of a thought experiment. It’s far more natural to take a sandwich-board, bullhorn-wielding apocalyptic take on the rise of AI. The powers-that-be at Microsoft and OpenAI have their own ideological agendas, and it’s not unlikely that in this technological cycle we’ll live through a profoundly destabilising labour market. We are right to fear the consolidation of wealth in the hands of supreme tech feudal lords, with their companies of AI employees who cost a fraction of what real humans do. Civilisational collapse! What I wanted to suggest here is that there might be a unique spiritual and philosophical opportunity afforded to us as we continue to experience the break-neck development of AI and its encroachment into everything we once held as uniquely human skills.*

 


Article
Belief
Creed
Identity
Truth and Trust
1 min read

Calls to revive the Enlightenment ignore its own illusions

Returning to the Age of Reason won’t save us from post-Truth

Alister McGrath retired as Andreas Idreos Professor of Science and Religion at Oxford University in 2022.

In the style of a Raeburn portrait, a set of young people lounge around on their phones looking diffident
Enlightened disagreement (with apologies to Henry Raeburn).
Nick Jones/Midjourney.ai.

Is truth dead? Are we living in a post-truth era where forcefully asserted opinions overshadow evidence-based public truths that once commanded widespread respect and agreement? Many people are deeply concerned about the rise of irrational beliefs, particularly those connected to identity politics, which have gained considerable influence in recent years. It seems we now inhabit a culture where emotional truths take precedence, while factual truths are relegated to a secondary status. Challenging someone’s beliefs is often portrayed as abusive, or even as a hate crime. Is it any surprise that irrationality and fantasy thrive when open debate and discussion are so easily shut down? So, what has gone wrong—and what can we do to address it? 

We live in an era marked by cultural confusion and uncertainty, where a multitude of worldviews, opinions, and prejudices vie for our attention and loyalty. Many people feel overwhelmed and unsettled by this turmoil, often seeking comfort in earlier modes of thinking—such as the clear-cut universal certainties of the eighteenth-century “Age of Reason.” In a recent op-ed in The Times, James Marriott advocates for a return to this kind of rational thought. I share his frustration with the chaos in our culture and the widespread hesitation to challenge powerful irrationalities and absurdities out of fear of being canceled or marginalized. However, I am not convinced that his proposed solution is the right one. We cannot simply revert to the eighteenth century. Allow me to explain my concerns. 

What were once considered simple, universal certainties are now viewed by scholars as contested, ethnocentric opinions. These ideas gained prominence not because of their intellectual merit, but due to the economic, political, and cultural power of dominant cultures. “Rationality” does not refer to a single, universal, and correct way of thinking that exists independently of our cultural and historical context. Instead, global culture has always been a bricolage of multiple rationalities. 

The great voyages of navigation of the early seventeenth century made it clear that African and Asian understandings of morality and rationality differed greatly from those in England. These accounts should have challenged the emerging English philosophical belief in a universal human rationality. However, rather than recognizing a diverse spectrum of human rationalities—each shaped by its own unique cultural evolution—Western observers dismissed these perspectives as “primitive” or “savage” modes of reasoning that needed to be replaced by modern Western thought. This led to forms of intellectual colonialism, founded on the questionable assumption that imposing English rational philosophies was a civilizing mission intended to improve the world. 

Although Western intellectual colonialism was often driven by benign intentions, its consequences were destructive. The increasing influence of Charles Darwin’s theory of biological and cultural evolution in the late nineteenth century led Darwin’s colleague, Alfred Russel Wallace, to conclude that intellectually and morally superior Westerners would “displace the lower and more degraded races,” such as “the Tasmanian, Australian and New Zealander”—a process he believed would ultimately benefit humanity as a whole. 

We can now acknowledge the darker aspects of the British “Age of Reason”: it presumed to possess a definitive set of universal rational principles, which it then imposed on so-called “primitive” societies, such as its colonies in the south Pacific. This reflected an ethnocentric illusion that treated distinctly Western beliefs as if they were universal truths. 

A second challenge to the idea of returning to the rational simplicities of the “Age of Reason” is that its thinkers struggled to agree on what it meant to be “rational.” This insight is often attributed to the philosopher Alasdair MacIntyre, who argued that the Enlightenment’s legacy was the establishment of an ideal of rational justification that ultimately proved unattainable. As a result, philosophy relies on commitments whose truth cannot be definitively proven and must instead be defended on the basis of assumptions that carry weight for some, but not for all. 

We have clearly moved beyond the so-called rational certainties of the “Age of Reason,” entering a landscape characterized by multiple rationalities, each reasonable in its own unique way. This shift has led to a significant reevaluation of the rationality of belief in God. Recently, Australian atheist philosopher Graham Oppy has argued that atheism, agnosticism, and theism should all be regarded as “rationally permissible” based on the evidence and the rational arguments supporting each position. Although Oppy personally favours atheism, he does not expect all “sufficiently thoughtful, intelligent, and well-informed people” to share his view. He acknowledges that the evidence available is insufficient to compel a definitive conclusion on these issues. All three can claim to be reasonable beliefs. 

The British philosopher Bertrand Russell contended that we must learn to accept a certain level of uncertainty regarding the beliefs that really matter to us, such as the meaning of life. Russell’s perspective on philosophy provides a valuable counterbalance to the excesses of Enlightenment rationalism: “To teach how to live without certainty, and yet without being paralyzed by hesitation, is perhaps the chief thing that philosophy, in our age, can still do for those who study it.” 

Certainly, we must test everything and hold fast to what is good, as St Paul advised. It seems to me that it is essential to restore the role of evidence-based critical reasoning in Western culture. However, simply returning to the Enlightenment is not a practical solution. A more effective approach might be to gently challenge the notion, widespread in some parts of our society, that disagreement equates to hatred. We clearly need to develop ways of modelling respectful and constructive disagreement, in which ideas can be debated and examined without diminishing the value and integrity of those who hold them. This is no easy task - yet we need to find a way of doing it if we are to avoid fragmenting into cultural tribes and losing any sense of a “public good.”
