Essay

Here’s why AI needs a theology of tech

As AI takes on tasks once exclusively human, we start to doubt ourselves. We need to set the balance right.

Oliver Dürr is a theologian who explores the impact of technology on humanity and the contours of a hopeful vision for the future. He is an author, speaker, podcaster and features in several documentary films.

In the style of an icon of the Council of Nicaea, theologians look on as a cyborg and humanoid AI shake hands
The Council of Nicaea, reimagined.
Nick Jones/Midjourney.ai

AI is all the rage these days. Research branches into the natural and engineering sciences are thriving, and novel applications enter the market every week. Pop culture explores various utopian and dystopian visions of the future. A flood of academic papers, journalistic commentary and essays fills out the picture.  

Algorithms are at the basis of most activities in the digital world. AI-based systems work at the interface with the analogue world, controlling self-driving cars and robots. They are transforming medical practices - predicting, preventing, diagnosing and supporting therapy. They even support decision-making in social welfare and jurisprudence. In the business sector, they are used to recruit, sell, produce and ship. Much of our infrastructure today crucially depends on algorithms. But while they foster science, research, and innovation, they also enable abuse, targeted surveillance, regulation of access to information, and even active forms of behavioural manipulation. 


In all these areas, AI takes on tasks and functions that were once exclusive to humans. For many, the comparison and competition between humans and (algorithmically driven) machines are obvious. As these lines are written, various applications characterized by their ‘generative’ nature (generative AI) are flooding the market. These algorithms, such as OpenAI’s GPT series, go further than anyone expected. Just a few years ago, it was hard to foresee that mindless computational programs could autonomously generate texts that appear meaningful, helpful, and in many ways even ‘human’ to a human conversation partner. Whether these innovations will have positive or negative consequences is still difficult to assess at this point.  

For decades, research has aimed to digitally model human capabilities - our perception, thinking, judging and action - and allow these models to operate autonomously, independent of us. The most successful applications are based on so-called deep learning, a variant of AI that works with neural networks loosely inspired by the functioning of the brain. Technically, these are multilayered networks of simple computational units that collectively encode a potentially highly complex mathematical function.  

You don’t need to understand the details to realize that, fundamentally, these are simple calculations but cleverly interconnected. Thus, deep learning algorithms can identify complex patterns in massive datasets and make predictions. Despite the apparent complexity, no magic is involved here; it is simply applied mathematics. 
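The point can be made concrete with a toy sketch. The following is purely illustrative (the two-layer shape and the weights are my own assumptions, not taken from any real system): a ‘neural network’ reduced to nothing but multiplications, additions and a squashing function.

```python
import math

def sigmoid(x):
    # A simple "squashing" nonlinearity applied to each unit's sum
    return 1.0 / (1.0 + math.exp(-x))

def tiny_network(inputs, hidden_weights, output_weights):
    """A minimal two-layer network: each unit is just a weighted
    sum of its inputs passed through a squashing function."""
    # Hidden layer: one weighted sum per hidden unit
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in hidden_weights]
    # Output: a final weighted sum over the hidden units
    return sum(w * h for w, h in zip(output_weights, hidden))

# Hypothetical fixed weights, chosen only for illustration;
# in real deep learning these would be tuned on massive datasets.
w_hidden = [[0.5, -0.2], [0.8, 0.1]]
w_output = [0.3, -0.7]
print(tiny_network([1.0, 2.0], w_hidden, w_output))
```

Real deep learning systems stack many such layers with millions or billions of weights, but the arithmetic at each step is no more mysterious than this: applied mathematics, cleverly interconnected.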

Moreover, this architecture requires no ‘mental' qualities except on the part of those who design these programs and those who interpret their outputs. Nevertheless, the achievements of generative AI are astonishing. What makes them intriguing is the fact that their outputs can appear clever and creative – at least if you buy into the rhetoric. Through statistical exploration, processing, and recombination of vast amounts of training data, these systems generate entirely new texts, images and film that humans can interpret meaningfully.  

The remarkable and seemingly intellectual achievements of AI applications confront us uniquely with our self-understanding as humans: Is there still something that categorically distinguishes us from the machines we build? This question arises in the moral vacuum of current anthropology. 


The rise of AI comes at a time when we are doubting ourselves. We question our place in the universe, our evolutionary genesis, our psychological depths, and the concrete harm we cause to other humans, animals, and nature as a whole. At the same time, the boundaries between humans and animals and those between humans and machines appear increasingly fuzzy.  

Is the human mind nothing more than a set of information-processing patterns, comparable to similar processes in other living beings and in machine algorithms? Enthusiastic contemporaries believe our current AI systems are already worthy of being called ‘conscious’ or even ‘personal beings.’ Traditionally, such qualities would have been attributed exclusively to humans (and in some cases also to higher animals). Our social, political, and legal order, as well as our ethics, are fundamentally based on such distinctions.  

Nevertheless, companies such as OpenAI see in their product GPT-4 the spark of ‘artificial general intelligence,’ a form of intelligence comparable to or even surpassing humans. Of course, such statements are part of an elaborate marketing strategy. This tradition dates to John McCarthy, who coined the term “AI” and deliberately chose it over other, more appropriate descriptions such as “complex information processing,” primarily because it sounded more fundable. 

Such pragmatic reasons ultimately lead to an imprecise use of ambiguous terms, such as ‘intelligence.’ If both humans and machines are indiscriminately called ‘intelligent,’ this generates confusion. Whether algorithms can sensibly be called ‘intelligent’ depends on whether the term refers to the ability to perform simple calculations and process data, to the more abstract ability to solve problems, or even to the insightful understanding (in the sense of the Latin intellectus) that we typically attribute only to the embodied reason of humans.  

However, this nuanced view of ‘intelligence’ was given up under the auspices of the quest for an objectively scientific understanding of the subject. New approaches deliberately exclude the question of what intelligence is and limit themselves to precisely describing how these processes operate and function.  

Current deep learning algorithms have become so intricate and complex that we can’t always understand how they arrive at their results. These algorithms are transparent in their code but not in how they reach a specific conclusion; hence, they are also referred to as black-box algorithms. Some strands in the cognitive sciences understand the human mind as a kind of software running on the hardware of the body. If that were the case, the mind could be explained through the description of brain states, just like the software on our computers.  

However, these paradigms are questionable. They cannot explain what it feels like to be a conscious person, to desire some things and be repelled by others, and to understand when something is meaningful and significant. They have no grasp of human freedom and the weight of responsibility that comes with leading a life. All of these human capacities require, among other things, an understanding of the world that cannot be fully captured in words and that cannot be framed as a mathematical function.  

Academic studies of embodied, embedded, enactive, and extended cognition offer a more promising direction. Such approaches explore the role of the body and the environment in intelligence and cognitive performance, incorporating insights from philosophy, psychology, biology, and robotics. They ask what part our body, as a living organism, plays in our capacity to experience, think and live with others. AI has no need for such a living body. This is a categorical difference between human cognition and AI applications – and it is currently not foreseeable that it could be levelled (at least not with current AI architectures). Therefore, in the strictest sense, our algorithms can be called ‘intelligent’ only metaphorically: these applications do not ‘understand’ the texts they generate, and those texts do not mean anything to them. Their results are not based on genuine insight or on purposes in the world in which you and I live. Rather, they are generated purely on the basis of statistical probabilities and data-based predictions. At most, they operate with the human intelligence buried in the underlying training data (which human beings have generated).  

However, all of this generated material has meaning and validity only for embodied humans. Strictly speaking, only embodied, living and vulnerable humans really have problems that they solve or goals they want to achieve (with, for example, the help of data-based algorithms). Computers do not have problems, only unproblematic states they are in. Therefore, algorithms appear 'intelligent' only in contexts where we solve problems through them. 


AI does not possess intrinsic intelligence; it only simulates intelligence through human causation. It would therefore be more appropriate to speak of ‘extended intelligence’: algorithms are not intelligent in themselves, but within the framework of human–machine systems they represent an extension of human intelligence. Better still would be to go back behind McCarthy and speak of ‘complex information processing.’ 

Certainly, such a view is still controversial today. There are many philosophical, economic, and socio-political incentives to attribute human qualities to algorithms and, at the same time, to view humans as nothing more than biological computers. Such a view already shapes the design of our digital future in many places. Putting it bluntly, calling technology ‘intelligent’ makes money. 

What would an alternative, more holistic vision of the future look like, one that takes the makeup of humanity seriously?  

A theology of technology (Techniktheologie) tackles this question, ultimately placing it in the horizon of belief in God. However, it begins by asking how technology can be integrated into our lives in such a way that it empowers us to do what we truly want and what makes life better. Such an approach is neither for nor against technology but rather sober and critical in the analytical sense. Answering those questions requires a realistic understanding of humans, technology, and their various entanglements, as well as the agreement of plural societies on the goals and values that make for a good life.  

When we do something with technology, technology always also does something to us. Technology is formative, meaning it changes our experience, perception, imagination, and thus also our self-image and the future we can envision. AI is one of the best examples of this: designing AI is designing how people can interact with a system, and that means designing how they will have to adapt to it. Humans and technology cannot be truly isolated from each other. Technology is simply part of the human way of life.  

And yet, we also need to distinguish humans from technology despite all the entanglements: humans are embodied, rational, free, and endowed with incomparable dignity as images of God, capable of sharing values and articulating goals on the basis of a common (human) way of life. Even the most sophisticated deep learning applications are none of these. Only we humans live in a world where responsibility, sin, brokenness, and redemption matter. Therefore it is up to us to agree on how we want to shape the technologized future and what values should guide us on this path.  

Here is what theology can offer the development of technology. Theology addresses the question of the possible integration of technology into the horizon of a good life. Any realistic answer to this question must combine an enlightened understanding of technology with a sober view of humanity – seeing both human creative potential and their sinfulness and brokenness. Only through and with humans will our AI innovations genuinely serve the common good and, thus, a better future for all.  

 

Find out more about this topic: Assessing deep learning: a work program for the humanities in the age of artificial intelligence 

Review

When I watched Life of Brian with my teenage kids…

The universe is still not making sense.

James is a writer of sitcoms for TV and radio. 

A movie still shows a Roman amphitheatre, covered in body parts, over which a sign reads 'children's matinee'.
Saturday morning at the amphitheatre.
Hand Made Films.

Over the Christmas holidays, I decided it was time to watch Monty Python’s Life of Brian with my teenagers. This was not just because I found it in a charity shop on DVD for a pound, although that may have had something to do with it. And so what if I did wrap it up and put it under the Christmas tree along with Monty Python and the Holy Grail? 

Let’s focus on the real question here: what was it like watching this much-loved but controversial movie from 1979 in early 2025? And what would my church-going, Bible reading, Gen Z teenagers make of it? 

This movie was not entirely new to them. I’d already shown them one of the finest sketches you will ever see, in which Brian has to learn to haggle for a beard whilst on the run. I’d also shown them the ‘Romani ite domum’ sketch, as I was teaching them Latin as part of their home education. I told them to expect more brilliant sketches like this, but warned that the movie is essentially “a bag of bits” and that the ending is a disaster. More on that later. 

Here are some of their reactions: 

“Wow! This is soooooo Horrible Histories.” 

It was. And it was even more resonant when we watched Monty Python and the Holy Grail. This is not a criticism. After all, who doesn’t love Horrible Histories? Especially the first cast, who went off and made a brilliantly funny movie you probably haven’t seen about William Shakespeare, called Bill. I think we’ve seen it as a family at least eight times. But they could see the legacy of Monty Python fifty years on. 

“What’s with that bit with the space craft?” 

I don’t know. Maybe they had to find something for Terry Gilliam to do. 

“Why are you fast forwarding that bit?” 

The movie contains unnecessary and tawdry nudity. As a parent, I reserve the right to censor the movies my children watch. 

“Is that it?” 

The movie is admirably brief at 93 minutes. My kids were just startled by the fact that the movie ended, without an ending. I’d prepared them for this. After all, Bill has a proper beginning, middle and end. (Seriously. It’s great. Watch it.) My kids have watched a lot of Pixar movies, which are normally honed to plot perfection (with the exception of Soul, which is a plot hot mess, and, as a jazz fan, I really wanted to love that movie). 

The ending of Life of Brian is poor, by any measure. It’s not just the fact that the crucifixion scene makes light of something savagely sad and sacred. It’s more that the movie ends with Brian abandoned to his fate on a cross while Eric Idle sings the cheerfully stoic Always Look on the Bright Side of Life while they all bake under the hot sun. And that’s it. The movie is over. 

It’s slightly better than the non-ending of Monty Python and the Holy Grail, which comes clattering to a halt after the allotted time. I read somewhere that there simply wasn’t any money to do anything else. Clearly, Life of Brian, a few years later, had a bigger budget so there was at least an attempt at an ending. But a song, even a good song, doth not an ending make. 

The song’s chirpiness belies its brilliance. With some neat rhymes and a simple, singable hook, the song achieves exactly what it sets out to achieve: stoic reassurance and an encouragement to put a brave face on things. It’s a funny contrast given they’re all being crucified, albeit in a comical pain-free way without nails and blood. 

We shouldn’t be surprised that this is the message of relatively young men who’d had a good education, been lauded as great comedians, made a lot of money, and, in 1979, still had their whole lives ahead of them (although Graham Chapman died ten years later, aged 48). The fact that the Pythons have nothing to say about life, death, suffering, pain, betrayal, the universe or anything isn’t their fault. Nor should we look to such sketch comedians for profound insights about the human condition. 

How I felt 

Here's how I felt as I watched Brian grasp the absurd injustice of his fate on a cross in the closing scene: I sensed the spirit of Douglas Adams, writer of The Hitchhiker’s Guide to the Galaxy. The first series was broadcast on BBC Radio in 1978, the year Life of Brian was being filmed in Tunisia. Adams writes about a universe that feels like it should make sense. But it doesn’t. It feels like there should be justice. But there isn’t. Which is funny. But also a bit sad. 

The protagonist, Arthur Dent, is like Brian: a victim of circumstance, pushed from pillar to post by idiots and monsters. Ford Prefect constantly explains the plot while Arthur Dent is dragged along, persisting with a simmering middle-class indignation that seems to last into eternity. But then, it’s a sitcom, so it’s not supposed to end. 

A movie is a different proposition. We need not get bogged down in talk of the ‘hero’s journey’ for long, but by the end of Life of Brian, our hero is only halfway through his quest. He has crossed the threshold by joining the People’s Front of Judea. But then what? He becomes disenchanted and realises he is going to have to let go of something in order to grow and move on. But he doesn’t. He’s tied to a cross, abandoned and left for dead. 

What other ending could there have been? I did have one idea. That Jesus, who is also in the movie, raises him from the dead. Brian says thank you, decides against becoming a disciple and makes a living as a cheesemaker. It’s a funny call-back, but still not satisfying, is it? 

The problem is that Brian doesn’t have any true desires deep down. He doesn’t have a quest. That’s because this movie started life as a parody of Jesus, whose story has its own natural beginning, middle, and surprising but satisfying end. But the Pythons found that the life of Christ is rather compelling and challenging when you take the time to read what he actually said and did, so the focus shifted. What if Brian were mistaken for a messiah? The target became organised religion, in a mistaken-identity comedy. 

Looking Back 

46 years later, does Life of Brian still feel like a searing satire on organised religion? Not really. Brian is not mistaken for the Messiah until almost 50 minutes in, when the movie is more than half over. There are religious themes and sketches before that point, such as the scene in which a blasphemer is to be stoned (by women in beards) and the ex-leper beggar healed by Jesus “without so much as a ‘by your leave’!”. 

Brian only starts preaching to avoid being noticed by the soldiers. A crowd gathers and we’re into the ‘consider the lilies’ sketch, which I’ve always found funny. (And I never felt this was threatening or undermining the original version spoken by Christ himself, although I think of it every time it’s read aloud in church). 

And then, the movie turns. Once the soldiers have gone, Brian stops talking. But this leaves the small crowd on a cliffhanger. They are now hanging on his every word. As he tries to get away, they turn his gourd and sandal into relics. He runs, but is found. We get the “very naughty boy” line, and Brian addresses a crowd in the ‘you are all individuals’ sketch. Soon afterwards he’s arrested, and that’s the end of that. The religious themes fall away. It is hardly a coruscating broadside against organised religion, although I understand why it might have felt like that at the time. 

Watching it now, when religion has declined for a further 46 years since 1979, the blows do not really land as they may have done at the time. This places further pressure on the ending, which does not deliver, as it was never intended to. 

But seeing the chipper, upbeat stoicism at the end through the eyes of my kids was really interesting. They know that Disney and Pixar, and now Disney Pixar, have been trying to tell kids for decades that you should ‘believe in yourself’. They are rightly sceptical about messages of self-belief. So it’s quite strange to see a movie with a religious theme end with a song and a whistle and the idea that you don’t need to believe in anything at all. But that you should smile anyway. 

What a curious conclusion. The fact that it felt so strange in 2025 might suggest that British optimism in the face of death and injustice isn’t really good enough anymore. Maybe this will encourage us to go back to the original. After all, ‘Blessed are the Cheesemakers’ is only funny if you know what Jesus actually said in the Sermon on the Mount. Maybe a new generation will want to take the time to read what he actually said and did.
