
Here’s why AI needs a theology of tech

As AI takes on tasks once exclusively human, we start to doubt ourselves. We need to set the balance right.

Oliver Dürr is a theologian who explores the impact of technology on humanity and the contours of a hopeful vision for the future. He is an author, speaker, podcaster and features in several documentary films.

In the style of an icon of the Council of Nicaea, theologians look on as a cyborg and humanoid AI shake hands.
The Council of Nicaea, reimagined. Nick Jones/Midjourney.ai

AI is all the rage these days. Research branching into the natural and engineering sciences is thriving, and novel applications enter the market every week. Pop culture explores various utopian and dystopian visions of the future. A flood of academic papers, journalistic commentary and essays fills out the picture.

Algorithms are at the basis of most activities in the digital world. AI-based systems work at the interface with the analogue world, controlling self-driving cars and robots. They are transforming medical practices - predicting, preventing, diagnosing and supporting therapy. They even support decision-making in social welfare and jurisprudence. In the business sector, they are used to recruit, sell, produce and ship. Much of our infrastructure today crucially depends on algorithms. But while they foster science, research, and innovation, they also enable abuse, targeted surveillance, regulation of access to information, and even active forms of behavioural manipulation. 

In all these areas, AI takes on tasks and functions that were once exclusive to humans. For many, the comparison and competition between humans and (algorithmically driven) machines are obvious. As these lines are written, various applications characterized by their ‘generative’ nature (generative AI) are flooding the market. These algorithms, such as OpenAI’s GPT series, go further than anyone expected. Just a few years ago, it was hard to foresee that mindless computational programs could autonomously generate texts that appear meaningful, helpful, and in many ways even ‘human’ to a human conversation partner. Whether these innovations will have positive or negative consequences is still difficult to assess at this point.

For decades, research has aimed to digitally model human capabilities - our perception, thinking, judging and action - and allow these models to operate autonomously, independent of us. The most successful applications are based on so-called deep learning, a variant of AI that works with neural networks loosely inspired by the functioning of the brain. Technically, these are multilayered networks of simple computational units that collectively encode a potentially highly complex mathematical function.  

You don’t need to understand the details to realize that, fundamentally, these are simple calculations, cleverly interconnected. In this way, deep learning algorithms can identify complex patterns in massive datasets and make predictions. Despite the apparent complexity, no magic is involved here; it is simply applied mathematics.
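To see how unmagical this is, here is a minimal sketch in Python (the weights, sizes and numbers are invented purely for illustration and are not drawn from any particular system): a ‘deep’ network is, at bottom, a few layers of multiply-and-add steps with a simple nonlinearity in between.

```python
# A minimal, purely illustrative sketch (all numbers and layer sizes are invented):
# a "deep" network is layers of simple multiply-and-add steps chained together.
import numpy as np

rng = np.random.default_rng(0)

# Two layers: 3 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    """One pass through the network: weighted sums plus a simple nonlinearity."""
    h = np.maximum(0.0, W1 @ x + b1)  # layer 1: multiply, add, clip at zero (ReLU)
    return W2 @ h + b2                # layer 2: another weighted sum

print(forward(np.array([0.5, -1.0, 2.0])))  # just a number; no understanding involved
```

Real systems stack far more such layers and tune the weights on vast datasets, but the ingredients remain this same kind of simple arithmetic.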

Moreover, this architecture requires no ‘mental' qualities except on the part of those who design these programs and those who interpret their outputs. Nevertheless, the achievements of generative AI are astonishing. What makes them intriguing is the fact that their outputs can appear clever and creative – at least if you buy into the rhetoric. Through statistical exploration, processing, and recombination of vast amounts of training data, these systems generate entirely new texts, images and film that humans can interpret meaningfully.  

The remarkable and seemingly intellectual achievements of AI applications uniquely confront us with our self-understanding as humans: Is there still something that categorically distinguishes us from the machines we build? This question arises in the moral vacuum of current anthropology.

The rise of AI comes at a time when we are doubting ourselves. We question our place in the universe, our evolutionary genesis, our psychological depths, and the concrete harm we cause to other humans, animals, and nature as a whole. At the same time, the boundaries between humans and animals and those between humans and machines appear increasingly fuzzy.  

Is the human mind nothing more than the sum of information-processing patterns, comparable to similar processes in other living beings and in machine algorithms? Enthusiastic contemporaries believe our current AI systems are already worthy of being called ‘conscious’ or even ‘personal beings.’ Traditionally, such attributes would have been reserved for humans exclusively (and in some cases extended to higher animals). Our social, political and legal order, as well as our ethics, are fundamentally based on such distinctions.

Nevertheless, companies such as OpenAI see in their product GPT-4 the spark of ‘artificial general intelligence,’ a form of intelligence comparable to or even surpassing that of humans. Of course, such statements are part of an elaborate marketing strategy. The tradition dates back to John McCarthy, who coined the term “AI” and deliberately chose it over other, more appropriate descriptions, such as “complex information processing,” primarily because it sounded more fundable.

Such pragmatic reasons ultimately lead to an imprecise use of ambiguous terms such as ‘intelligence.’ If both humans and machines are indiscriminately called ‘intelligent,’ this generates confusion. Whether algorithms can sensibly be called ‘intelligent’ depends on whether the term refers to the ability to perform simple calculations and process data, to the more abstract ability to solve problems, or even to insightful understanding (in the sense of the Latin intellectus), which we typically attribute only to the embodied reason of humans.

However, this nuanced view of ‘intelligence’ was given up under the auspices of the quest for an objectively scientific understanding of the subject. New approaches deliberately exclude the question of what intelligence is and limit themselves to precisely describing how these processes operate and function.  

Current deep learning algorithms have become so intricate and complex that we can’t always understand how they arrive at their results. These algorithms are not transparent about how they reach a specific conclusion; hence, they are also referred to as black-box algorithms. Some strands in the cognitive sciences understand the human mind as a kind of software running on the hardware of the body. If that were the case, the mind could be explained through the description of brain states, just like the software on our computers.

However, these paradigms are questionable. They cannot explain what it feels like to be a conscious person, to desire some things and abhor others, and to understand when something is meaningful and significant. They have no grasp of human freedom and the weight of responsibility that comes with leading a life. All of these human capacities require, among other things, an understanding of the world that cannot be fully captured in words and that cannot be framed as a mathematical function.

Academic studies of embodied, embedded, enactive and extended cognition offer a more promising direction. Such approaches explore the role of the body and the environment in intelligence and cognitive performance, incorporating insights from philosophy, psychology, biology and robotics. They take seriously the role our body, as a living organism, plays in our capacity to experience, think and live with others. AI has no need for such a living body. This is a categorical difference between human cognition and AI applications – and it is currently not foreseeable that it could be levelled (at least not with current AI architectures). Therefore, in the strictest sense, we cannot really call our algorithms ‘intelligent’ unless we explicitly treat the word as a metaphor. AI can only be called ‘intelligent’ metaphorically, because these applications do not ‘understand’ the texts they generate, and those texts do not mean anything to them. Their outputs are not based on genuine insight into, or purposes within, the world in which you and I live. Rather, they are generated purely on the basis of statistical probabilities and data-based predictions. At most, they operate with the human intelligence that is buried in the underlying training data (which human beings have generated).

However, all of this generated material has meaning and validity only for embodied humans. Strictly speaking, only embodied, living and vulnerable humans really have problems that they solve or goals they want to achieve (with, for example, the help of data-based algorithms). Computers do not have problems, only unproblematic states they are in. Therefore, algorithms appear 'intelligent' only in contexts where we solve problems through them. 

AI does not possess intrinsic intelligence; it only simulates intelligence because of human causation. Therefore, it would be more appropriate to speak of ‘extended intelligence’: algorithms are not intelligent in themselves, but within the framework of human-machine systems they represent an extension of human intelligence. Better still would be to go back before McCarthy and talk about ‘complex information processing.’

Certainly, such a view is still controversial today. There are many philosophical, economic, and socio-political incentives to attribute human qualities to algorithms and, at the same time, to view humans as nothing more than biological computers. Such a view already shapes the design of our digital future in many places. Putting it bluntly, calling technology ‘intelligent’ makes money. 

What would an alternative, more holistic vision of the future look like, one that took the makeup of humanity seriously?

A theology of technology (Techniktheologie) tackles this question, ultimately placing it in the horizon of belief in God. However, it begins by asking how technology can be integrated into our lives in such a way that it empowers us to do what we truly want and what makes life better. Such an approach is neither for nor against technology but rather sober and critical in the analytical sense. Answering those questions requires a realistic understanding of humans, technology and their various entanglements, as well as agreement among plural societies on the goals and values that make a good life.

When we do something with technology, technology always also does something to us. Technology is formative, meaning it changes our experience, perception, imagination, and thus also our self-image and the future we can envision. AI is one of the best examples of this: designing AI is designing how people can interact with a system, and that means designing how they will have to adapt to it. Humans and technology cannot be truly isolated from each other. Technology is simply part of the human way of life.  

And yet, we also need to distinguish humans from technology despite all the entanglements: humans are embodied, rational, free, and endowed with incomparable dignity as images of God, capable of sharing values and articulating goals on the basis of a common (human) way of life. Even the most sophisticated deep learning applications are none of these. Only we humans live in a world where responsibility, sin, brokenness, and redemption matter. Therefore it is up to us to agree on how we want to shape the technologized future and what values should guide us on this path.  

Here is what theology can offer the development of technology. Theology addresses the question of the possible integration of technology into the horizon of a good life. Any realistic answer to this question must combine an enlightened understanding of technology with a sober view of humanity – seeing both humanity’s creative potential and its sinfulness and brokenness. Only through and with humans will our AI innovations genuinely serve the common good and, thus, a better future for all.

 

Find out more about this topic: Assessing deep learning: a work program for the humanities in the age of artificial intelligence 


Dispatches from the battlefield of imagination

The Age of Intellect has given way to the Age of Imagination.

Theodore is the author of the historical fiction series The Wanderer Chronicles.

A collage image shows a person holding their head, with a wash of warm colours over the scene.
Jr Korpa on Unsplash

Twenty years ago today, I crossed the threshold of the Christian faith. It was a baptism of fire in a more literal and mystical sense than I care to describe (or indeed would be able to). And unlike many, I really can point to a day and a time and a place.

That night, perhaps unlike CS Lewis, I was not quite “the most dejected and reluctant convert in all England.” But I was certainly the most bewildered. ‘What have I let myself in for?’ I wondered as I walked away from that church on a dark, wet January night. I was certain that in crossing that threshold I had entered a new world. Even if it was true, as I believed – or as I now knew – I sensed that it was dangerous too. There was a wildness to what I had just witnessed that was both thrilling and disconcerting. And yet, after that encounter, I could no more have turned away from what I had discovered than stop the world turning. As the mathematician Blaise Pascal discovered in his own ‘night of fire’ – “certitude, certitude!” is a very precious gift, and one worth holding on to.

Twenty years later, the landscape of faith in this country looks very different to the one in which I stumbled my way over the line. (Or through the back of the wardrobe might be a better metaphor.)

Back then, in 2005, the War on Terror was raging. If religion was discussed at all, it was generally reckoned a pretty rotten sort of institution. A regrettable historical hangover, an inheritance bequeathed to us by our more credulous ancestors of which we were doing well to divest ourselves, albeit too slowly for some. In this brave, new secular world, it was an increasingly commonplace view that religion ruined everything; beside which, it wasn’t true anyway.

These were the days when a certain form of atheism was ebullient and on the march. The Four Horsemen of Dawkins, Hitchens, Dennett and Harris held the cultural conch for a time, and they weren’t letting go. The God Delusion came out in October 2006, quickly followed by God Is Not Great in early 2007. Religion (not sin) was the root of all evil. Reason™ was the exclusive intellectual property of the unreligious mind, untainted as it was by visions of that laughably silly Sky-Fairy in the heavens. The battlefield of apologetics was a much-contested landscape at the time. Truth was the prize - which both sides could at least agree upon - and many a debating hall was filled to bursting to watch each side’s sharpest minds slug it out.

God only knows how, in such an intellectual atmosphere, I survived the shelling and carried through to the other side. But it’s telling that I had as my guide through the intellectual carnage not voices of that age, but rather voices from further back in time. My old friend CS Lewis, but also GK Chesterton, St Augustine, Dostoyevsky, and the potent words of the gospels to which they led me. Like wily old corporals, they saw me safe across No Man’s Land.

Even if I made it through, there’s no doubt it was the secularists who gained the cultural ground back then. That their intellectual case was unsound didn’t matter. Their propaganda was better – it was what people wanted to hear – and so Christianity was shoved out of the public square.

And now, two decades on, the war has moved into a very different theatre of operations. The Age of the Intellect has given way to the Age of Imagination as, unwittingly, the dry vacuum of secularism has sucked in contending spirits of another kind.

These days proponents and adversaries of the Christian faith jostle not in the dusty debating halls of our great universities, but on the battlefield of cultural consumption. Its topography formed of the movies we watch, the streaming channels we look at, the podcasts, music and media we endlessly gulp down.

Truth itself is no longer the prize, since the logical outworking of atheism’s ascendancy was to get what perhaps its proponents never bargained for: a post-truth age. What matters now is not so much what you believe, as what you attend to. The words and images which you consume. (Or which consume you.)

Walk the streets of any city and witness every passer-by glued to the screen nestled in their hand. Earphones clamped over their head. Distraction, saturation, enchantment: a cacophony of sound, a barrage of images overrunning the imagination to the point of madness. Until we have forgotten what it is like to sit patiently in silence with a still and empty mind. What it’s like to observe the world around us, to be available for the people around us.

But with what do we fill our imaginations now – that is the question? There lies the battle. 

And so we find ourselves now moving through a world in which our capacity to create and consume is loaded with inestimably high stakes. It harkens back to Dostoyevsky’s famous line in The Brothers Karamazov: “The awful thing is that beauty is mysterious as well as terrible. God and the devil are fighting there and the battlefield is the heart of man.”

He’s right. Although the heart, the mind, the imagination cannot in any true sense be de-coupled from one another. (Is ‘soul’ a more encompassing word?)

 And yet, of the two, the truly subversive combatant is God and not the devil. (Consider the Cross: the most subversive act in all reality.) It is God who is the invader here after all. He is the one taking back ground. His weapons are Truth, Beauty and Goodness. On the face of it, these are mild, even benign, abstractions. And yet in each is wrapped a potency as explosive as dynamite. Because with them, the spells that hold our imaginations captive can be broken. In an unguarded moment, He can slip through the enemy lines.

Witness the ear of culture’s recent harkening to the ancient truths and wisdom of our Judeo-Christian heritage. Nick Cave sings of a “Wild God” and to everyone’s surprise, people are starting to listen again. But he’s not the only one.

The inescapable wildness of God is that He cannot be contained; if His will is to break through, then He cannot be held back. As Mr. Beaver said of the lion Aslan, in answer to the fearful question, “Is he safe?”

“Who said anything about safe? ’Course, he isn’t safe. But he is good.”

As little image-bearers of this Creator, indeed as little creators in our turn, our creativity teeters on a knife-edge – it always has. An edge sharp enough to cleave heaven from hell. We’d do well to remember that. And, being image-bearers of this wild God, it is no wonder we have a wildness of our own.

Yep. Twenty years has already been one heck of an adventure. But I suspect it has only just begun.
