
Here’s why AI needs a theology of tech

As AI takes on tasks once exclusively human, we start to doubt ourselves. We need to set the balance right.

Oliver Dürr is a theologian who explores the impact of technology on humanity and the contours of a hopeful vision for the future. He is an author, speaker, podcaster and features in several documentary films.

In the style of an icon of the Council of Nicaea, theologians look on as a cyborg and a humanoid AI shake hands.
The Council of Nicaea, reimagined. Nick Jones/Midjourney.ai

AI is all the rage these days. Researchers in the natural and engineering sciences are thriving, and novel applications enter the market every week. Pop culture explores various utopian and dystopian visions of the future. A flood of academic papers, journalistic commentary and essays fills out the picture.

Algorithms underpin most activities in the digital world. AI-based systems work at the interface with the analogue world, controlling self-driving cars and robots. They are transforming medical practice - predicting, preventing, diagnosing and supporting therapy. They even support decision-making in social welfare and jurisprudence. In the business sector, they are used to recruit, sell, produce and ship. Much of our infrastructure today crucially depends on algorithms. But while they foster science, research and innovation, they also enable abuse, targeted surveillance, regulation of access to information, and even active forms of behavioural manipulation.

In all these areas, AI takes on tasks and functions that were once exclusive to humans. For many, the comparison and competition between humans and (algorithmically driven) machines seem obvious. As these lines are written, various applications characterized by their ‘generative’ nature (generative AI) are flooding the market. These algorithms, such as OpenAI’s GPT series, go further than anyone expected. Just a few years ago, it was hard to foresee that mindless computational programs could autonomously generate texts that appear meaningful, helpful, and in many ways even ‘human’ to a human conversation partner. Whether those innovations will have positive or negative consequences is still difficult to assess at this point.

For decades, research has aimed to digitally model human capabilities - our perception, thinking, judging and action - and allow these models to operate autonomously, independent of us. The most successful applications are based on so-called deep learning, a variant of AI that works with neural networks loosely inspired by the functioning of the brain. Technically, these are multilayered networks of simple computational units that collectively encode a potentially highly complex mathematical function.  

You don’t need to understand the details to realize that, fundamentally, these are simple calculations but cleverly interconnected. Thus, deep learning algorithms can identify complex patterns in massive datasets and make predictions. Despite the apparent complexity, no magic is involved here; it is simply applied mathematics. 
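To make that point concrete, here is a minimal, purely illustrative sketch in Python (not taken from any real system: the layer sizes and weights are invented, and real deep learning models have millions or billions of weights, tuned on data rather than written by hand). Each unit does nothing more than multiply, add and apply a simple threshold; the ‘depth’ comes from chaining such layers.

# A toy two-layer network: nothing but multiplications, additions and a
# simple non-linearity, chained together. The weights are invented for
# illustration; real systems learn theirs from massive datasets.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases):
    # Each unit: a weighted sum of its inputs, plus a bias, passed through the non-linearity.
    return [relu(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

hidden_weights = [[0.5, -1.2], [0.8, 0.3], [-0.4, 0.9]]   # 2 inputs -> 3 hidden units
hidden_biases = [0.1, -0.2, 0.0]
output_weights = [[1.0, -0.7, 0.5]]                       # 3 hidden units -> 1 output
output_biases = [0.2]

x = [0.6, 0.4]
hidden = layer(x, hidden_weights, hidden_biases)
output = layer(hidden, output_weights, output_biases)
print(output)  # the whole 'network' is just these chained sums and products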

Moreover, this architecture requires no ‘mental’ qualities except on the part of those who design these programs and those who interpret their outputs. Nevertheless, the achievements of generative AI are astonishing. What makes them intriguing is that their outputs can appear clever and creative – at least if you buy into the rhetoric. Through statistical exploration, processing and recombination of vast amounts of training data, these systems generate entirely new texts, images and films that humans can interpret meaningfully.

The remarkable and seemingly intellectual achievements of AI applications uniquely confront us with our self-understanding as humans: Is there still something that categorically distinguishes us from the machines we build? This question arises in the moral vacuum of current anthropology.

The rise of AI comes at a time when we are doubting ourselves. We question our place in the universe, our evolutionary genesis, our psychological depths, and the concrete harm we cause to other humans, animals, and nature as a whole. At the same time, the boundaries between humans and animals and those between humans and machines appear increasingly fuzzy.  

Is the human mind nothing more than the sum of information-processing patterns, comparable to similar processes in other living beings and in machine algorithms? Enthusiastic contemporaries believe our current AI systems are already worthy of being called ‘conscious’ or even ‘personal beings.’ Traditionally, such attributes were reserved exclusively for humans (and in some cases extended to higher animals). Our social, political and legal order, as well as our ethics, are fundamentally based on such distinctions.

Nevertheless, companies such as OpenAI see in their product GPT-4 the spark of ‘artificial general intelligence,’ a form of intelligence comparable to or even surpassing that of humans. Of course, such statements are part of an elaborate marketing strategy. The tradition dates back to John McCarthy, who coined the term ‘AI’ and deliberately chose it over other, more appropriate descriptions such as ‘complex information processing,’ primarily because it sounded more fundable.

Such pragmatic reasons ultimately lead to an imprecise use of ambiguous terms such as ‘intelligence.’ If both humans and machines are indiscriminately called ‘intelligent,’ this generates confusion. Whether algorithms can sensibly be called ‘intelligent’ depends on whether the term refers to the ability to perform simple calculations and process data, to the more abstract ability to solve problems, or even to the insightful understanding (in the sense of the Latin intellectus) that we typically attribute only to the embodied reason of humans.

However, this nuanced view of ‘intelligence’ was given up under the auspices of the quest for an objectively scientific understanding of the subject. New approaches deliberately exclude the question of what intelligence is and limit themselves to precisely describing how these processes operate and function.  

Current deep learning algorithms have become so intricate and complex that we cannot always understand how they arrive at their results. Their individual computations are transparent, but how they reach a specific conclusion is not; hence they are also referred to as black-box algorithms. Some strands in the cognitive sciences understand the human mind as a kind of software running on the hardware of the body. If that were the case, the mind could be explained through the description of brain states, just like the software on our computers.

However, these paradigms are questionable. They cannot explain what it feels like to be a conscious person, to desire some things and abhor others, and to understand when something is meaningful and significant. They have no grasp on human freedom and the weight of responsibility that comes with leading a life. All of these human capacities require, among other things, an understanding of the world that cannot be fully captured in words and cannot be framed as a mathematical function.

Academic work on embodied, embedded, enactive and extended cognition offers a more promising direction. Such approaches explore the role of the body and the environment in intelligence and cognitive performance, incorporating insights from philosophy, psychology, biology and robotics. They ask what role our body, as a living organism, plays in our capacity to experience, think and live with others. AI has no need for such a living body. This is a categorical difference between human cognition and AI applications – and it is currently not foreseeable that this difference could be levelled (at least not with current AI architectures). Therefore, in the strictest sense, we cannot really call our algorithms ‘intelligent’ unless we explicitly mean it as a metaphor: these applications do not ‘understand’ the texts they generate, and their outputs do not mean anything to them. Their results are not based on genuine insight into, or purposes for, the world in which you and I live. Rather, they are generated purely on the basis of statistical probabilities and data-based predictions. At most, they operate with the human intelligence that is buried in the underlying training data (which human beings have generated).

However, all of this generated material has meaning and validity only for embodied humans. Strictly speaking, only embodied, living and vulnerable humans really have problems that they solve or goals they want to achieve (with, for example, the help of data-based algorithms). Computers do not have problems, only unproblematic states they are in. Therefore, algorithms appear 'intelligent' only in contexts where we solve problems through them. 

AI does not possess intrinsic intelligence; it only simulates intelligence, and only because humans have made it do so. It would therefore be more appropriate to speak of ‘extended intelligence’: algorithms are not intelligent in themselves, but within the framework of human-machine systems they represent an extension of human intelligence. Better still would be to go back behind McCarthy and speak of ‘complex information processing.’

Certainly, such a view is still controversial today. There are many philosophical, economic, and socio-political incentives to attribute human qualities to algorithms and, at the same time, to view humans as nothing more than biological computers. Such a view already shapes the design of our digital future in many places. Putting it bluntly, calling technology ‘intelligent’ makes money. 

What would an alternative, more holistic view of the future look like – one that takes the makeup of humanity seriously?

A theology of technology (Techniktheologie) tackles this question, ultimately placing it in the horizon of belief in God. However, it begins by asking how technology can be integrated into our lives in such a way that it empowers us to do what we truly want and what makes life better. Such an approach is neither for nor against technology but rather sober and critical in the analytical sense. Answering these questions requires a realistic understanding of humans, technology and their various entanglements, as well as the agreement of plural societies on the goals and values that make for a good life.

When we do something with technology, technology always also does something to us. Technology is formative, meaning it changes our experience, perception, imagination, and thus also our self-image and the future we can envision. AI is one of the best examples of this: designing AI is designing how people can interact with a system, and that means designing how they will have to adapt to it. Humans and technology cannot be truly isolated from each other. Technology is simply part of the human way of life.  

And yet, we also need to distinguish humans from technology despite all the entanglements: humans are embodied, rational, free, and endowed with incomparable dignity as images of God, capable of sharing values and articulating goals on the basis of a common (human) way of life. Even the most sophisticated deep learning applications are none of these. Only we humans live in a world where responsibility, sin, brokenness, and redemption matter. Therefore it is up to us to agree on how we want to shape the technologized future and what values should guide us on this path.  

Here is what theology can offer the development of technology. Theology addresses the question of the possible integration of technology into the horizon of a good life. Any realistic answer to this question must combine an enlightened understanding of technology with a sober view of humanity – seeing both human creative potential and their sinfulness and brokenness. Only through and with humans will our AI innovations genuinely serve the common good and, thus, a better future for all.  

 

Find out more about this topic: Assessing deep learning: a work program for the humanities in the age of artificial intelligence 


What happens when perfect plans are outsmarted by the world?

There may be delight hiding in the doom.
Two people sit and stand next to a grand piano on a stage.
Striking the wrong note. Polyfilm.

If I’ve learned anything at all from decades working with businesses, it’s that they love an acronym. For a while the acronym we loved was VUCA. Not a nuclear jet nor a foot wart, VUCA emerged from the leadership theories of Warren Bennis and Burt Nanus to reflect the Volatility, Uncertainty, Complexity and Ambiguity of contemporary leadership. Nothing gets a roomful of executives nodding more sagely than the observation that we live in a VUCA world. For a while it felt almost sacrilegious not to invoke VUCA at some point when training leaders. It was comforting to tell people who were supposed to be shaping the world that everything was, well… a bit nuts.

But in the last few years VUCA has lost its shine. Things have started to get too crazy, a bit too VUCA for anyone’s liking. The wars, the plagues, the natural disasters, the political upheaval, the shaking of old certainties - it’s all gone a bit super-VUCA. The acronym that once reassured us that the world tends to resist our perfect plans has been outsmarted by the world it once captured. What are we to call this permacrisis, this omnishambles, this SNAFU, when super-mega-hyper-VUCA just sounds stupid? A new acronym was needed. Enter, stage left, BANI - the invention of futurologist Jamais Cascio to designate the way things are now: Brittle, Anxious, Non-Linear, Incomprehensible. We’ve had a romantic breakup with the world - you’re not like you used to be, you used to be fun, you’ve changed!

In March, Seen & Unseen celebrates its second anniversary. We are two years old. Old enough to appreciate a birthday cake, too young not to burn our fingers on the candles. I’ve been writing for the site since the beginning and to this day feel surprised that the quirky mishmash of a brainfart I keep writing is still accepted for publication each month. Either the folks at Seen & Unseen are pathologically kind to their own detriment, or my monthly missive of misery is not quite as off the wall as I fear it might be.

When I look at the world, I feel like we’re in a football match with no referee. I keep shouting foul and looking for someone to blow the whistle. It feels like the Tower of Babel. Even the technologies we thought would unify us have made us incomprehensible to one another. Like the scene in That Hideous Strength (the third book in C.S. Lewis’ Cosmic Trilogy) where a roomful of people is magically befuddled. They can no longer understand each other, and anyone who rises to take charge of the situation speaks gibberish that only adds volume to the babble. We don’t need any more opinions. We certainly don’t need any more people with misplaced certainty they have the answer. 

To be honest, I’ve just run out of ideas. I’m confused, baffled, clueless. But what embarrasses me most is not my helplessness, it’s my hope. For some reason, in jarring contrast to the circumstances, I can’t shake off the sense that ultimately all this will make sense, that breakdowns lead to breakthroughs. We’re in the unbearable part of the story where everything goes wrong, but if we put the book down now, we’ll think that was the end of it, when it was really just the set up. Pretty much everything I’ve written for Seen & Unseen over the last two years equates to: grief, this looks bad, but maybe there is more to it than it appears. 

There is another anniversary being celebrated this year. This January marked the fiftieth anniversary of a musical event so remarkable that a new dramatization of it premiered at the Berlin Film Festival to mark the occasion – the recording of The Köln Concert. (Watch the trailer of Köln 75.) If we are looking for a story of how beauty emerges from disaster, this one is worth telling. The event was organised by eighteen-year-old Vera Brandes, at that time the youngest concert promoter in Germany. She booked the Cologne Opera House, but given that it was a jazz concert, it was scheduled to begin at 11:30pm, after an opera performance earlier that evening.

The performer, jazz pianist Keith Jarrett, travelled to the concert from Zurich. But rather than flying, he sold his ticket for cash and opted to make the 350-mile trip north with his producer Manfred Eicher in a Renault 4. He had not slept well for several nights and arrived late in the afternoon in pain, wearing a back brace, only to discover that the opera house had messed up. The Bösendorfer 290 Imperial concert grand piano he had requested had been replaced by a much smaller Bösendorfer baby grand the staff had found backstage. The piano was intended for rehearsals only and was in poor condition: out of tune, with broken keys and pedals. It was unplayable. Jarrett tried it briefly and refused to perform. But Vera Brandes had sold 1,400 tickets for the evening. So, while he headed out to eat, she promised to get him the piano he required.

But it was not to be. The piano tuner who arrived to fix the baby grand told her that a replacement was impossible. It was January in northern Germany, the weather was wet and cold, and any grand piano transported in those conditions without specialist equipment would be damaged irreparably. They had to stick with the piano they had. Keith Jarrett’s meal didn’t go well either. There was a mix-up at the restaurant and their food arrived late. They barely had a chance to eat anything before returning to the venue. And when Jarrett saw the tiny defective Bösendorfer still on the stage, he again refused to play, only changing his mind because Eicher’s sound engineers were set up to record.

So the concert begins. A reluctant pianist – tired, hungry and in pain – sits at a ruined piano and records the bestselling solo piano album and bestselling jazz album. Ever. He improvises for over an hour. Starting tentatively, exploring the contours, befriending the limitations of his damaged instrument – learning its capabilities as he plays. But soon Jarrett is whooping, yelling and humming with delight as he extracts beauty from the brokenness. The limited register forces him to play differently. The disconnected pedals become percussion. By the time he reaches the encore, the joy of his playing is irrepressible – it sends shivers down the spine. And when he finishes, the applause goes on. Forever.

Jarrett pulled off an impossible feat and sealed his reputation as one of the greatest pianists of his generation. And I take heart from the event, because when I face the world, I sometimes imagine I feel like he did facing that piano. Tired and pained and doubtful that any good would come of playing. Can I order a new world, please? One more to my liking. One less likely to hurt. Yet I can’t quite shake off the intuition that there may be delight hiding in the doom, a treasure unearthed only by those willing to play.

This year I celebrate my own anniversary. I was born seven months after that fateful night in Cologne, in the equally salubrious town of Birkenhead. This is my fiftieth year too. The 3:15pm of life: too early to clock off, too late to start anything new. If living is a race between maturity and senility – gaining the wisdom to live before losing our marbles – then I’m odds-on for a photo finish. The evidence accumulates daily that I am likely to live longer than most of my vocabulary.  

Jung held a positive view of old age. He viewed it as the time for religion to ripen. And I can’t help agreeing with him. The older I get, the closer God seems. As muscle mass thins, the spirit deepens. Outwardly I’m fading away; inwardly I am being renewed day by day. This undoubtedly underlies my hope of beauty arising from our brokenness. In some small and barely noticeable way it is already happening in me. And I know I’m not alone in that.

Jung also wrote about Job - the Hebrew epic of suffering and restoration. Job’s life is like one of those old blues songs. He loses his wealth, his kids, his home, his health. He’s left broken, infested with sores and sitting in the dust. If you’ve been in a situation like that, you’ll know that even the most well-meaning friends can respond with surprising incompetence. Job’s friends are no different. They are true believers in Just-World Theory, the universal human tendency to assume that if bad things happen to us we must deserve them, we must have been bad. They live in a world ultimately governed by the kind of instant karma that causes car crashes on YouTube, and they’re keen to teach Job the way the world really is.

But Job resists them at every turn. He may have a proverbial reputation for patience, but he is anything but patient. I used to think this was a story about a man defending his innocence, but it’s much more than that. It’s the story of a man who goes through a breakup with God. He once lived a life of goodness, abundance, and gratitude in which he knew God as attentive and lovingly present. His friends are not just arguing that he’s being punished for some undisclosed sin, but that he’d always been wrong about God. He’d never known God - not really. The God they knew was volatile, capricious, arbitrary, vicious - like a rescue dog, you never quite knew when he would turn. And Job’s suffering was the proof of it.

The problem for Job is that he has no clue why he is suffering, but he will not let his friends obliterate the history he has shared with heaven. He knows God to be utterly faithful, constantly present, sublimely attuned, hugging the contours of his life as the sea hugs the shore. He wants nothing to do with a fickle god who falls asleep on the job or flounces off the first time we let him down. He rejects the here-again gone-again god of his friends. Sometimes, to know God, we need to reject those who claim to speak for God.  

The weird thing in Job’s story is that eventually God shows up. Over the course of the narrative, Job asks God 122 questions, and God responds with 61 of his own. The questions are rhetorical - they point to all the places God is present that Job isn’t, all the things God knows that Job doesn’t, all the things God has done that Job hasn’t. And by the end, Job is satisfied, his friends are dismissed, and his life is restored. God is as Job expected, intimately present but ultimately mysterious. He was right to reject the obtuse certainties of his friends and face the pain of the world with a cultivated sense of unknowing.

When I ponder how best to bring beauty out of a BANI world, how best to play its brokenness like Jarrett played his Bösendorfer, I am drawn to Job. He is a hero to all those who are sick of the answers of others but have no answers themselves. He is also a hero to those who, despite all evidence to the contrary, cannot smother their hope. Those who discern the leavening yeast sown in the hearts of humans across the planet; too inconspicuous to make the news, but destined to rise when the time is right.
