Essay
AI
Culture
9 min read

Here’s why AI needs a theology of tech

As AI takes on tasks once exclusively human, we start to doubt ourselves. We need to set the balance right.

Oliver Dürr is a theologian who explores the impact of technology on humanity and the contours of a hopeful vision for the future. He is an author, speaker, podcaster and features in several documentary films.

In the style of an icon of the Council of Nicaea, theologians look on as a cyborg and humanoid AI shake hands
The Council of Nicaea, reimagined.
Nick Jones/Midjourney.ai

AI is all the rage these days. Research branches across the natural and engineering sciences are thriving, and novel applications enter the market every week. Pop culture explores various utopian and dystopian visions of the future. A flood of academic papers, journalistic commentary and essays fills out the picture.  

Algorithms are at the basis of most activities in the digital world. AI-based systems work at the interface with the analogue world, controlling self-driving cars and robots. They are transforming medical practice: predicting, preventing, diagnosing and supporting therapy. They even support decision-making in social welfare and jurisprudence. In the business sector, they are used to recruit, sell, produce and ship. Much of our infrastructure today crucially depends on algorithms. But while they foster science, research and innovation, they also enable abuse, targeted surveillance, regulation of access to information, and even active forms of behavioural manipulation. 

The remarkable and seemingly intellectual achievements of AI applications uniquely confront us with our self-understanding as humans: Is there still something that categorically distinguishes us from the machines we build? 

In all these areas, AI takes on tasks and functions that were once exclusive to humans. For many, the comparison and competition between humans and (algorithmically driven) machines are obvious. As these lines are written, various applications are flooding the market, characterized by their ‘generative' nature (generative AI). These algorithms, such as OpenAI’s GPT series, go further than anyone expected. Just a few years ago, it was hard to foresee that mindless computational programs could autonomously generate texts that appear meaningful, helpful, and in many ways even ‘human’ to a human conversation partner. Whether these innovations will have positive or negative consequences is still difficult to assess at this point.  

For decades, research has aimed to digitally model human capabilities (our perception, thinking, judging and action) and allow these models to operate autonomously, independent of us. The most successful applications are based on so-called deep learning, a variant of AI that works with neural networks loosely inspired by the functioning of the brain. Technically, these are multilayered networks of simple computational units that collectively encode a potentially highly complex mathematical function.  

You don’t need to understand the details to realize that, fundamentally, these are simple calculations but cleverly interconnected. Thus, deep learning algorithms can identify complex patterns in massive datasets and make predictions. Despite the apparent complexity, no magic is involved here; it is simply applied mathematics. 
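The point that these are "simple calculations but cleverly interconnected" can be made concrete in a few lines. The sketch below is purely illustrative (the weights and layer sizes are invented, not taken from any real system): each unit computes a weighted sum of its inputs plus a bias, squashed through a simple nonlinearity, and stacking such layers yields the complex functions described above.

```python
import math

def layer(inputs, weights, biases):
    # Each unit: a weighted sum of the inputs plus a bias,
    # passed through tanh, a simple squashing nonlinearity.
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny two-layer network: 2 inputs -> 3 hidden units -> 1 output.
# Real deep learning networks chain millions of such units and
# learn the weights from data rather than fixing them by hand.
def hidden(x):
    return layer(x, [[0.5, -1.2], [0.8, 0.3], [-0.6, 0.9]], [0.1, -0.2, 0.0])

def output(h):
    return layer(h, [[1.0, -0.5, 0.7]], [0.05])

print(output(hidden([0.4, -0.7])))  # a single value between -1 and 1
```

Nothing here is more than arithmetic; the apparent sophistication of deep learning comes from scale and from tuning the weights against massive datasets.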

Moreover, this architecture requires no ‘mental' qualities except on the part of those who design these programs and those who interpret their outputs. Nevertheless, the achievements of generative AI are astonishing. What makes them intriguing is the fact that their outputs can appear clever and creative – at least if you buy into the rhetoric. Through statistical exploration, processing, and recombination of vast amounts of training data, these systems generate entirely new texts, images and film that humans can interpret meaningfully.  
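The "statistical exploration, processing, and recombination" of training data can also be illustrated with a deliberately crude sketch. The toy model below is nothing like a modern generative system (it only counts which word follows which in a tiny made-up corpus), but it shows the principle: new text is produced from observed statistics of old text, with no understanding involved.

```python
import random
from collections import defaultdict

# A toy corpus standing in for "vast amounts of training data".
corpus = "the cat sat on the mat the cat saw the dog".split()

# Record which word follows which: a crude statistical model of the text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, length, seed=0):
    # Repeatedly pick a statistically plausible next word.
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # no observed continuation; stop early
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

The output is "entirely new" in the sense that the exact sequence may never appear in the corpus, yet the program has no insight into what any of it means. Generative AI operates on the same statistical principle, at vastly greater scale and sophistication.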

The remarkable and seemingly intellectual achievements of AI applications uniquely confront us with our self-understanding as humans: Is there still something that categorically distinguishes us from the machines we build? This question arises in the moral vacuum of current anthropology. 

Strictly speaking, only embodied, living and vulnerable humans really have problems that they solve or goals they want to achieve... Computers do not have problems, only unproblematic states they are in. 

The rise of AI comes at a time when we are doubting ourselves. We question our place in the universe, our evolutionary genesis, our psychological depths, and the concrete harm we cause to other humans, animals, and nature as a whole. At the same time, the boundaries between humans and animals and those between humans and machines appear increasingly fuzzy.  

Is the human mind nothing more than the sum of information processing patterns comparable to similar processes in other living beings and in machine algorithms? Enthusiastic contemporaries believe our current AI systems are already worthy of being called ‘conscious’ or even ‘personal beings.’ Traditionally, such attributes would have been reserved exclusively for humans (and in some cases also for higher animals). Our social, political, and legal order, as well as our ethics, are fundamentally based on such distinctions.  

Nevertheless, companies such as OpenAI see in their product GPT-4 the spark of ‘artificial general intelligence,’ a form of intelligence comparable to or even surpassing humans. Of course, such statements are part of an elaborate marketing strategy. This tradition dates back to John McCarthy, who coined the term “AI” and deliberately chose it over other, more appropriate descriptions like “complex information processing,” primarily because it sounded more fundable. 

Such pragmatic reasons ultimately lead to an imprecise use of ambiguous terms, such as ‘intelligence.’ If both humans and machines are indiscriminately called ‘intelligent,’ this generates confusion. Whether algorithms can sensibly be called ‘intelligent’ depends on whether this term refers to the ability to perform simple calculations and process data, the more abstract ability to solve problems, or even the insightful understanding (in the sense of the Latin intellectus) that we typically attribute only to the embodied reason of humans.  

However, this nuanced view of ‘intelligence’ was given up under the auspices of the quest for an objectively scientific understanding of the subject. New approaches deliberately exclude the question of what intelligence is and limit themselves to precisely describing how these processes operate and function.  

Current deep learning algorithms have become so intricate and complex that we can’t always understand how they arrive at their results. Their code is transparent, but how they reach a specific conclusion is not; hence, they are also referred to as black-box algorithms. Some strands in the cognitive sciences understand the human mind as a kind of software running on the hardware of the body. If that were the case, the mind could be explained through the description of brain states, just like the software on our computers.  

However, these paradigms are questionable. They cannot explain what it feels like to be a conscious person, to desire things, to abhor other things, and to understand when something is meaningful and significant. They have no grasp of human freedom and the weight of responsibility that comes with leading a life. All of these human capacities require, among other things, an understanding of the world that cannot be fully captured in words and cannot be framed as a mathematical function.  

Academic work on embodied, embedded, enactive, and extended cognition offers a more promising direction. Such approaches explore the role of the body and the environment in intelligence and cognitive performance, incorporating insights from philosophy, psychology, biology, and robotics. They ask what role our body, as a living organism, plays in our capacity to experience, think and live with others. AI has no need for such a living body. This is a categorical difference between human cognition and AI applications, and it is currently not foreseeable that it could be levelled (at least not with current AI architectures). 

Therefore, in the strictest sense, we can call our algorithms ‘intelligent' only metaphorically: these applications do not ‘understand' the texts they generate, and those texts mean nothing to them. Their results are not based on genuine insight into, or purposes for, the world in which you and I live. Rather, they are generated purely on the basis of statistical probabilities and data-based predictions. At most, they operate with the human intelligence that is buried in the underlying training data (which human beings have generated).  

However, all of this generated material has meaning and validity only for embodied humans. Strictly speaking, only embodied, living and vulnerable humans really have problems that they solve or goals they want to achieve (with, for example, the help of data-based algorithms). Computers do not have problems, only unproblematic states they are in. Therefore, algorithms appear 'intelligent' only in contexts where we solve problems through them. 

 When we do something with technology, technology always also does something to us. 

AI does not possess intrinsic intelligence; it only simulates it, through human causation. It would therefore be more appropriate to speak of ‘extended intelligence': algorithms are not intelligent in themselves, but within the framework of human-machine systems they represent an extension of human intelligence. Better still, we could go back behind McCarthy and speak of 'complex information processing.’ 

Certainly, such a view is still controversial today. There are many philosophical, economic, and socio-political incentives to attribute human qualities to algorithms and, at the same time, to view humans as nothing more than biological computers. Such a view already shapes the design of our digital future in many places. Putting it bluntly, calling technology ‘intelligent’ makes money. 

What would an alternative, more holistic view of the future look like that took the makeup of humanity seriously?  

A theology of technology (Techniktheologie) tackles this question, ultimately placing it in the horizon of belief in God. However, it begins by asking how technology can be integrated into our lives in such a way that it empowers us to do what we truly want and what makes life better. Such an approach is neither for nor against technology but rather sober and critical in the analytical sense. Answering these questions requires a realistic understanding of humans, technology, and their various entanglements, as well as the agreement of plural societies on the goals and values that make a good life.  

When we do something with technology, technology always also does something to us. Technology is formative, meaning it changes our experience, perception, imagination, and thus also our self-image and the future we can envision. AI is one of the best examples of this: designing AI is designing how people can interact with a system, and that means designing how they will have to adapt to it. Humans and technology cannot be truly isolated from each other. Technology is simply part of the human way of life.  

And yet, we also need to distinguish humans from technology despite all the entanglements: humans are embodied, rational, free, and endowed with incomparable dignity as images of God, capable of sharing values and articulating goals on the basis of a common (human) way of life. Even the most sophisticated deep learning applications are none of these. Only we humans live in a world where responsibility, sin, brokenness, and redemption matter. Therefore it is up to us to agree on how we want to shape the technologized future and what values should guide us on this path.  

Here is what theology can offer the development of technology. Theology addresses the question of the possible integration of technology into the horizon of a good life. Any realistic answer to this question must combine an enlightened understanding of technology with a sober view of humanity – seeing both human creative potential and their sinfulness and brokenness. Only through and with humans will our AI innovations genuinely serve the common good and, thus, a better future for all.  

 

Find out more about this topic: Assessing deep learning: a work program for the humanities in the age of artificial intelligence 

Article
Culture
Digital
Film & TV
Work
7 min read

What my film about the prodigal son really means

Our relentless focus on productivity devalues the things that make us human

Emily is a designer and animator at the Theos think tank.

An animated man runs through a jungle.
In Sync with the Sun.
Theos.


In his 2021 book Four Thousand Weeks: Time Management for Mortals, Oliver Burkeman observes that an obsession with productivity doesn’t give us more control over our lives: ‘instead, life accelerates, and everyone grows more impatient. It’s somehow vastly more aggravating to wait two minutes for the microwave than two hours for the oven - or ten seconds for a slow-loading web page versus three days to receive the same information by post.’ 

With technologies like artificial intelligence rapidly accelerating our lives, this constant demand to squeeze more into our time is not limited to the mundane tasks that we have to do and wish we didn’t. It seeps into what we want to do and indeed must do in order to flourish: creating art, spending time in community, and caring for others. The problem is that these things cannot be measured in productivity metrics because they inherently do not function in that way. How do you measure how ‘productive’ a conversation is? Or a work of art? Artists such as Vincent van Gogh or Emily Dickinson didn’t see their influence in their own lifetime. 

The more we measure our lives in productivity metrics, the more we devalue the things that make us human, ultimately making our lives and the world around us increasingly artificial. This is the basis of my recent film, In Sync with the Sun, which is a short animation about the rhythms of activity and rest that are written into our world, and what happens when an obsession with productivity takes over.  

I wrote the initial script for the film after a period of burnout. I was fully in the “make the most of every second” mindset, which left me feeling exhausted and confused about where my value resides. In response, I began researching the sleep cycles of various animals, and I was liberated by surprising details, such as the fact that lions, which we see as mighty and majestic animals, sleep for around 21 hours a day. Even creatures like jellyfish, which don’t even have brains as far as we know, still have cycles of rest. Every living thing thrives in these rhythms of activity and rest, even down to plants and minuscule organisms. Our whole world is built on this pattern, in sync with the sun. Yet for us humans, our rhythms have been broken by technology, leaving us confused about our limitations and what we should do with our short lives.  

The film begins in nature, deep in the jungle where some leopards are sleeping. But the tranquility is abruptly interrupted by the voice-over declaring, “the war against sleep began when artificial light broke into the night.” Brilliant white light breaks up the deep blues and purples on screen, until the screen is filled with blinding white. I wanted it to feel like that moment you peer at your phone in the middle of the night - the pain of your pupils trying to adjust. If you think about it, for 99.9 per cent of human history, our eyes would have never had to do that - until now.  

Artificial light wasn’t powerful enough to change that. Instead, it’s given us an unquenchable guilt about how we use our time. 

With his invention of the light bulb, Thomas Edison was determined to banish the night, and the limitations it enforced on us. Edison was known for being fiercely obsessed with productivity and, as a result, was an anti-sleep warrior who believed,

“There is really no reason why men should go to bed at all.”

As someone living a century on, I find it baffling to imagine that humans should eradicate sleep entirely. Perhaps that is because, just 100 years later, we are seeing the results that sleep loss and overworking can have on our physical health and wellbeing. Maybe we cannot supersede nature after all, since we are an embedded part of it. It seems that “Sabbath” rest is written into our world and into our humanity. Artificial light wasn’t powerful enough to change that. Instead, it’s given us an unquenchable guilt about how we use our time. Now we decide when the day ends, so whoever can rest the least wins. 

The battle is still raging; incandescent bulbs only set aflame that root desire to be increasingly productive. The hamster wheel is spinning uncontrollably, and we must keep up. So, what do we do? The attempt to remove the limitations outside of us has revealed that they are in fact inside of us too. Therefore, the only way to keep up is to remove the human from the hamster wheel altogether. The failure of artificial light leads to the birth of artificial minds.  

 As a creative, this is what frustrates me most about artificial intelligence; that it is mostly being driven by this quest to bring everything under the reign of productivity. It goes without saying that this is greatly needed in some areas of society. Just like artificial light, it can and will do a lot of good in the world. However, when the obsession with productivity is prioritised over human flourishing, that’s when we know there is a big problem with how we view our lives.  

Thinking back to the examples of van Gogh and Emily Dickinson: what is lost when we don’t allow space for artists, carers, mothers, or any skilled role that requires an element of patience? Personally, I can’t force creative inspiration; instead it comes at me, often at times when I’m not looking for it. Sometimes that inspiration leads directly to an instant idea, but most often it’s a vague idea I jot down, which later life experiences and opportunities then build on, forming it into something bigger and more in-depth. This could be compared to a role or situation that requires relationship building. Sometimes there are moments of instant bonding and “productive” progress in relationships, but more often it’s a complex process in which external experiences or changes outside of our control may unexpectedly deepen understanding between people after long periods of frustration. 

In my animation, I used the metaphor of a butterfly to illustrate this sentiment. After the character realises he is not made for a life of relentless productivity, he steps out of the black and white skyscraper into the lush wilderness. A butterfly lands on his productivity badge and the voice-over says, “You’re not a machine.” I imagine the Creator saying this to the loved creation. Creatures like butterflies seem completely unproductive by our human standards. They take weeks to form in the chrysalis and exist in the world for less time than that. Yet they are a source of wonder and beauty for anyone who has the privilege of seeing one up close. A reminder that nature is not in a rush. Where AI is concerned, however, speed and profit are the focus of desire. But looking at the world around us - that we are a part of - it’s clear that not everything can or should be valued by these limiting metrics alone. 

The overarching narrative of In Sync with the Sun is loosely inspired by the biblical story of the prodigal son. The main character has travelled far away from his home in pursuit of success, and he eventually realises that his master does not love him. At the end he comes home again, finding connection in community and in the good rhythm of productivity and rest that he came from. I wanted the film to address the issues that an unhealthy obsession with productivity can cause, and instead evoke a desire to accept and live more in sync with the boundaries and rhythms that are embedded in the natural world we are a part of.  

The film ends with the line, “The only thing that can stay awake is not awake at all.” In the midst of the changing world of AI, humans might be tempted to measure our productivity levels in comparison to these machines. However, technologies always raise the productivity bar higher and higher, and one day we need to accept that we simply aren’t going to be able to reach it. We don’t sit apart from nature like technology does, so let’s stop resenting that, and instead celebrate it. To quote Oliver Burkeman again,  

“the more you confront the facts of finitude instead - and work with them, rather than against them - the more productive, meaningful and joyful life becomes.” 
