
Oppenheimer’s Tower of Babel

Overwhelmed by the cinematic experience of Oppenheimer, Daniel Kim reflects on director Christopher Nolan's powerful modern mythmaking.

Daniel is an advertising strategist turned vicar-in-training.

The modern Prometheus and the mythmaker. Cillian Murphy, playing Robert Oppenheimer, stands next to director Christopher Nolan.
Universal Pictures.

The opening weekend for Oppenheimer has come and gone, and the response has been almost unanimously glowing, even gushing.

And truly, the film is a technical masterpiece, demonstrating that director Christopher Nolan is working at the height of his powers.

The pitch-perfect performances from Cillian Murphy and the impressively star-studded cast, the transcendent yet intimate cinematography, Ludwig Göransson’s hauntingly triumphant score, and the remarkable pacing despite its three-hour runtime make for perfectly dialled-in cinema.

Some may struggle with the dialogue-heavy, time-skipping narrative flow of the film, made particularly difficult by the inexplicable voice-muddying sound mix that seems to plague many of Nolan’s recent films. Despite these flaws, however, Oppenheimer is certainly one of the key cinematic moments of 2023. I don’t think I can add anything profoundly new to the gallons of electronic ink already spilt reviewing this film.

Instead, what I can speak to is the most bizarre experience I had as the film came to a close. As the final shot of the biopic reached its climax and cut to black, I found myself suddenly and involuntarily dissolving into tears. I left the film feeling horrified yet inspired, sickened yet soaring, revelling in the triumph of an underdog technological victory while being confronted with the banal depravity of mankind. So much brilliance, yet so much brokenness. It evoked such a maximalist emotional response in me that the only appropriate response my body could come up with was to weep. So… I am by no means an objective reviewer.

Nolan’s depiction of the first nuclear test... is more like a religious epiphany than a run-of-the-mill movie explosion.

To call Oppenheimer a ‘biopic’ would be like calling the book of Genesis a biography about Abraham. Nolan’s Oppenheimer takes the form of a Myth - ‘Myth’ not in the sense of fiction, but in the sense that J.R.R. Tolkien or Carl Jung meant it: a universal narrative that perfectly captures the spirit of the age. And in 2023, apocalyptic anxiety is very much in the air.

Neither Nolan nor the biography from which the film is adapted - American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer - shies away from the mythical and religious texture inherent in the story of the Manhattan Project and the development of the atomic bomb.

Oppenheimer is Prometheus - who “stole fire from the gods and gave it to man. For this he was chained to a rock and tortured for eternity”. In fact, the film opens with this quote in white text over a slow-motion nuclear detonation, intertwining Oppenheimer’s life with that of the Greek Titan Prometheus, who, having given technological fire to humankind, is chained to a rock by Zeus and has his liver devoured by an eagle for the rest of time.

Oppenheimer is also the Hindu god Krishna, who originally said the now infamous line from the Bhagavad Gita, “I am become Death, the destroyer of worlds” - the phrase Oppenheimer utters at the first test of his invention.

He is the man who decided to name the first test after the Triune Christian God - the Trinity Test. The irony is thick. The great creator God of Christianity is represented by the great destroyer of worlds - the atomic bomb. In fact, Nolan’s depiction of the first nuclear test is more like a religious epiphany than a run-of-the-mill movie explosion. Some viewers might be disappointed by the impressionistic and almost surreal way the Trinity test is depicted at the climax of the film. Yet, I found the moment almost mystical. The blinding light of atomic devastation is the blinding light of divine glory.

1940s New Mexico becomes the arena for the 21st Century’s struggle against itself and its fraught relationship with technology and morality.

The film doesn’t allow you to extricate the history from the myth, the science from the mystical, or the past from the present. The film explores the particular historical knots that you would expect from a film about Oppenheimer: the equal pride and guilt of the scientists who worked on the bomb post-Hiroshima; the banality of the American military-industrial complex; the post-war Soviet nuclear threat; and the enigma of the man himself. There are some very powerful scenes that explore these themes to sickening and gut-wrenching effect. Yet, Nolan is fully aware that his film is in dialogue with contemporary existential discussions about the dangers of AI, the fear of climate and political apocalypse, and the moral implications of technological progress at all costs.

The star-studded cast is not only hugely impressive but also has the strange effect of continually dragging the historical context of Oppenheimer right into 2023. Nolan has used his considerable clout to draw together a cast of some of the most recognisable and celebrated icons of the 21st century, from Cillian Murphy, Robert Downey Jr, and Emily Blunt to Gary Oldman, Rami Malek, and Matt Damon. Only Christopher Nolan could cast a leading man like Gary Oldman and give him 10 lines to say in a three-hour film.

This creates a movie where the most iconic faces of our time come together to play their part in this myth. 1940s New Mexico becomes the arena for the 21st Century’s struggle against itself and its fraught relationship with technology and morality.  

In this way, Oppenheimer is more than just a cautionary tale from history. It becomes an icon of our time, in the religious sense: a manifestation of a universal story set in a particular context.

What will three hundred years of so-called progress, technology, and political theory culminate in? We have no idea.

Many of us will be familiar with Joseph Campbell’s The Hero with a Thousand Faces, a work of comparative mythology which describes the archetypal hero found in the world’s myths - The Hero’s Journey. Campbell calls this the Monomyth - the one story which every story is about. A hero ventures forth from his common world, encounters adversity and his inner demons, wins a decisive victory against the forces of death, and returns from this adventure forever changed and with the power to bestow wisdom on his community. This is Luke Skywalker, Aladdin, and Harry Potter, but it is not Oppenheimer.

Christopher Nolan seems to have stumbled upon a different monomyth with his biopic. The story of a human community earnestly seeking technological knowledge of the heavenly powers, desiring to harness it, and ultimately unleashing it upon the earth only to discover its civilisation-destroying power. It’s the monomyth of the Tower of Babel: technology reaching to the heavens and resulting in the destruction of the city. But instead of a tower of brick and mortar, Oppenheimer’s tower is a pillar of fire and nuclear ash. What seems like grand progress in one moment is, in the next, annihilation.

Nolan’s decision to make Oppenheimer a biopic has the uneasy effect of intermingling the myths of The Hero’s Journey and the Tower of Babel. Oppenheimer is the protagonist who undergoes all the key beats of the Hero’s Journey. Yet it is precisely this aspirational adventure that culminates in The Tower of Babel. It’s as if the film is saying that those who have most embodied The Hero’s Journey in our Modern Age are those who have also destroyed the world. Oppenheimer is but one example in a long line of such technological geniuses.

There is a haunting line in the film where one of Oppenheimer’s colleagues refuses to work with him on the bomb. He says:

“I don’t want the culmination of three hundred years of physics to be a weapon of mass destruction.”

This is still the anxiety that typifies our technological and political moment today. The only difference is that we don’t know what we are culminating in. What will three hundred years of so-called progress, technology, and political theory culminate in? We have no idea.

Maybe this is what struck such a deep, primal chord in me as the credits rolled.


Apple’s AI ads show how we can lose our moral skills

Apple Intelligence promises to safeguard us from the worst of ourselves.

Jenny is training to be a priest. She holds a PhD in law and writes at the intersection of law, politics and theology.

Dour Dale contemplates AI.
Apple.

“I got through the three stages of the interview process, and they said I had done well, but they aren’t hiring any computer science graduates anymore. AI is cheaper, and faster.”

John*, a bright 24-year-old coder and philosopher, has just completed an MSc in Computer Science from one of the top universities in the UK. And he can’t find a job. AI has outcompeted him. In a couple of years, he says, entry-level work in computer science will require a PhD. What about in ten years, or twenty? Will the only people able to work in the field effectively have to be geniuses to keep up with a technology that’s metastasizing at a rate of knots? It felt painfully ironic to be discussing over coffee the death of an entire sector of meaningful jobs less than a week after the new Labour government announced its plans to “turbocharge” AI (Artificial Intelligence) as the saviour of the nation’s economy. What are we willing to sacrifice in the name of “national renewal”?

As worrying as John’s story is, there is much more than jobs – and the skills, knowledge and social relations tied up in them – on the line when it comes to AI. The alleged saviour of the nation’s economy is after your soul as well, it turns out.  

This came home to me starkly over the Christmas holidays with the new advertisements for Apple Intelligence tools on the MacBook Pro. In the first ad, “Lazy Lance” – a procrastinating business professional – sheepishly shifts in his seat. He has been asked to make a presentation on the new business prospectus, and he has been caught out, unprepared. But he is saved at the last moment. The click of the “Key Points” button in the new Apple Intelligence software on his MacBook Pro provides him with the critical breakdown summary needed to avoid becoming the pariah of the team. The sheepish shifting turns to a smug smile: his substandard performance has evaded detection with the ready aid of Apple Intelligence.

In the second ad, “Dour Dale” – a disgruntled office worker – writes a scathing email to the “monster” who has devoured his pudding from the communal fridge. Before clicking send on this missive, he raises his eyes from the raging words on his screen to see a pious teddy bear holding a love-heart which says “find your kindness.” This moral cue from a cuddly toy prompts Dale to select the “Friendly” button from the dropdown list in Apple Intelligence writing tools, which immediately converts his childish strop over pudding thievery into a mature response in which he kindly expresses his disappointment along with a polite request for the pudding to be returned. The only moral effort required of Dale is the click of a button; Apple Intelligence sorts out the bile and the blame and re-presents his pudding fury in a professionally palatable manner.

These advertisements for AI tools are designed to provoke an empathetic laugh. Who indeed can honestly say they have never arrived unprepared at a meeting, or at least mentally penned a vindictive response to the tiniest office slight?

AI is poised to strike at the root of our individual virtue, by inserting itself as an emotional regulator. 

However, underneath the easy laughs, I felt a profound sense of dis-ease watching them. They indicate just how far AI has already begun to penetrate our moral economy. When a technological tool is inserted to disguise or translate social interactions into new terms, our moral relations with each other are deceptively smoothed to avoid the social and personal costs of shame (e.g. Lance using “Key Points” rather than owning up to his poor work ethic) and anger (e.g. Dale using “Friendly” mode to transform his email from raging diatribe into courteous appeal). As appealing as it sounds to have automatic tech weapons to tranquilise social and emotional bugbears, these tools also remove daily opportunities to learn how to live and work together.

For example, as excruciating as it is to be the person who came to the meeting woefully under-prepared, embarrassment can be a very useful corrective in learning the art of time management as well as the virtue of pulling our weight. We probably all know from school what it feels like to work on a group project when only half the group cares about the outcome. If we do not learn the moral skills of responsibility and accountability in our formative years, the workplace becomes a vital school for virtue in adulthood, where we learn what it means to be trusted and how to be worthy of it. As in the case of Lance, AI now offers us everyday tools which help us to avoid embarrassment and effectively hide our lack of effort, taking the edge off the very exposure that would help us to grow in both skill and trustworthiness. This is not propaganda for the Protestant work ethic but rather a top survival tip for the human soul in a hyper-capitalist economy. Maintaining the moral significance of our labour as a school of formation in self-respect and trustworthiness does not baptise the extractive and exploitative nature of many workplaces. Rather, it offers a means of resistance to the soul-destroying idea that we are all replaceable, that nothing really matters and that our efforts are simply grist for the eternal and insatiable mill of market supply and demand.

In the case of Dale, Apple Intelligence goes beyond protecting users from social shame: it promises to safeguard us from the worst of ourselves. Of the two Apple Intelligence advertisements, I find Dale’s to be even more pernicious because it evidences how AI is poised to strike at the root of our individual virtue by inserting itself as an emotional regulator. Rather than doing the difficult work of redrafting the email himself, which would require Dale to critically examine his own reactions and put himself into the shoes of the recipient, Apple Intelligence offers to do it automatically. By short-circuiting Dale’s process of recognising the emotions underneath his rage, the tool robs him of a critical opportunity to learn for himself what his anger is all about and, even more than that, to practise the art of genuine self-mastery in conflict. The AI tool smooths out the conflict on the surface, while Dale is presumably left with all those rotten feelings built up and unprocessed, because he has not had to do the difficult work of converting his aggressive monologue into a respectful dialogue with another human being.

The insertion of these seemingly innocuous AI tools into the spheres of our everyday, workaday lives introduces new means and modes of (self-)deception in our habits, where we are able to hide much more easily from honest moral evaluation of the quality of our work as well as our interpersonal relationships. It also risks new heights of moral “de-skilling” over time, as our social and economic world becomes so deeply mediated by technology that we may eventually trust Apple, rather than our own discernment, as the gold standard of professional behaviour. The soul – our very interiority – is the new frontier of economic expansion, in the name of securing Britain’s place in the ranks of global competitiveness.

To AI enthusiasts, all this may sound like Luddite naysaying. Many people find AI tools helpful in the process of research and preparation. Even some priests, I have recently discovered, use ChatGPT to aid sermon-writing. And what, as a priest friend asked me recently, is the problem with these time-saving tools, as long as we use them critically?

Apart from the obvious answer that AI can’t be trusted to get all the facts right, let alone the word of God, this question presumes that human beings’ critical faculties and moral compasses remain fundamentally unaffected by these new technologies. It may be true for older generations (whose formative years occurred well before the meteoric surge of digital technology in the early 2000s) that technology continues to function as an optional extra to make life that little bit easier. But for Gen Z and below, and even for some younger millennials, intuitive digital technologies have become so fused with the ways that we learn and process information that they are no longer – if they ever were – neutral tools to improve our lives. We are only now learning about the extent to which social media has thoroughly penetrated the emotional worlds of teenagers, with severe consequences for their wellbeing. What will be the consequences for the generations to come, when AI becomes so integrated into the emotional and social fabric of our lives that we cannot quite tell where we end and it begins? What is at risk in “turbocharging” AI is not only a huge number of jobs, but also the atrophy of our moral muscles as AI encroaches further into the heartlands of what it means to be human. While a few tech elites may always stay one step ahead of AI and keep it safely in the toolbox rather than the driver’s seat, most of us time-poor plebeians are being taken for the ride of our lives.

 

 *Name changed for anonymity. 
