
Tech has changed: it’s no longer natural or neutral

The first in a three-part series exploring the implications of technology.

James is Canon Missioner at Blackburn Cathedral. He researches technology and theology at Oxford University.


My son was born in February last year and it seems that every day he is developing new skills, facial expressions, and adorable quirks. Just the other day he was playing with some wooden blocks and when they inevitably fell over, he let out the most adorable giggle. As you can guess, I immediately reached for my phone so that I could capture the moment. Moments like this happen all the time in the life of a modern parent: we want to share with our spouse, family, and friends, or simply capture the moment for ourselves because it’s something we treasure. And yet, in this series of articles I would like to consider this moment, and the thousands like it that take place in a technological society, and ask: is everything as benign as it seems?

There are two ideas that often come up whenever people talk about technology. The first is that technology is basically ‘neutral’: that technology only becomes good or bad depending on what you are doing with it. “Look at a hammer,” someone might say, “there is nothing intrinsically good or bad about this hammer; only the end result is good or bad, depending on whether I’m using it to hit nails or people!” On this reading of technology, the only important questions relate to the consequences of use.

If technology is neutral, then the primary concern for users, legislators and technologists is the consequences of technology, and not the technology itself. The only way to ensure that the technology is used for good is to ensure, somehow, that more good people will use the technology for good things than bad people who will use it for bad things. Often this idea presents itself as a conversation about competing freedoms: very few people (with some important exceptions, see this article from Ezra Klein) are debating whether there is something intrinsically problematic about the app formerly known as Twitter; most discussion revolves around how to maintain the freedom of good users while curtailing the freedom of bad users.

We assume that these tools of social interaction like Facebook and Instagram are, in and of themselves, perfectly benign. We are encouraged to think this by massive corporations who have a vested interest in maintaining our use of their platforms, and at first glance, they seem completely harmless: what could possibly be the problem with a website in which grandma can share photos of her cat? And while the dark underbelly of these platforms has violent real-world consequences – like the rise of antisemitism and anti-Muslim hatred – the solution is primarily imagined as a matter of dealing with ‘bad actors’ rather than anything intrinsically problematic with the platforms themselves. 

Jobs here draws a straight-line comparison between the bicycle and the PC. As far as Jobs is concerned, there is no qualitative difference between the two tools.

The second idea is related but somewhat different: advocates of modern technology will suggest that humanity has been using technology ever since there were humans, and therefore all this modern technology is not really anything to worry about. “Yes, modern technology looks scary,” someone might say, “but it’s really nothing to worry about; humans have been using tools since the Stone Age, don’t you know!” This view proposes that because hammers are technology, and all technology is essentially the same, there is no meaningful difference between a hammer and the internet, or between the internet and a cyborg.

This second idea tends to be accompanied by an emphasis on the slow and steady evolution of technology, and by highlighting the fact that at every major technological advancement there have been naysayers decrying the latest innovation. (Even Plato was suspicious of writing when it was invented.) Taken as part of a very long view of human history, the technological innovations of the last 100 years seem to be a normal and natural part of the evolution of our species, which has always set itself apart from the rest of the animal kingdom in its use of technology.

Steve Jobs gives a good example of this in an interview he gave about the development of the PC:

“I think one of the things that really separates us from the high primates is that we’re tool builders. I read a study that measured the efficiency of locomotion for various species on the planet. The condors used the least energy to move a kilometer. And humans came in with a rather unimpressive showing about a third of the way down the list… not too proud of a showing for the crown of creation… But then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And a human on a bicycle blew the condor away – completely off the top of the charts. 

And that’s what a computer is to me… It’s the most remarkable tool we’ve ever come up with… It’s the equivalent of a bicycle for our minds”  

Notice that Jobs here draws a straight-line comparison between the bicycle and the PC. As far as Jobs is concerned, there is no qualitative difference between the two tools: one is more complex than the other, but otherwise they are just technologies that expand human capacity. “A bicycle for our minds” is a fascinating way to describe a computer because it implies that nothing about our minds will be changed; they’ll just be a little bit faster.

And yet, despite the attempts of thought leaders like Jobs to convince us that modern technology is entirely benign, many of us are left with a natural suspicion that there is more going on. As a priest in the Church of England, I often have conversations with parishioners and members of the public who are looking for language, or a framework, to describe an instinctive recognition: that something has changed, fairly recently, about the nature of the technology we use and the way it influences our lives; that modern technology is not simply a natural extension of the sorts of tools humans have been using since the Stone Age; and that modern technology is not neutral, but has already had an effect in significant ways regardless of how we might use it. How do we respond to people as articulate and thoughtful as Steve Jobs, who make a compelling case that modern technology is neutral and natural?

I often have conversations with parishioners who are looking for language or a framework which describes the instinctive recognition that something has changed about the nature of the technology that we use, or the way that it influences our lives.

Thinking back to that moment with my son, when he giggles and I take a photo of him: at first glance it seems completely innocuous. But what resources are available if I did want to think more carefully about that moment, and the many like it which suffuse my daily life? Thankfully there is a growing body of literature from philosophers and theologians who are thinking about the impact of modern technology on the human condition. In the next two articles I would like to introduce the work of Martin Heidegger and outline his criticism of modern technology, showing how he challenges the idea that technology is simply a natural extension of human capacity or a neutral tool.

Heidegger is a complex character in philosophy and in Western history. There is no getting around the fact that he was a member of the Nazi Party. His politics have been widely condemned, and rightly so; nevertheless, his analysis of modern technology continues to provide useful insights. His claim is that modern technology essentially and inevitably changes our relationship with the world in which we live, and even with ourselves. It is this claim, and Heidegger’s suggested solution, that I will unpack in the next two articles.


Foundation shows you can’t ‘Ctrl+V’ a soul

A sci-fi classic unearths transhumanism’s flaws

Giles is a writer and creative who hosts the God in Film podcast.


One of the reasons that science fiction has had enduring popularity as a genre is its ability to illustrate thought experiments. Its power as a form of storytelling lies in the way it can attempt to answer questions that can’t even be asked in any other kind of fiction. One question that keeps coming up is: what if you could live forever, through technology?

One person to attempt an answer is Isaac Asimov, one of the early giants of the sci-fi genre. Born in 1920, Asimov arrived into a world that was rapidly changing, and yet his imagination was still able to outpace it. Much of what he is known for is his depiction of robots, with ‘Asimov’s laws of robotics’ influencing the depiction of androids in Star Trek: The Next Generation. However, direct adaptations of Asimov’s own work were few and far between; Robin Williams’ Bicentennial Man (1999) and Will Smith’s I, Robot (2004) were the best of the bunch. That is, until Apple TV began adapting Asimov’s Foundation.

Asimov’s Foundation books were written across the span of fifty years. The premise of the stories is that in a distant future, a galactic empire is beginning to fail and cannot be saved. The mathematician Hari Seldon develops the theory of psychohistory, which uses statistical laws to predict the future of large populations. In the wake of the empire’s fall, Seldon predicts a dark age lasting 30,000 years before a second empire arises. Seldon devises a plan to reduce this dark age to just one thousand years by preserving a ‘foundation’ of knowledge. The novels describe some of the dramatic events that frustrate, or result from, Seldon’s Plan. One of the features of the story that the Apple TV adaptation of Foundation focuses on is attempted immortality.

Foundation gives us three depictions of ‘immortality’. Firstly, Seldon orchestrates having his consciousness eventually uploaded into the Prime Radiant, a super-computer, in order to shepherd his plans beyond the limits of his own human lifespan. Secondly, his protégé Gaal Dornick is, throughout the first season, put into a cryo-sleep that lets her move into the future without ageing. Finally, the characters of Dawn, Day and Dusk attempt immortality through cloning. The tyrannical emperor Cleon decided that the only person fit to succeed him was… himself. So he creates a revolving triumvirate of his own clones: Brother Day, a Cleon in his prime; Brother Dusk, an ageing Cleon who serves to advise Day; and Brother Dawn, a young Cleon being trained to succeed Brother Day. This ‘genetic dynasty’ has been ruling with an iron fist for 400 years by the start of the series.

These interpretations of immortality grant each character the ability to shape and curate history in a way that no one human could ever achieve. But as there’s no drama without conflict, Foundation shows us the downsides of this kind of immortality. Firstly, Gaal’s version: being frozen in cryo-sleep for decades might literally extend her life, but from Gaal’s perspective it is no longer than it would have been otherwise. Whilst she does get to see history play out, she loses connections with people like her family and her lover Raych, and she is unable to build the life she would have planned for herself.

No-one mourns your absence because there’s an identical copy of you still walking about. 

Seldon’s version of immortality is flirted with by tech bros and transhumanists like Peter Thiel. The idea of a computer with the processing power to replicate a human brain turns up in numerous stories, but it’s another false immortality. Firstly, the original Hari Seldon still dies, and the ‘digital version’ eventually stored in the Prime Radiant is merely a copy. We might not think much of copying and pasting a document or file on our computer, but it doesn’t quite work the same for human beings. A copy is not the same as the original. You can’t ‘Ctrl+V’ a soul. In addition, we find out at one point that, due to a mistake, Hari’s digital self has been trapped in darkness, fully conscious but with no rest, no distractions and no way of communicating with the outside world, for 148 years. This naturally drags Hari into madness.

Lastly, the Empire run by the clones Dawn, Day and Dusk suffers much the same problem as the other two. It’s not real immortality, as each clone eventually dies. But in many ways it’s even worse than death: no-one mourns your absence because there’s an identical copy of you still walking about. This is a troubling trope, because a protagonist dying and being returned via cloning is often presented as a ‘resurrection’. It has been used as a story arc in the X-Men comics and in Peter Capaldi’s era of Doctor Who, with very little outcry from their respective fandoms, possibly because the thought that the producers have canonically killed the main character and replaced them with an exact copy is simply too uncomfortable to consider. In Foundation itself, the clones are judged by their fidelity to the original (a cold and petty despot) and any deviation is met with a death sentence. Whilst clones may be one way to rule a sci-fi galactic empire, it’s possibly their inability to adapt to changing circumstances that contributes to the fall of civilisation.

The great irony in all of these interpretations is this: you are only immortal to those observing you, and an immortality that relies on perspective is not really immortality at all.

It seems that hard science fiction and ancient Greek myth can, at times, overlap in their focus. Viewed in one light, Asimov’s Foundation series can be seen as one long retelling of the story of Prometheus, who steals fire from the gods to give it as a gift to mankind, only to be punished by Zeus. Asimov appears to be telling us that mankind can’t accurately predict the future and can’t live forever. So despite being a staunch atheist, one of the great minds of science fiction might be suggesting that immortality belongs squarely in the realm of the divine.
