
Tech has changed: it’s no longer natural or neutral

The first in a three-part series exploring the implications of technology.

James is Canon Missioner at Blackburn Cathedral. He researches technology and theology at Oxford University.

A caveman holding a hammer looks at a bench on which are a broken bicycle and a laptop.
Nick Jones/Midjourney.ai.

My son was born in February last year, and it seems that every day he is developing new skills, facial expressions and adorable quirks. Just the other day he was playing with some wooden blocks, and when they inevitably fell over he let out the most adorable giggle. As you can guess, I immediately reached for my phone so that I could capture the moment. Moments like this happen all the time in the life of a modern parent: we want to share them with our spouse, family and friends, or simply capture them for ourselves because they are something we treasure. And yet, in this series of articles I would like to consider this moment, and the thousands like it that take place in a technological society, and ask: is everything as benign as it seems?

There are two ideas that often come up whenever people talk about technology. The first is that technology is basically ‘neutral’: it only becomes good or bad depending on what you do with it. “Look at a hammer,” someone might say, “there is nothing intrinsically good or bad about this hammer; only the end result is good or bad, depending on whether I’m using it to hit nails or people!” On this reading of technology, the only important questions relate to the consequences of use.

If technology is neutral, then the primary concern for users, legislators and technologists is the consequences of technology, and not the technology itself. The only way to ensure that the technology is used for good is to ensure, somehow, that more good people will use the technology for good things than bad people using it for bad things. Often this idea presents itself as a conversation about competing freedoms: very few people (with some important exceptions; see this article from Ezra Klein) are debating whether there is something intrinsically problematic about the app formerly known as Twitter; most discussion revolves around how to maintain the freedom of good users while curtailing the freedom of bad users.

We assume that these tools of social interaction like Facebook and Instagram are, in and of themselves, perfectly benign. We are encouraged to think this by massive corporations who have a vested interest in maintaining our use of their platforms, and at first glance, they seem completely harmless: what could possibly be the problem with a website in which grandma can share photos of her cat? And while the dark underbelly of these platforms has violent real-world consequences – like the rise of antisemitism and anti-Muslim hatred – the solution is primarily imagined as a matter of dealing with ‘bad actors’ rather than anything intrinsically problematic with the platforms themselves. 


The second idea is related but somewhat different: advocates of modern technology suggest that humans have been using technology for as long as there have been humans, and that all this modern technology is therefore nothing to worry about. “Yes, modern technology looks scary,” someone might say, “but it’s really nothing to worry about; humans have been using tools since the Stone Age, don’t you know!” This view proposes that because hammers are technology, and all technology is the same, there is no difference between a hammer and the internet, or between the internet and a cyborg.

This second idea tends to be accompanied by an emphasis on the slow and steady evolution of technology, and by highlighting the fact that every major technological advancement has had its naysayers decrying the latest innovation. (Even Plato was suspicious of writing when it was invented.) Taken as part of a very long view of human history, the technological innovations of the last 100 years seem to be a normal and natural part of the evolution of our species, which has always set itself apart from the rest of the animal kingdom in its use of technology.

Steve Jobs gave a good example of this in an interview about the development of the PC:

“I think one of the things that really separates us from the high primates is that we’re tool builders. I read a study that measured the efficiency of locomotion for various species on the planet. The condors used the least energy to move a kilometer. And humans came in with a rather unimpressive showing about a third of the way down the list… not too proud of a showing for the crown of creation… But then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And a human on a bicycle blew the condor away – completely off the top of the charts. 

And that’s what a computer is to me… It’s the most remarkable tool we’ve ever come up with… It’s the equivalent of a bicycle for our minds”  

Notice that Jobs here draws a straight-line comparison between the bicycle and the PC. As far as Jobs is concerned, there is no qualitative difference between the two tools: one is more complex than the other, but otherwise they are just technologies that expand human capacity. “A bicycle for our minds” is a fascinating way to describe a computer because it implies that nothing about our minds will be changed; they’ll just be a little bit faster.

And yet, despite the attempts of thought leaders like Jobs to convince us that modern technology is entirely benign, many of us are left with a nagging suspicion that there is more going on. As a priest in the Church of England, I often have conversations with parishioners and members of the public who are looking for language, or a framework, to describe an instinctive recognition: that something has changed, fairly recently, about the nature of the technology we use and the way it influences our lives; that modern technology is not simply a natural extension of the sorts of tools humans have used since the Stone Age; and that it is not neutral, but has already had an effect in significant ways, regardless of how we might use it. How do we respond to articulate and thoughtful people such as Steve Jobs, who make a compelling case that modern technology is neutral and natural?


Thinking back to that moment with my son, when he giggles and I take a photo of him, at first glance it seems completely innocuous. But what resources are available if I did want to think more carefully about that moment, and the many like it, which suffuse my daily life? Thankfully there is a growing body of literature from philosophers and theologians who are thinking about the impact of modern technology on the human condition. In the next two articles I would like to introduce the work of Martin Heidegger, outline his criticism of modern technology, and show how he challenges the idea that technology is simply a natural extension of human capacity or a neutral tool.

Heidegger is a complex figure in philosophy and in Western history. There is no getting around the fact that he was a supporter of the Nazi Party during the Second World War. His politics have been widely condemned, and rightly so; nevertheless, his insights into the nature of modern technology remain useful to this day. His claim is that modern technology essentially and inevitably changes our relationship with the world in which we live, and even with ourselves. It is this claim, and Heidegger’s suggested solution, that I will unpack in the next two articles.


Turning yourself into an AI Barbie is the worst, most wasteful way of using artificial intelligence

Climate change trend warriors seem to forget just how much energy this technology uses 

Jean is a consultant working with financial and Christian organisations. She also writes and broadcasts.

An AI-generated image of a Barbie-like toy.
AIn a Barbie world.

If you spend any time on social media, you have probably seen the ChatGPT Barbie trend: resembling packaged toys, the AI depicts you as a doll or action figure. At first, I thought I was only seeing it because of the LinkedIn algorithm. But then I started to see articles in my feed from mainstream media outlets teaching people how to do it.

Generally speaking, I am not a trend follower. I am one of those annoying people who doesn’t get involved with what everyone is doing just because everyone is doing it. Thankfully, I don’t suffer from FOMO (the fear of missing out), and I don’t think I am swayed much by peer pressure. But I like to stay informed about what is going on, so that I have something to talk about when I meet people in new settings, and to remain relevant. So, when this started popping up in my feeds, I investigated it, and I was pleasantly surprised.

I am not anti-AI. I have embraced and seen the benefits of AI in my own life (this sounds a bit weird, but I think you get my point). I understand and accept that it can, has and will continue to improve productivity and creativity. I use ChatGPT all the time for social media content and captions, brainstorming, titles for articles, coding problems, research and language translations.

But like many, I have long been sceptical about the growth of AI use and the viability of its long-term sustainability. I wouldn’t describe myself as a climate warrior, but I do believe that we have a responsibility to ourselves and the generations after us to use the finite resources of the planet frugally. The AI-powered Barbie trend throws that out of the window.  

The current Trump administration has facilitated a shift away from ESG (environmental, social and governance) targets in the world of business. For the most part, criticism of this in the media (social and mainstream) has focused on DEI targets. But perhaps because of slow economic growth, and because this began before the Trump administration took office, the move away from environmental targets (what I would call environmental stewardship, or frugality) has received limited coverage.

I have never understood why proponents of the climate emergency have made themselves bedfellows with, and in some cases wholehearted supporters of, the AI revolution. A typical data centre uses between 11 and 19 million litres of water per day just to cool its servers: the equivalent of the daily use of a small town of 30,000-50,000 people. The International Energy Agency (IEA) predicts that by 2030 electricity demand from data centres will double globally, equating to slightly more than the entire electricity consumption of Japan. This growth will be driven by the use of AI in the US, China and Europe. That is why vocally supporting the climate emergency while advocating an escalated transition to AI, as the UK government currently does, seems paradoxical to me.

This isn’t hyperbole: Sam Altman, CEO of OpenAI, recently tweeted asking people to reduce their use of ChatGPT’s image generator because OpenAI’s servers were overheating.

That is why I have been pleasantly surprised by some of the coverage of the Barbie trend. Arguments are now being made more loudly about the true cost of unlimited AI expansion.

I am not against progress or AI expansion entirely, and I have some support for the argument that governments have pursued net zero policies at a rate that is impractical, expensive and unviable for the average consumer in Western democracies. However, the Barbie trend reveals our tendency to choose waste and consumption for fleeting pleasure. Many of us have probably just thought, ‘It’s just a bit of harmless fun’. But the truth is that it isn’t; we simply can’t see the damage we are doing to the environment. That’s without going into the financial and privacy costs associated with the AI revolution. It really is a case of that age-old adage: ‘Out of sight, out of mind’.

The challenge, now that we know, is: what do we do? Do we continue to be part of wasteful AI trends? Or do we use AI to add value, increase productivity and solve problems?
