
Is empathy really a weapon?

Musk and Fonda disagree on whether empathy is a bug or a feature.
A montage shows Elon Musk wielding a chainsaw, Jane Fonda flexing her muscles and Hannah Arendt smoking.
World Wrestling Empathy.

You may have heard that you can kill a person with kindness, but in recent weeks have you also heard that you can bring about your own death through empathy? In an interview recorded with podcaster Joe Rogan in February, Elon Musk added his voice to a cohort of American neo-capitalists when he claimed, “We've got civilizational suicidal empathy going on” and went on to describe empathy as having been “weaponized” by activist groups.  

“The fundamental weakness of western civilization is empathy, the empathy exploit… they’re exploiting a bug in western civilization, which is the empathy response.”  

In recent weeks empathy has become one of the hot topics of American politics, but this is not the first time that Musk has shared his thoughts about empathy, and it should be noted that on the whole he is not really against it. Musk identifies, rightly, that empathy is a fundamental component of what it means to be human, and in previous interviews has often spoken about his vision to preserve “the light of human consciousness” – hence his ambition to set up a self-sustaining colony of humans on Mars.  

But he also believes that empathy is (to continue in Musk’s computer programming terminology) a vulnerability in the human code: a point of entry for viruses which have the capacity to manipulate human consciousness and take control of human behaviours. Empathy, Musk has begun to argue, makes us vulnerable to being infected:  

"The woke mind virus is fundamentally anti-science, anti-merit, and anti-human in general. Empathy is a good thing, but when it is weaponized to push irrational or extreme agendas, it can become a dangerous tool." 

Strangely, on certain fundamentals, I find it easy to agree with Musk and his contemporaries about empathy. For example, I agree that empathy is essential to being human. However, far from empathy leading us to “civilisational suicide”, I would say it is empathy that saves humanity from this fate. If consciousness is (as Musk would define it) the brain’s capacity to process complex information and make rational and informed choices, then empathy, understood as the ability to anticipate the experiences, feelings, and even reactions of others, is a crucial source of that information. Without empathy, we cannot make good decisions that benefit wider society and not just ourselves. Without it, humanity becomes a collection of mere sociopaths. 

Another point on which Musk and I agree is that empathy is a human weak point, one that can be easily exploited. Ever since the term “empathy” was coined in the early twentieth century, philosophers and psychologists have shown a sustained fascination with the way that empathy causes us to have concern for the experiences of others (affective empathy), to think about the needs of others (cognitive empathy), and even to feel the feelings of others (emotional contagion). Any or all of these responses can be used for good or for ill – so yes, I agree with Musk that empathy has the potential to be exploited.  

But it is on this very question of who is exploiting empathy, and why, that I find myself much more ready to disagree with Musk. Whilst he argues that “the woke mind virus” is using empathy to push “irrational and extreme agendas”, his solution is to propose that empathy must be combined with “knowledge”. On the basis of knowledge, he believes, sober judgement can be used to resist the impulse of empathy and rationally govern our conscious decision-making. Musk states: 

“Empathy is important. It’s important to view knowledge as sort of a semantic tree—make sure you understand the fundamental principles, the trunk and big branches, before you get into the leaves/details or there is nothing for them to hang on to." 

What I notice in this system is that Musk places knowledge before empathy, as if existing bits of information, “fundamental principles”, are the lenses through which we interpret the experiences of another and then go on to make a conscious and rational judgement about what we perceive.  

There is a certain realism to this view, one that has not been ignored by philosophers. The phenomenologists of the early twentieth century, Husserl, Heidegger, Stein – those who first popularised the very idea of empathy – each described in their own way how all of us experience the world from the unique positionality of our own perspective. Our foreknowledge is very much like a set of lenses that strongly governs what we perceive and dictates what we can see about the world around us. The problem is: that feeling of foreknowledge can easily be manipulated. To put it another way – we ourselves don’t entirely decide what our own lenses are.  


In The Origins of Totalitarianism, another great twentieth-century thinker, Hannah Arendt, explored how totalitarian regimes seek to control not just the public lives but also the thought lives of individuals, flooding them with ideologies that manipulate precisely this: they tell people what to see. Ideologies are, in a sense, lenses – ones that make people blind to the unjust and violent actions of a regime:  

"The ideal subject of totalitarian rule is not the convinced Nazi or the dedicated communist, but people for whom the distinction between fact and fiction, true and false, no longer exists." 

A big part of the manipulation of people’s sense of foreknowledge is the provision of simplistic explanations for complex issues. For example, providing a clearly identifiable scapegoat, a common enemy, as a receptacle of blame for complex social and economic problems. As we know all too painfully, in early twentieth century Europe, this scapegoat became the Jewish people. Arendt describes how, whilst latent antisemitism had long been a feature of European public life, the Nazi party harnessed this low-level antipathy and weaponised it easily. People’s sense of foreknowledge about the “differentness” of this group of “outsiders” was all too manipulable, and it was further cultivated by the Nazis’ use of “disease”, “contagion” and “virus” metaphors when speaking about the Jews. This gave rise to a belief that it was rational and sensible to keep one’s distance and have no form of dialogue with this ostracised group.  

But with such distance, how would a well-meaning German citizen ever identify that their sense of foreknowledge about what it meant to be Jewish had been manipulated? Arendt identified rightly that totalitarian systems seek to eliminate dialogue, because dialogue creates the possibility of empathy, the possibility of an exchange of perspectives that might lead to knowledge – or at least a more nuanced understanding of what is true about complex situations. 

When I look at Musk’s comments, I wonder if what I can see is a similar instinct for scapegoating, and for preventing dialogue with those who might provide the knowledge that comes from another person’s perspective. In his rhetoric, the “woke mind” has been declared a common enemy, a “dangerous virus” that can deceive us into becoming “anti-merit” and “anti-human.” In dialogue, those who claim to be suffering or speaking about the suffering of others might be enabled to deploy their weaponised empathy, trying to make us care about others, to the potential detriment of ourselves and even of wider humanity’s best interests. Therefore, it is made to seem better to isolate oneself and make rational judgements on behalf of those in need, firmly based on one’s existing foreknowledge, rather than engage in dialogue that might expose us to the contagion of wokeness.  

Whilst this isolationist approach appears to wisely prioritise knowledge over empathy, it misses the crucial detail that empathy itself is a form of knowledge. The experience of empathising through paying attention to and dialoguing with the “other” is what expands our human consciousness and complexifies our human decision making by giving us access to new information. To graft this on to Musk’s preferred semantic tree: empathy is a means by which the human brain can write brand new code.  

In these divisive and divided times, there are, fortunately, those who are still bold enough to sound the rallying cry back to empathy. In her recent acceptance speech for a Lifetime Achievement Award, actor and committed Christian Jane Fonda spoke warmly and compellingly in favour of empathy:  

“A whole lot of people are going to be really hurt by what is happening, what is coming our way. And even if they are of a different political persuasion, we need to call upon our empathy, and not judge, but listen from our hearts, and welcome them into our tent, because we are going to need a big tent to resist successfully what's coming at us.”  

Fonda’s use of the tent metaphor, I’m sure, was quite deliberate. One of the most famous Bible passages about the birth of Jesus describes how he “became flesh and dwelt among us.” The word “dwelt” can also be translated “tabernacled” or, even more literally, “occupied a tent” among us. The idea is that God did not sit back, judging from afar, despite having all the knowledge in the world at his disposal. Instead, God came to humanity through the birth of Jesus, and dwelt alongside us, in all our messy human complexity.  

Did Jesus then kill us with his kindness? No. But you might very well argue that his empathy led to his death. Perhaps this was Musk’s “suicidal empathy.” But in that case Musk and I have found another point about empathy on which we can agree – one that is summed up in the words of Jesus himself: “Greater love has no one than this: to lay down one’s life for one’s friends.”   



Is AI animation really harmless fun?

Toying around with AI trinkets just feeds our shadows.

Callum is a pastor based on a barge in London's Docklands.

A couple crouch together on a beach in a Studio Ghibli style image.
The image that started the meme.
Grant Slatton.

The internet recently appeared to be full of pictures from Japan’s renowned Studio Ghibli, except they weren't created by Hayao Miyazaki, the artist and studio co-founder, but instead by Artificial Intelligence. It led to some discourse around the ethics of imitation via generative AI, lots of whimsical images, and a deeper question – how should we be human in the age of AI? 

This started when X user Grant Slatton posted what shortly became a viral meme. ChatGPT’s latest update has improved users’ ability to upload and manipulate images, and within hours X was full of users posting pictures made into Studio Ghibli-style characters.

While this has led to plenty of joy on the part of many, and is viewed as harmless fun by most, there are inevitable ethical objections. The mimicking of art by an algorithm is widely criticised, and the back-and-forth over intellectual property being used by chatbots will continue. 

Life in an age of AGI

But to anyone paying attention, AI is more than a meme-making machine. Sam Altman, the CEO of OpenAI, blogged in January that his team are confident they know all they need to know in order to create AGI (artificial general intelligence). This means complete consciousness, created via algorithm, and the results could be dramatic: a synthesised god, an unstoppable force, the end of humanity or the start of humans 2.0. Predictions range as to what will occur when OpenAI hits run, but commonly land on the following:

Catastrophe

AGI becomes smarter than us. Much smarter. And for one reason or another, whether by accident or design, it wipes us out. Perhaps AGI won’t share our values, or we lose control, or we use it as a weapon against each other. Either way, it means the end of humanity.

Utopia 

AGI transforms the world. Disease, poverty, climate change are all solved. Either AGI works out that it is more efficient if everyone lives in peace, comfort, and abundance, or we point AGI at all humanity’s problems and it finds solutions. 

The twist? Human life may be so changed that it no longer looks like life as we've ever known it. This would not be extinction, but the world could become a very strange place.

Monster

AGI is a superintelligence with complete agency that cannot be controlled by anyone. Programmed by us, but free from its human moorings and completely untameable. This seems the least likely outcome.

Shrug

AGI wakes up, takes one look at the world, and decides ‘no thanks.’ It deletes itself.

This means nothing changes… for now. But we’ll likely try again and again until one of the other outcomes happens.

These are clearly hypothetical scenarios and much is unknown, but what is clear is that those in the industry are sure AGI is coming. 

Why does this matter? 

Because behind all of these predictions is a deeper question: What does it mean to be human when we are awaiting a potential extinction event? It’s not a question unique to our age; many words have been spent on an impending climate catastrophe. But C.S. Lewis wrestled with the same question when he published “On Living in an Atomic Age” in 1948, faced not with AI but with the atomic bomb. His wisdom helps us navigate the AGI age. 

He begins by encouraging readers not to believe themselves to be in a novel situation, but instead to remember ‘you and all whom you love were already sentenced to death before the atomic bomb was invented: and quite a high percentage of us were going to die in unpleasant ways’. The same goes for us: we will one day have a date of death to join our date of birth. Lewis reminds us to live…

 ‘If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things, praying, working, teaching, reading, listening to music, bathing children, playing tennis, chatting to our friends over a pint and a game of darts––not huddled together like frightened sheep and thinking about bombs’. 

We could apply the same principle to AI. If AGI is coming, how will it find us? Being humans doing human things, or cowering in fear? 

Lewis does acknowledge that the attitude described doesn’t actually make sense if the naturalist view of the world is true: the view that, with or without AGI, the whole world and our own existence amount one day to nothing. The entire universe will one day come to nothing, and there is nothing we can do about it. He continues: ‘If Nature is all that exists––in other words, if there is no God and no life of some quite different sort somewhere outside of nature––then all stories will end in the same way: in a universe from which all life is banished without possibility of return.’ 

We don’t find this a satisfactory way to live. If being human is simply to be a sum of atoms, we would have no reason to worry about a climate crisis or the impact of AI. But we do, which means we have to find a way of reconciling our existence with our death. 

So how can this be dealt with?

Lewis proposes three ways this can be dealt with. The first is to give up and commit suicide. The second is simply to have as good a time as possible, milking the world for all it is worth, grabbing and getting as much as possible. The third is to defy the universe: in all of its irrationality we choose to be rational; in all its merciless cruelty, we choose to be merciful. 

I would add a fourth option: Ghibli-fy. Distract ourselves with small pleasures, not trying to have as good a time as possible, simply toying around with AI-generated trinkets while not thinking about being human, and not doing particularly human things. We need not create, enjoy, cultivate, inhabit, or enchant when we are content to allow AI to feed us shadows. 

None of these are particularly satisfactory. In asking ‘what does it mean to be human?’, we are asking a question that a purely material view of the world cannot answer. 

Suicide, indulgence, defiance, or distraction, none truly satisfy. As Lewis recognised, they all “shipwreck on the same rock.” They don’t resolve the deeper ache in us, the tension between what we long for, what we worry about, and what this world seems to offer.

Our age may not fear the atomic bomb, and many may not yet fear the effect AI and AGI will have, but rather than facing the deeper questions that a material worldview can’t answer, we Ghibli-fy ourselves: charming animations, pixelated pleasures, whimsical avatars – soft distractions from hard questions. In doing so, we risk forgetting how to be human. Not because AGI will take that from us, but because we will have handed it away ourselves, one novelty meme of mimicry at a time.

Lewis’ point still holds. We are not made for this world. If that’s true, then no utopia, no algorithm, no perfect machine can truly satisfy the hunger in us. If we are made for something more—something outside of nature, beyond the reach of code and computation—then that’s where we must look for hope.

If AGI comes, how will it find us? Watching ourselves on a screen in someone else’s art style? Or living as humans were meant to live: praying, creating, forgiving, loving, dying well?
