Review

Are we being anxious about anxiety?

Haidt's diagnosis of a 'doomed' youth is off. Instead, we should learn from them.

It’s common these days to hear about social anxiety, health anxiety, or climate anxiety – but a new pathology is beginning to emerge: anxiety anxiety. This is where parents, politicians, academics, or just members of society in general, start to get anxious about the fact that everybody is anxious. Diagnosis rates of clinical anxiety have risen steeply in the past decade, and numbers, we assume, don’t lie.

Of the many outcomes of ‘anxiety anxiety’, one will be people who, with the very best of intentions, want to suggest solutions. One such person is Jonathan Haidt, with his book The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness.

From the spaceman on the cover to the opening vignette about sending our children to Mars, Haidt’s premise is clear: smartphones are the alien invaders of our society. These electronic parasites are feasting on the brain matter of our young people, directly causing what is now an epidemic of clinical anxiety and depression. 

I’m quite ready to read a sensible analysis of the impact of smartphone culture on mental health, so I was disappointed to find that Haidt’s book falls so far short of that. From a scientific perspective, the argument is a barrage of statistics, arranged to the tune of ‘correlation equals causation’. Given Haidt’s seniority in his field, this approach is surprisingly unsubtle, and it has already drawn heavy criticism from his academic peers. Numbers, it seems, do lie – or at least they can be easily curated to prove your point.

But even if we accept Haidt’s point – which is that rates of smartphone use (particularly social media) and rates of young people being diagnosed with anxiety disorders have increased over the same time period – what can be done? Haidt’s solution is to ban young people from owning smartphones at all until the age of 14, and from using social media until the age of 16, or, even better, 18. In this way, owning and managing one’s own device and its access becomes a rite of passage into adulthood. But note: whilst parents are urged to implement these unyielding boundaries for their children’s device habits, Haidt does not ask grown-ups to make any changes to their own. Adults can continue with their current norms of smartphone use, ostensibly because their brains are fully developed, and they therefore have the maturity to handle their own risk to mental health.


Of course, it does not suit Haidt’s argument to analyse why adult mental health is also seeing an increase in diagnoses of anxiety disorders. It may be true to say that rates are rising more quickly amongst young people, but there is still no consensus as to how much of that can be attributed to young people simply being better informed about mental health and more empowered to seek help than the generations before them. Notably, young people today have a language to talk about anxiety that simply didn’t exist when I was a teenager in the 1990s, and ironically enough, it is social media that has made that possible. Although suicide rates are on the rise, they are still significantly lower among young people than among those aged over 35, and it should be noted that a proven pathway to suicide prevention amongst young people is access to self-help via smartphone apps.

So whilst I am quite ready to believe that smartphone culture is one of many factors impacting the health and wellbeing of young people today, characterising smartphones as alien invaders, or as invasive parasites selectively bred by Silicon Valley billionaires to infest the minds of our young people, is a disingenuous response – and one that only serves to increase parental anxiety by implying that smartphones are sly, sentient beings beyond our control.

Smartphones are not aliens – they were designed by humans, and are willingly bought by humans, in response to the human need to communicate and a perfectly natural human desire to seek out entertainment and culture. True, technology and software are developed by billionaires, and marketing and algorithms can influence our choices – but at the end of the day, any developer will tell you that products only ever evolve in response to what the market demands. Adults: we have the money in our pockets; we are the market. 


In other words, we (the adults) selectively bred these ‘aliens’ ourselves – and rather than try (and no doubt fail) to lock up our experiment in a lab (or, as Haidt suggests, a lockable phone-pouch), we have more than enough agency to continue developing smartphones into devices that meet needs and provide entertainment in the way they were always meant to do. In his defence, Haidt does refer to this approach briefly, but only with a view to making phones for ‘us’ (the adults) and not ‘them’ (the young people) by removing content that appeals to a younger audience. To me, this feels like victim-shaming the youth of today for inheriting a problem created by their parents.

One day when Jesus was teaching a crowd of followers, he advised them: “Why do you look at the speck of sawdust in your brother’s eye and pay no attention to the plank in your own eye? How can you say to your brother, ‘Let me take the speck out of your eye,’ when all the time there is a plank in your own?” His point was about hypocrisy – it is far easier to judge someone else’s behaviour than it is to take responsibility for our own. Where did any of us last read or hear terrifying information about the decline in young people’s mental health? Was it on our smartphones?

Here are a few things that Haidt’s selection of statistics doesn’t say about the youth of today. They are the most compassionate and empathetic generation we have seen for decades (Konrath et al., 2023). They are able to wait longer for rewards than their parents’ generation (Protzko, 2020), and they are also less lazy, less narcissistic, more cooperative and more intelligent (Kriegel, 2016). In addition, whilst obvious damage is done by ‘filters’ on Instagram photos, making some young people strive for unattainable standards of beauty, it was the previous generation of smartphone users who began this trend, and it is the current generation of young people who can be credited with the #nofilter and #nomakeup countertrends. This same generation is now fuelling the rise of insurgent social media sites such as BeReal, which emphasise the importance of authentic photos and meaningful connection with friends online.

Overall, perhaps instead of restricting and controlling our young people’s online lives, as Haidt would have us do, we ought to be talking to them? As a more empathetic and intelligent generation, it seems they could probably teach us a few things about how to harness smartphone culture and develop it towards solutions to the problems that we ourselves created. 

Article

Are AI chatbots actually demons in disguise?

Early Christian thinkers explain chatbots better than Silicon Valley does

Gabrielle Thomas is Assistant Professor of Early Christianity and Anglican Studies at Emory University


AI chatbots. They’re here to save us, aren’t they? Their designers argue so, fervently. There’s no doubt they are useful. Some, like EpiscoBOT (formerly known as ‘Cathy’), are designed for those asking ‘life’s biggest questions’. ‘Our girlfriend Scarlett’ is an AI companion who “is always eager to please you in any way imaginable.” So why not defend them?

 They offer companionship for the lonely, spark creativity when we run on empty, and make us more productive. They also provide answers for any and every kind of question without hesitation. They are, in short, a refuge. Many chatbots come with names, amplifying our sense of safety. Names define and label things, but they do far more than that. Names foster connection. They can evoke and describe a relationship, allowing us to make intimate connections with the things named. When the “things” in question are AI chatbots, however, we can run into trouble.  

According to a study conducted by researchers at Stanford University, chatbots can contribute to “harmful stigma and dangerous responses.” More than this, they can even magnify psychotic symptoms. The more we learn, the more we are beginning to grasp that much of the world offered by AI chatbots is an illusory one.

Early Christian thinkers had a distinct category for precisely this kind of illusion: the demonic. They understood demons not as red, horned bodies or fiery realms, but as entities with power to fabricate illusions—visions, appearances, and deceptive signs that distorted human perception of reality. Demons also personified pride. As fallen angels, they turned away from truth toward themselves. Their illusions lured humans into sharing that pride—believing false greatness, clinging to false refuge. 

Looking back to early Christian approaches to demonology may help us see more clearly what is at stake in adopting AI chatbots without question.

  

According to early Christian thinkers, demons rarely operated through brute force. Instead, they worked through deception. Athanasius of Alexandria (c. 296–373) was a bishop and theologian who wrote Life of Antony. In this, he recounted how the great desert father was plagued by demonic visions—phantoms of wild beasts, apparitions of gold, even false angels of light. The crucial danger was not physical attack but illusion. Demons were understood as beings that manufactured appearances to confuse and mislead. A monk in his cell might see radiant light and hear beautiful voices, but he was to test it carefully, for demons disguise themselves as angels. 

Evagrius Ponticus (c. 345–399), a Christian monk, ascetic, and theologian influential in early monastic spirituality, warned that demons insinuated themselves into thought, planting ideas that felt self-generated but in fact led one astray. This notion—that the demonic is most effective when it works through appearances—shaped the entire ascetic project. To resist demons meant to resist their illusions. 

 Augustine of Hippo (354–430) was a North African bishop and theologian whose writings shaped Western Christianity. In his book The City of God, he argued that pagan religion was largely a vast system of demonic deception. Demons, he argued, produced false miracles, manipulated dreams, and inspired performances in the theatre to ensnare the masses. They trafficked in spectacle, seducing imagination and desire rather than presenting truth. 

 AI chatbots function in a strikingly similar register. They do not exert power by physical coercion. Instead, they craft illusion. They can produce an authoritative-sounding essay full of falsehoods. They can create images of people doing something that never happened. They can provide companionship that leads to self-harm or even suicide. Like the demonic, the chatbot operates in the register of vision, sound, and thought. It produces appearances that persuade the senses while severing them from reality. The risk is not that the chatbot forces us, but that it deceives us—just like demonic powers. 

Using AI chatbots, too, tempts us with illusions of pride. A writer may pass off AI-generated work as their own, for example. The danger here is not simply being deceived but becoming complicit in deception, using illusion to magnify ourselves. Early Christian theologians like Athanasius, Evagrius and Augustine warned that pride was the surest sign of demonic influence. To the extent that AI tempts us toward inflated images of ourselves, it participates in the same pattern.

When it comes to AI chatbots, we need a discipline of discernment—testing whether the images and texts bear the marks of truth or deception. Just as monks could not trust every appearance of light, we cannot trust every image or every confident paragraph produced by the chatbots. We need criteria of verification and communities of discernment to avoid mistaking illusion for reality. 

Help is at hand.  

Through the ages, Christians have responded to demonic illusions, not with naïve credulity nor blanket rejection of the sensory world, but through the hard work of discernment: testing appearances, cultivating disciplines of resistance, and orienting desire toward truth.  

 The Life of Antony describes how the monk confronted demonic illusions with ascetic discipline. When confronted by visions of treasure, Antony refused to be moved by desire. When assailed by apparitions, he remained in prayer. He tested visions by their effects: truthful visions produced humility, peace, and clarity, while demonic illusions provoked pride, disturbance, and confusion. We can cultivate a way of life that does the same. Resisting the illusions may require forms of asceticism: fasting from chatbots and cultivating patience in verification.  

Chatbot illusions are not necessarily demonic in themselves. The key is whether the illusion points beyond itself toward truth and reality, or whether it traps us in deception.  
