
Re-enchanting the anxious generation

The future doesn’t have to be horrible.

Krish is a social entrepreneur partnering across civil society, faith communities, government and philanthropy. He founded The Sanctuary Foundation.

Two teenagers lean against a rail, arms crossed, and laugh together.
LaShawn Dobbs on Unsplash.

I meet many anxious people as I wait for meetings in the Palace of Westminster, but one in particular stands out. As I was queueing to get through security, a breathless American man rushed over asking if he was in the right place to meet the Minister of State for Universities. Once I had reassured him that he was, and he had caught his breath, I asked him where he was from and what he did for a job. He told me he was a social psychologist from New York.

Funnily enough, the night before, I had been reading a book by a social psychologist from New York. I asked the man if he had come across the author, Jonathan Haidt. He replied with a smile: “I am Jonathan Haidt.” 

I chuckle when I remember that chance encounter, especially considering the title of his latest book – The Anxious Generation. The book tackles a much more serious topic than queueing nerves. It claims to show, in the words of the subtitle: “How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness”.  

The Anxious Generation is a tightly argued plea to parents and educators for a radical change in the way that young people are allowed to engage with digital technology in general and social media in particular.  

It follows the line of thought he began in his book The Coddling of the American Mind, which argued that ‘helicopter parenting’ has led to such fragility in young adults that universities are no longer places of open and free dialogue, but places where young people feel the need to be protected from ideas they disagree with. That problem was what Haidt was preparing to discuss with the Minister when we met outside Parliament.


The Anxious Generation makes a compelling case for the way we are failing a generation of children. It likens the social media world to another planet that we are all happily sending our children off to without first learning about or checking any of the risks linked with the potentially toxic environment. It concludes that as much as we are overprotecting our children in the physical world, we are under-protecting them in the digital world, and are thereby complicit in the resulting tidal wave of mental health disorders.

Haidt writes:  

“Are screen-based experiences less valuable than real-life flesh-and-blood experiences? When we’re talking about children whose brains evolved to expect certain kinds of experiences at certain ages, yes. A resounding yes.” 

Haidt argues that what children need is less screen time and more unsupervised play. Some might call this the re-enchantment of childhood – a rediscovery of wonder, and simple emotional connections with freedom, food, imagination, curiosity, those around them and the great outdoors. Perhaps there is healthy therapy to be found in this re-enchantment through the sharing of art, poetry, and fantasy. Maybe a rediscovery of faith and hope can help to bring healing.

Mary Grey, Emeritus Professor of Theology at the University of Wales in Lampeter, describes re-enchantment like this: 

“The market’s language of desire must be replaced by reflecting what we really long for, like satisfying relationships and intimacy, meaningful communities where our values are shared, with working conditions that do not create an unbearable level of stress, enough income to cover basic and leisure needs, and planning for the future. Embracing all this is a desire to maintain and hand on to our children an earth that offers genuine possibilities of flourishing. … This is not an invitation to exchange reality for Magic Kingdoms, but to become embodied kinships of women, men, children and earth creatures in a re-imagined and transformed world of sustainable earth communities of healing and hope.” 

The re-enchantment of childhood is an attractive theory. I often find myself comparing my children’s childhood with my own. I’m sure I played more in the garden than they do, climbed more trees, cycled more round the block, round the town, and later round the county in my spare time. I remember as a teenager getting on a bus to travel from Brighton to Durham without either parents or phones. Around the same time, I travelled to Tbilisi, Georgia with just a backpack, a map, a couple of friends and quite a lot of self-confidence. I wish that my children could experience some of the pleasures that come with fixing a bike or looking up at the stars or browsing the library to find answers, instead of just googling.

Yet, at the same time, if my children were making their way to Durham or Tbilisi today, I would certainly make sure they had plenty of charge on their phone and all the necessary mobile data roaming rights, and I would probably WhatsApp them regularly until they arrived safely at their destination.  


Haidt’s book touches a nerve. Not just because of my own contradictory feelings as a parent, but because of the shocking statistics that reflect the wider state of our nation’s children. With waiting lists for Child and Adolescent Mental Health Services at a record high, a 47 per cent increase in young people being treated for eating disorders compared to pre-pandemic, and an enormous leap in the prevalence of probable mental disorder from one in nine children (in England, aged 8-25, in 2017) to one in five (similar cohort in 2023), the mental health of the next generation is rightly a cause of deep concern.

The blame has been levelled in many different directions: COVID lockdowns, school league tables, excessive homework, helicopter parenting, screen time, and general disenchantment in society at large.  Some even say the increase is directly related to the increase in public discussion and awareness about mental health disorders.  

For Haidt it is social media that is public mental health enemy number one. However, he does admit he is not a specialist in children’s mental health, child psychology or clinical psychology. This has led to some criticism of his conclusions. Professor Candice L. Odgers, the Associate Dean for research into psychological science and informatics at the University of California, challenges head on the central argument of Haidt’s book. She claims:

“...the book’s repeated suggestion that digital technologies are rewiring our children’s brains and causing an epidemic of mental illness is not supported by science. Worse, the bold proposal that social media is to blame might distract us from effectively responding to the real causes of the current mental health crisis in young people.” 

Similarly Henna Cundill, a researcher with the centre for autism and theology at the University of Aberdeen, wrote last week in an article for Seen and Unseen:  

“From a scientific perspective, the argument is a barrage of statistics, arranged to the tune of ‘correlation equals causation’.”

Cundill and Professor Odgers are right to be sceptical. Sometimes we let our commitment to a story shape the way that we read the evidence. If there’s one thing I remember from A-level statistics it is that causation and correlation should not be confused. In his bid to add urgency and cogency to his argument, Haidt presents a perfect story, one that explains all the evidence. He doesn’t mention anything that might challenge it, or anything that doesn’t quite fit. It is not a scientific treatise – which is both the book’s strength and its weakness.

Nevertheless, many of the recommendations Haidt suggests are wise and helpful. Even Professor Odgers, to some extent, agrees.  

“Many of Haidt’s solutions for parents, adolescents, educators and big technology firms are reasonable, including stricter content-moderation policies and requiring companies to take user age into account when designing platforms and algorithms. Others, such as age-based restrictions and bans on mobile devices, are unlikely to be effective in practice — or worse, could backfire given what we know about adolescent behaviour.” 

Therein lies the issue. Because of the lack of evidence for the causes, all we are left with – even from the experts – is what may or may not be likely to be effective in practice.   

I wonder if this paucity of robust scientific evidence stems from the fact that the issues facing the next generation are even more complex than we could ever imagine. 


Every generation is different from the last. My own youth in the UK in the late 1980s, when I became part of the video games and micro-computer subculture, was just as much a mystery to my parents and teachers. My generation’s problems were blamed on everything from the microwave to Mrs Thatcher to the milk that we drank following the disaster at Chernobyl.

It seems to me too simplistic to demonise the technology. It’s an easy sell, after all. In fact, whenever there is a major technical shift, horror stories are created by those who believe the dangers outweigh the benefits. Mary Shelley’s Frankenstein seems to be a reaction to the industrial revolution. The nuclear threat led to movies about Godzilla and 60-foot-tall Amazonian women. The advent of the internet brought us the Terminator films.   

The truth is that hype, hysteria and horror are more likely to gain traction than humdrum and happy medium. Yet, despite the many and serious problems, new technologies, even social media, also have much to offer, and they are not going away soon. Instead of demonising new technology as the problem, perhaps we need to find ways to turn it into the solution.

And perhaps there are glimmers of hope. I like the fact that my children are connected to the wider world, that they know people and languages from more diverse places than I ever did. I like that they know what is going on in the world way before the 9 o’clock news. I like the fact that they are on the cutting edge of advancements I will never experience in my lifetime. I like the fact that they can get their homework checked by AI, that they don’t need to phone me up every time they want to try a new recipe, that we can grumble together about the football match in real time even when we are on different sides of the world. I like that they can browse the Bible or listen to podcasts about history while they are waiting at a bus stop.  I like the fact that they have libraries of books at their fingertips, that they can disappear into fantasy worlds with a swipe and don’t have to spend hours at the job centre when they need to find work. And I love the fact that my children and their friends are rediscovering board games, crochet, embroidery and hiking and taking them to a whole new level because they are learning these crafts from experts around the world.  

I sincerely appreciate that Jonathan Haidt cares about the real and desperate problem of youth mental health. His book adds weight to the pleas of those of us advocating for urgent investment into this area. It reminds us of the world beyond the digital borders and it gives us hope that the re-enchantment of childhood is not impossible.  

However, the solution to these complex issues cannot be found in nostalgia alone. We cannot turn back the clock, nor should we want to. The past had problems of its own.  

I would love someone to write a book that looks forward, that equips young people to live in the worlds of today and tomorrow. If, by some strange coincidence, Jonathan Haidt is reading this article and is in the process of writing that book, I do hope I will bump into him again to thank him.  


The summit of humanity: decoding AI's affectations

An AI summit’s prophecies need to be placed in the right philosophical register, argues Simon Cross. Because being human in an AI age still means the same thing it has for millennia.

Simon Cross researches ethical aspects of technology and advises on the Church of England’s policy and legislative activity in these areas.

An AI-generated image of robot skulls with bulging eyes on a shelf receding diagonally to the left.
Alessio Ferretti on Unsplash.

The UK’s global artificial intelligence (AI) conference is nearly upon us. If the UK had a ‘prophecy office’ it would have issued a yellow or even amber warning for the first days of November by now. Prophecy used to be a dangerous business: the ancient text of Deuteronomy sanctioned death for false prophets, equating the offence with leading people away from God as the ultimate ground of truth. But risks duly acknowledged, here is a prophecy about the prophecies to come. The global AI conference will loudly proclaim three core prophecies about AI.

  1. This time it’s different. Yes, we said that before but this time it really is different. 
  2. Yes, we need global regulation but, you know, it’s complicated so only the kind of regulation we advise is going to work.  
  3. Look, if we don’t do this someone else will. So, you should get out of our way as much as you possibly can. We are the good guys and if you slow us down the bad guys will win. 

I feel confident about this prediction not because I wish to claim the office of prophet but because just like Big Tobacco and Big Oil, Big Tech’s lobbyists will redeploy a tried and tested playbook. And here are the three plays at the heart of it. 

Tech exceptionalism. (We deserve to be treated differently under the law.) 

Regulatory capture. (We got lucky last time, with the distinction between platform and publisher that permitted self-regulation of social media, the harvesting of personal data and manipulative design for attention; but the costs of defeating Uber in California, and now of defending rearguard anti-trust lawsuits, mean lesson learned – we need to go straight for regulatory capture this time.)

Tech determinism. (If we don’t do it, someone else will. We are the Oppenheimers here.) 

Speaking of Pandora 

What should we make of these claims? We need to start by exploring an underlying premise. One that typically goes like this: “AI is calling into question what it means to be human.”

This premise has become common currency, but it is flawed because it is too totalising. AI emphatically is calling into question a culturally dominant version of human anthropology – one specific ‘science of humanity’. But not all anthropologies. Not the Christian anthropology.  

A further, unspoken, premise driving this claim becomes clearer when we survey the range of responses to the question “what does the advent of what the government is now calling ‘frontier’ AI portend?”  

Either, it means we have finally prized open Pandora’s box; the last thing humans will ever create. AI is our Darwinian evolutionary heir, soon to make us homo sapiens redundant, extinct, even. Which could happen in two very different ways. For some, AI is the vehicle to a new post-human eternal life of ease, roaming the farthest reaches of the universe in disembodied digital repose. To others, AI is now on the very cusp of becoming abruptly and infinitely cleverer than us. To yet others, we are too stupid to avoid blowing ourselves up on the way to inventing so-called artificial general intelligence.  

Cue main global summit speaking points… 

Or, 

AI is just a branch of computing. 

Which of these two starkly contrasting options you choose will depend on your underlying beliefs about ‘what it means to be human’. 

Universal machines and meat machines 

Then again, what does it mean to be artificially intelligent? Standard histories of AI always point to two seminal events. First, Alan Turing published a paper in the 1930s in which he proposed a device called a Universal Turing Machine.  

Turing’s genius was to see a way of writing a type of programme to control a computer’s underlying binary on/off states in ways that could vary depending on the task required, and yet perform any task a computer can do. The reason your computer is not just a calculator but an Excel spreadsheet and a word processor and a video player as well is because it is a kind of Universal Turing Machine. A UTM can compute anything that can be computed. If it has the right programme.

The second major event in AI folklore was a conference at Dartmouth College in the USA in 1956, bringing together the so-called ‘godfathers of AI’. This conference set the philosophical and practical approaches from which AI has developed ever since. That this happened in America is important because of the strong link between universities, government, the defence and intelligence industry and the Big Tech unicorns that have emerged from Silicon Valley to conquer the world. That link is anthropological; it is political, social, and economic and not just technical.

Let’s take this underlying question of ‘what does it mean to be human?’ and recast it in a binary form as befits a computational approach; ‘Is a human being a machine or is a human being an organism?’ 

Cognitive scientist Daniel Dennett was recently interviewed in the New York Times. For Dennett our minds and bodies are a “consortia of tiny robots”. Dennett is a philosopher steeped in evolutionary biology and a powerful voice for a particular form of atheism and its answer to the question ‘what does it mean to be human?’ Dennett regards consciousness as ephemera, a by-product of brain activity. Another godfather of AI, Marvin Minsky, famously described human beings as ‘meat machines’.

By contrast, Joseph Weizenbaum, another of the early computer pioneers of the 1960s and 1970s, created one of the first ever chatbots, ELIZA – and was utterly horrified at the results. His test subjects could not stop treating ELIZA as a real person. At one point his own secretary sat down at the terminal to speak to ELIZA and then turned to him and asked him to leave the room so she could have some privacy. Weizenbaum spent the latter part of his professional life arguing passionately that there are things we ought not to get computers to do even if they can, in principle, perform them in a humanlike manner. To Joseph Weizenbaum computers were, and are, fundamentally different to human beings in ways that matter ineluctably, anthropologically. And it certainly seems as if the full dimensionality of human being cannot yet be reduced to binary on/off internal states without jettisoning free will, consciousness and transcendence. Prominent voices like Dennett and Yuval Noah Harari are willing to take this intellectual step. Their computer says ‘no’. By their own logic it could not say otherwise. In which case here’s a third way of asking that seemingly urgent and pressing question about human being:

“Are we just warm, wet, computers?” 

The immanent frame 

A way to make sense of this meaning of human being – influential and intuitively attractive for many people – is to understand how the notion of artificial intelligence fits a particular worldview that has come to dominate recent decades and, indeed, centuries.

In 2007 Charles Taylor wrote A Secular Age. In it he tracks the changing view of what it means to be human as the Western Enlightenment unfolds. Taylor detects a series of what he calls ‘subtraction stories’ that gradually explain away the central human experience of transcendence until society is left with what he calls an ‘immanent frame’. Now we are individual ‘buffered selves’ insulated by rational mind so that belief in any transcendent reality, let alone God, is just one possible choice among personal belief systems. But, says Taylor, this fracturing of a shared overarching answer to the question ‘What does it mean to be human’ over the past, say, 500 years doesn’t actually answer the question or resolve the ambiguities. Rather, society is now subject to what Taylor calls ‘cross pressures’ and a lack of societal consensus about the answers to the biggest questions of human meaning and purpose. 

In this much broader context, it becomes easier to see why as well as how it can be the case that AI is either a profound anthropological threat or just a branch of computing – depending on who you talk to… 

The way we describe AI profoundly influences our understanding of it. When Dennett talks about a ‘consortia of tiny robots’ is he speaking univocally or metaphorically? What about when we say that AI “creates”, or “decides” or “discovers” or “seeks to maximise its own reward function”? How are we using those words? If we mean words like ‘consortia’ or ‘decides’ and ‘reward’ in as close to the human sense as makes no difference, then of course the difference between us and our machines becomes paper-thin. But are human beings really a kind of UTM? Are UTMs really universal? Are you a warm, wet, computational meat-machine?

Or is AI just the latest and greatest subtraction story?


How then should we judge prophecies about AI emanating from this global conference or in the weeks and months to follow?  I suggest two responses. The first follows from my view of AI, the other from my view of human being.  

Our view of current AI should be clear-eyed, albeit open to revision should future development(s) so dictate. I am firmly on the side of those who, without foreclosing the possibility, see no philosophical breakthrough in the current crop of tools and techniques. These are murky philosophical waters, but clocks don’t really have human hands now, do they? And a collapsed metaphor can’t validate itself, however endemic the reference to the computational theory of mind has become.

Google’s large language model, Bard, for example, has no sense of what time it is where ‘he’ is, let alone the freedom to choose to love you or not, or to forgive you if you hurl an insult at ‘him’. But all kinds of anthropological harms already flow from the unconscious consequences of re-tuning human being according to the methodological image of our machines. To say AI is just a branch of computing is not to say the harms of outsourcing key features of human being to machines are trivial. Quite the opposite.

Which brings me to the second response. When you hear the now stock claim that AI is calling into question what it means to be human, don’t buy it. Push back. Point out the totalising lack of nuance. The latest tools and techniques of AI are calling a culturally regnant but philosophically reductive anthropology into question. That much is definitely true. But that is all. 

And it is important to resist this totalising claim because if we don’t, an increasingly common and urgent debate about the fullness of human being and the limitations of UTMs will struggle from the start. One of the biggest mistakes I think public theology made twenty-some years ago was to cede a normative use of language that distinguished between people of faith and people of no faith. There is no such thing as being human without faith commitments of one kind or another. If you have any doubt about this, I commend No One Sees God: The Dark Night of Atheists and Believers by Michael Novak. But the problem with accepting the false distinction between ‘having faith’ and having ‘no faith’ is that it has allowed the Dennetts and Hararis of this world to insist that atheism is on a stronger philosophical footing than theism. After which all subsequent debate had, first, to establish the legitimacy of faith per se before getting to the particular truth claims in, say, Christianity.  

What it means to be human 

I see a potentially similar misstep for anthropology – the science of human being – in this new and contemporary context of AI. Everywhere at the moment, and I mean but everywhere, a totalising claim is being declared ever more loudly and urgently: that the tools and techniques of AI are calling into question the very essence of human identity. The risk in ceding this claim is that we get stuck in an arid debate about content instead of significance; a debate about what it means to be ‘human’ instead of a debate about what it ‘means’ to be human.

This global AI summit’s proclamations and prophecies need to be placed in the right philosophical register, because to be human in an age of AI still means the same thing it has for millennia.  

Universals like wonder, love, justice, the need for mutually meaningful relationships and a sense of purpose, and so too personal idiosyncrasies like a soft spot for the moose are central features of what it means to be this human being.  

Suchlike are the essential ingredients of the ‘me’ that is reading this article. They are not tertiary. Perhaps they can be computationally mimicked but that does not mean they are, in themselves, ephemeral or mere artifice. In which case their superficial mimicry carries substantial risks, just as Joseph Weizenbaum prophesied in Computer Power and Human Reason in the 1970s.  

Of course, you may disagree. You may even disagree in good faith, for there are no knockdown arguments in metaphysics. And in my worldview, you are free to do so. But fair warning. If the human-determinism of Dennett or the latest prophecies of Harari are right, no credit follows. You, and they, are right only because, by arbitrary alignment of the metaphysical stars, you, and they, have never been free to be wrong. It was all decided long ago. No need for prophecies. We are all just UTMs with the soul of a marionette.

But when you hear the three global summit prophecies I predicted earlier, consider these three alternatives:

This time is not different: it is not true that AI is calling into question all anthropologies. AI is (only) calling into question a false and reductive Enlightenment prophecy about ‘what it means to be human’.

The perennial systematic and doctrinal anthropology of Christianity understands human being as free-willed, conscious, unified body, soul and spirit. It offers credible answers to the urgent questions and cross-pressures society is now wrestling with. It also offers an ethical framework for answering the question ‘what ought computers to be used for, and what ought computers not to be used for – even if they appear able to be used for anything and everything?’

This Christian philosophical perspective on the twin underlying metaphysical questions of human being and purpose is not being called into question, either at this global summit or by any developments in AI today or the foreseeable future. It can, however, increasingly be called into service to answer those questions – at least for those with ears to hear.