
The Cold War, the Internet and America’s nones

How does a culture lose religion so rapidly? Stephen Bullivant investigates the American phenomenon of ‘nonversion’.

Stephen Bullivant is a professor of theology and sociology at St Mary’s University, UK, and the University of Notre Dame, Australia.

Image from the series South Park

Those even passingly familiar with American religious trends over the past couple of decades will have read or heard the phrase ‘rise of the nones’ countless times. And with good reason. While in Britain the proportion of people telling pollsters they have ‘no religion’ grew rapidly over the twentieth century – it was already 43% when the British Social Attitudes survey began in 1983; it tipped 50% a little over a decade later – in America the figure stayed stubbornly low. According to Gallup polls, only 5% of American adults identified with no religion in 1975. Twenty years later, in 1995, it was still 5%.

But then, seemingly very suddenly, things started to change. Beginning in the late nineties, then rapidly accelerating in the early 2000s, each new survey showed the nones getting bigger and bigger. Depending on which survey one looks at, nones now account for somewhere between a quarter and a third of American adults. Even at the lower end, that amounts to some 60 million people. And they’re still growing.

This raises a natural question: Why now? Or rather, what is it about the nineties and early 2000s that pushed or pulled large swathes of Americans out of thinking of themselves as religious? Various ways of measuring American religiosity all indicate that something significant must have happened around then. But what?

A prior, deeper puzzle

That, at least, is the obvious way of approaching things. And to be fair, it has much to recommend it: something, or rather a combination of somethings, certainly did happen to American religion in those critical years. But this in itself raises a prior, deeper puzzle: why hadn’t the numbers of American nones already risen before the late nineties or early noughties? In all manner of other, quasi-comparable countries – Britain, Canada, Australia, France – the nones started growing steadily from the 1960s onwards. Yet while the sixties had all manner of disruptive and destabilizing effects on American culture, society, politics, and religion, the proportion of nones grew only a little bit, then stopped.

At the risk of gross oversimplification, if one were to look for a sufficiently big ‘something’ within American society, mostly absent from those other countries, which could plausibly have kept non-religiosity artificially low in these decades, then there is an obvious candidate: the Cold War. Or more specifically, the precise and peculiarly religious way in which it was framed in the USA. 

A final, all-out battle

We Brits were as up to our necks in the Cold War as anyone. But only in America, I think, was the Cold War ever popularly framed as a “final, all-out battle between communistic atheism and Christianity”, to quote Republican Senator Joseph McCarthy. Remember too that it was only in the mid-1950s that Congress adopted “In God We Trust” as America’s official motto, and “under God” was added to the Pledge of Allegiance. During the Pledge debates in Congress, the Democrat Louis C. Rabaut summed up a common view on both sides of the aisle:

“You may argue from dawn to dusk about differing political, economic, and social systems but the fundamental issue which is the unbridgeable gap between America and Communist Russia is a belief in almighty God.”

This wasn’t just a view with wide bipartisan and popular support; it was thoroughly ecumenical too. While McCarthy and Rabaut were Catholics, it was a Presbyterian president, Eisenhower, who signed the “under God” bill into law. As Eisenhower himself put it during his 1952 election campaign:

“What is our battle against communism if it is not a fight between anti-God and a belief in the Almighty?”

Embellishing the city on a hill

It was also during the Cold War that presidents began likening America to the biblical “city built on a hill” – all the better positioned, one presumes, to scour the horizon for incoming Soviet missiles. Kennedy was the first US president to use it. Reagan, adding his own embellishment of “shining,” would make it his, and many of his countrymen’s, own. Taken together, all this helped lay down a deep, implicit association between being un-religious and being un-American. Atheism itself bore the brunt of this, but it more generally ruled out associated ideas and identities – including thinking of oneself as having “no religion” – as live options for the great majority of Americans.

Riven fault lines

Meanwhile, the cultural fault lines that began opening up so obviously in the late sixties – gender equality, sexual liberation – kept on widening, with new generations socialized into ever more liberal baselines. This created a growing values gap between traditional Christian views and the wider mainstream culture, on topics that were very personal to, and thus felt very deeply by, people on all sides. This meant that, while churches tended to be most visible on the ‘conservative side’ of various battlegrounds, they were also often deeply riven by internal versions of the same debates. Not surprisingly, church attendance, at least within Catholic and mainline churches, started falling steadily in the seventies and (except where immigration has helped fill the pews) has never really stopped.

The Internet of ideas and identities

On this basic account – and there is much that could be, and elsewhere has been, added to it – the thawing of the Cold War is obviously significant. Note that it is the Millennial generation, only the oldest of whom are able to remember the Cold War (and even then mostly from holiday reruns of Red Dawn and Rocky IV), who were at the vanguard of the rise of the nones. They were also the first generation to be true digital natives, opening many of them up to a much wider range of ideas and identities than hitherto available. This has been especially effective at chipping away at the walls of some of America’s stronger religious subcultures. My ex-Mormon interviewees, especially, cite “the wonderful thing called the internet” as being “the game-changer”.

Serious discussion and South Park

The Millennials started coming of age, and indeed responding to pollsters’ surveys, in the early 2000s. This was also around the time when, arguably for the first time since the hugely popular writer and speaker Robert “The Great Agnostic” Ingersoll a century before, unbelief was being seriously discussed everywhere from primetime talkshows to episodes of South Park. The bestselling books of the New Atheists – principally Sam Harris, Richard Dawkins, Daniel Dennett, and Christopher Hitchens – evidently hit upon some long pent-up demand. They were also, significantly here, able to position atheism, and ‘no religion’ more generally, as a panacea for a world awash with religion. Harris, for example, makes much of how he started writing The End of Faith on September 12th. Dawkins made no secret of wanting to run adverts with an image of the Twin Towers and the tagline “Imagine no religion…”.

Cultural space opens

Whatever one makes of such arguments, similar rhetorical moves would have had less intuitive appeal to earlier American generations, learning to duck and cover from atheists’ H-bombs: the stuff of Americans’ nightmares was now those with too much religion, rather than not enough. While the long-term impact of the not-so-New Atheism is hard to judge – many nones are keen to distance themselves from what they see as its “dogmatism” and “extremism”, even while agreeing with much of it – it certainly helped open up ‘cultural space’ for being both American and non-religious that the Cold War had (outside of various enclaves, such as college towns and certain big cities) largely sealed shut. As we have seen, it is a space that a good quarter of American adults are now quite comfortable placing themselves within.

So yes, new somethings indeed happened in the final years of the twentieth century and the first years of the twenty-first: and these helped drive the uptick of nones. But these happened at the same time as the none-inhibiting effects of a much earlier something had more or less worn off, especially among the emerging generations most affected by the new somethings. It is this combination of factors – akin to pulling one foot off the brake as you slam the other down on the accelerator – that explains quite why the nones rose so suddenly and (seemingly) out of nowhere.

 


Why end of life agony is not a good reason to allow death on demand

Assisted dying and the unintended consequences of compassion.

Graham is the Director of the Centre for Cultural Witness and a former Bishop of Kensington.

An open hand holding a pill.
Towfiqu Barbhuiya on Unsplash.

Those advocating assisted dying really have only one strong argument on their side – the argument from compassion. People who have seen relatives dying in extreme pain and discomfort understandably want to avoid that scenario. Surely the best way is to allow assisted dying as an early way out, so that such people can avoid the agony that such a death involves?

Now it’s a powerful argument. To be honest, I can’t say what I would feel if I faced such a death, or if I had to watch a loved one go through such an ordeal. All the same, there are good reasons to hold back from legalising assisted dying, even in the face of distress at the prospect of enduring, or having to watch, a painful and agonising death.

In any legislation, you have to bear in mind unintended consequences. A law may benefit one particular group, but have knock-on effects for another group, or wider social implications that are profoundly harmful. Few laws benefit everyone, so lawmakers have to make difficult decisions balancing the rights and benefits of different groups of people. 

It feels odd to be citing percentages and numbers when faced with something so elemental and personal as death and suffering, but it is estimated that around two per cent of us will die in extreme pain and discomfort. Add in the ‘safeguards’ this bill proposes (a person must be suffering from a terminal disease with fewer than six months to live, capable of making such a decision, with two doctors and a judge to approve it) and the number of people this directly affects becomes really quite small. Much as we all sympathise and feel the force of stories of agonising suffering – and of course, every individual matters – to put it bluntly, is it right to risk the knock-on effects on other groups in society, and to make such a fundamental shift in our moral landscape, for the sake of the small number of us who will face this dreadful prospect? Reading the personal stories of those who have endured extreme pain as they approached death, or of those who have had to watch loved ones do so, is heart-rending – yet are they enough on their own to sanction a change to the law?

Much has been made of the subtle pressure put upon elderly or disabled people to end it all, to stop being a burden on others. I have argued elsewhere on Seen and Unseen that numerous elderly people will feel a moral obligation to safeguard the family inheritance by choosing an early death, rather than spend the family fortune on end-of-life care or turn their kids into carers for their elderly parents. Individual choice for those who face end-of-life pain unintentionally lands an unenviable and unfair choice on many more vulnerable people in our society. Giles Fraser describes the indirect pressure well:

“You can say “think of the children” with the tiniest inflection of the voice, make the subtlest of reference to money worries. We communicate with each other, often most powerfully, through almost imperceptible gestures of body language and facial expression. No legal safeguard on earth can detect such subliminal messaging.” 

There is also plenty of testimony that suggests that even with constant pain, life is still worth living. Michelle Anna-Moffatt writes movingly of her brush with assisted suicide and why she pulled back from it, despite living in constant pain.

Despite the safeguards mentioned above, the move towards death on the NHS is bound to lead to a slippery slope – extending the right to die to wider groups with less obvious needs. As I wrote in The Times recently, given the grounds on which the case for change is being made – the priority of individual choice – there are no logical grounds for denying the right to die to anyone who chooses that option, regardless of their reasons. If a teenager going through a bout of depression, or a homeless person who cannot see a way out of their situation, chooses to end it all, and their choice is absolute, on what grounds could we stop them? Once we have built our ethics on this ground, the slippery slope is not just likely, it is inevitable.

Then there is the radical shift to our moral landscape. A disabled campaigner argues that asking for someone to help her to die “is no different for me than asking my caregiver to help me on the toilet, or to give me a shower, or a drink, or to help me to eat.” Sorry – but it is different, and we know it. Once we have blurred the line between a carer offering someone a drink to relieve their thirst and effectively killing them, a moral line has been crossed that should make us shudder.

In Canada, many doctors refuse, or don’t have time, to administer the fatal dose, so companies have sprung up offering ‘medical professionals’ to come round with the syringe and finish you off. In other words, companies make money out of killing people. It is the commodification of death. When we have got to that point, you know we have wandered from the path somewhere.

You would have to be stony-hearted indeed not to feel the force of the argument to avoid pain-filled deaths. Yet is a change to benefit such people worth the radical shift in moral values, the knock-on effects on vulnerable people who will come under pressure to die before their time, the move towards death on demand?

Surely there are better ways to approach this? Doctors can decide to cease treatment to enable a natural death to take its course, or increase painkillers in doses that may hasten death – that is humane and falls on the right side of the line, as it is done primarily to relieve pain, not to kill. Christian faith does not argue that life is to be preserved at any cost – our belief in martyrdom gives the lie to that. More importantly, a renewed effort to invest in palliative care and improved anaesthetics will surely reduce such deaths in the longer term. These approaches are surely much wiser, and less harmful to the large numbers of vulnerable people in our society, than the drastic step of legalising killing on the NHS.