
The summit of humanity: decoding AI's affectations

An AI summit’s prophecies need to be placed in the right philosophical register, argues Simon Cross. Because being human in an AI age still means the same thing it has for millennia.

Simon Cross researches ethical aspects of technology and advises on the Church of England's policy and legislative activity in these areas.

An AI generated image of robot skulls with bulging eyes on a shelf receding diagonally to the left.
Alessio Ferretti on Unsplash.

The UK’s global artificial intelligence (AI) conference is nearly upon us. If the UK had a ‘prophecy office’ it would by now have issued a yellow or even amber warning for the first days of November. Prophecy used to be a dangerous business: the ancient text of Deuteronomy sanctioned death for false prophets, equating false prophecy with leading people away from God as the ultimate ground of truth. But, risks duly acknowledged, here is a prophecy about the prophecies to come. The global AI conference will loudly proclaim three core prophecies about AI. 

  1. This time it’s different. Yes, we said that before but this time it really is different. 
  2. Yes, we need global regulation but, you know, it’s complicated so only the kind of regulation we advise is going to work.  
  3. Look, if we don’t do this someone else will. So, you should get out of our way as much as you possibly can. We are the good guys and if you slow us down the bad guys will win. 

I feel confident about this prediction not because I wish to claim the office of prophet but because just like Big Tobacco and Big Oil, Big Tech’s lobbyists will redeploy a tried and tested playbook. And here are the three plays at the heart of it. 

Tech exceptionalism. (We deserve to be treated differently under the law.) 

Regulatory capture. (We got lucky last time: the distinction between platform and publisher permitted self-regulation of social media, the harvesting of personal data, and manipulative design for attention. But the cost of defeating Uber in California, and now of defending rearguard anti-trust lawsuits, means lesson learned: this time we go straight for regulatory capture.) 

Tech determinism. (If we don’t do it, someone else will. We are the Oppenheimers here.) 

Speaking of Pandora 

What should we make of these claims? We need to start by exploring an underlying premise. One that typically goes like this: “AI is calling into question what it means to be human”. 

This premise has become common currency, but it is flawed because it is too totalising. AI emphatically is calling into question a culturally dominant version of human anthropology – one specific ‘science of humanity’. But not all anthropologies. Not the Christian anthropology.  

A further, unspoken, premise driving this claim becomes clearer when we survey the range of responses to the question “what does the advent of what the government is now calling ‘frontier’ AI portend?”  

Either, it means we have finally prized open Pandora’s box; the last thing humans will ever create. AI is our Darwinian evolutionary heir, soon to make us homo sapiens redundant, extinct, even. Which could happen in two very different ways. For some, AI is the vehicle to a new post-human eternal life of ease, roaming the farthest reaches of the universe in disembodied digital repose. To others, AI is now on the very cusp of becoming abruptly and infinitely cleverer than us. To yet others, we are too stupid to avoid blowing ourselves up on the way to inventing so-called artificial general intelligence.  

Cue main global summit speaking points… 

Or, 

AI is just a branch of computing. 

Which of these two starkly contrasting options you choose will depend on your underlying beliefs about ‘what it means to be human’. 

Universal machines and meat machines 

Then again, what does it mean to be artificially intelligent? Standard histories of AI always point to two seminal events. First, Alan Turing published a paper in 1936 in which he proposed a device now called a Universal Turing Machine.  

Turing’s genius was to see a way of writing a type of programme to control a computer’s underlying binary on/off states in ways that could vary with the task required, so that one machine could perform any task a computer can do. The reason your computer is not just a calculator but a spreadsheet and a word processor and a video player as well is because it is a kind of Universal Turing Machine. A UTM can compute anything that can be computed. If it has the right programme.  
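The idea of universality can be caricatured in a few lines of Python. In this sketch (a hypothetical miniature for illustration, not Turing's own formulation) the simulator itself never changes; swapping in a different programme table changes what the one machine computes:

```python
# A minimal Turing machine simulator: the simulator is fixed, but
# swapping the programme (a table of transitions) changes the task.
def run(programme, tape, state="start"):
    tape = dict(enumerate(tape))  # sparse tape of cells
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, "_")               # "_" marks a blank cell
        write, move, state = programme[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape) if tape[i] != "_"]
    return "".join(cells)

# One hypothetical example programme: increment a binary number.
# Scan right to the end, then add 1 with carry while moving left.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, carry onward
    ("carry", "0"): ("1", "L", "done"),   # 0 + carry -> 1, stop carrying
    ("carry", "_"): ("1", "L", "done"),   # overflow: write a new leading 1
    ("done", "0"): ("0", "L", "done"),
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "halt"),
}

print(run(increment, "1011"))  # 1011 (eleven) becomes 1100 (twelve)
```

A different table (for subtraction, for searching, for anything computable) would run on the same `run` function unchanged: that, in caricature, is what makes the machine universal.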

The second major event in AI folklore was a conference at Dartmouth College in the USA in 1956 bringing together the so-called ‘godfathers of AI’.

 This conference set the philosophical and practical approaches from which AI has developed ever since. That this happened in America is important because of the strong link between universities, government, the defence and intelligence industry and the Big Tech Unicorns that have emerged from Silicon Valley to conquer the world. That link is anthropological; it is political, social, and economic and not just technical. 

Let’s take this underlying question of ‘what does it mean to be human?’ and recast it in a binary form as befits a computational approach; ‘Is a human being a machine or is a human being an organism?’ 

Cognitive scientist Daniel Dennett was recently interviewed in the New York Times. For Dennett our minds and bodies are a “consortia of tiny robots”. Dennett is a philosopher steeped in evolutionary biology and a powerful voice for a particular form of atheism and its answer to the question ‘what does it mean to be human?’ Dennett regards consciousness as ephemera, a by-product of brain activity. Another godfather of AI, Marvin Minsky, famously described human beings as ‘meat machines.’

By contrast, Joseph Weizenbaum, another computing pioneer of the 1960s and 1970s, created one of the first ever chatbots, ELIZA – and was utterly horrified at the results. His test subjects could not stop treating ELIZA as a real person. At one point his own secretary sat down at the terminal to speak to ELIZA and then turned to him and asked him to leave the room so she could have some privacy. Weizenbaum spent the latter part of his professional life arguing passionately that there are things we ought not to get computers to do even if they can, in principle, perform them in a humanlike manner. To Joseph Weizenbaum computers were/are fundamentally different to human beings in ways that matter ineluctably, anthropologically. And it certainly seems as if the full dimensionality of human being cannot yet be reduced to binary on/off internal states without jettisoning free will, consciousness and transcendence. Prominent voices like Dennett and Yuval Noah Harari are willing to take this intellectual step. Their computer says ‘no’. By their own logic it could not say otherwise. In which case here’s a third way of asking that seemingly urgent and pressing question about human being:  

“Are we just warm, wet, computers?” 

The immanent frame 

One way to make sense of this influential and, to many people, intuitively attractive account of human being is to understand how the notion of artificial intelligence fits a particular worldview that has come to dominate recent decades and, indeed, centuries. 

In 2007 Charles Taylor wrote A Secular Age. In it he tracks the changing view of what it means to be human as the Western Enlightenment unfolds. Taylor detects a series of what he calls ‘subtraction stories’ that gradually explain away the central human experience of transcendence until society is left with what he calls an ‘immanent frame’. Now we are individual ‘buffered selves’ insulated by rational mind so that belief in any transcendent reality, let alone God, is just one possible choice among personal belief systems. But, says Taylor, this fracturing of a shared overarching answer to the question ‘What does it mean to be human’ over the past, say, 500 years doesn’t actually answer the question or resolve the ambiguities. Rather, society is now subject to what Taylor calls ‘cross pressures’ and a lack of societal consensus about the answers to the biggest questions of human meaning and purpose. 

In this much broader context, it becomes easier to see why as well as how it can be the case that AI is either a profound anthropological threat or just a branch of computing – depending on who you talk to… 

The way we describe AI profoundly influences our understanding of it. When Dennett talks about a ‘consortia of tiny robots’ is he speaking univocally or metaphorically? What about when we say that AI “creates”, “decides”, “discovers”, or ‘seeks to maximise its own reward function’? How are we using those words? If we mean words like ‘consortia’ or ‘choose’ and ‘reward’ in as close to the human sense as makes no difference, then of course the difference between us and our machines becomes paper-thin. But are human beings really a kind of UTM? Are UTMs really universal? Are you a warm wet computational meat-machine?  

Or is AI just the latest and greatest subtraction story?


How then should we judge prophecies about AI emanating from this global conference or in the weeks and months to follow?  I suggest two responses. The first follows from my view of AI, the other from my view of human being.  

Our view of current AI should be clear eyed, albeit open to revision should future development(s) so dictate. I am firmly on the side of those who, without foreclosing the possibility, see no philosophical breakthrough in the current crop of tools and techniques. These are murky philosophical waters but clocks don’t really have human hands now do they, and a collapsed metaphor can’t validate itself however endemic the reference to the computational theory of mind has become.  

Google’s large language model, Bard, for example, has no sense of what time it is where ‘he’ is, let alone the capacity freely to choose to love you, or to forgive you if you hurl an insult at ‘him’. But all kinds of anthropological harms already flow from the unconscious consequences of re-tuning human being according to the methodological image of our machines. To say AI is just a branch of computing is not to say the harms of outsourcing key features of human being to machines are trivial. Quite the opposite. 

Which brings me to the second response. When you hear the now stock claim that AI is calling into question what it means to be human, don’t buy it. Push back. Point out the totalising lack of nuance. The latest tools and techniques of AI are calling a culturally regnant but philosophically reductive anthropology into question. That much is definitely true. But that is all. 

And it is important to resist this totalising claim because if we don’t, an increasingly common and urgent debate about the fullness of human being and the limitations of UTMs will struggle from the start. One of the biggest mistakes I think public theology made twenty-some years ago was to cede a normative use of language that distinguished between people of faith and people of no faith. There is no such thing as being human without faith commitments of one kind or another. If you have any doubt about this, I commend No One Sees God: The Dark Night of Atheists and Believers by Michael Novak. But the problem with accepting the false distinction between ‘having faith’ and having ‘no faith’ is that it has allowed the Dennetts and Hararis of this world to insist that atheism is on a stronger philosophical footing than theism. After which all subsequent debate had, first, to establish the legitimacy of faith per se before getting to the particular truth claims in, say, Christianity.  

What it means to be human 

I see a potentially similar misstep for anthropology – the science of human being – in this new and contemporary context of AI. Everywhere at the moment, and I mean but everywhere, a totalising claim is being declared ever more loudly and urgently: that the tools and techniques of AI are calling into question the very essence of human identity. The risk in ceding this claim is that we get stuck in an arid debate about content instead of significance; a debate about ‘what it means to be human’ instead of a debate about ‘what it means to be human.’  

This global AI summit’s proclamations and prophecies need to be placed in the right philosophical register, because to be human in an age of AI still means the same thing it has for millennia.  

Universals like wonder, love, justice, the need for mutually meaningful relationships and a sense of purpose, and so too personal idiosyncrasies like a soft spot for the moose are central features of what it means to be this human being.  

Suchlike are the essential ingredients of the ‘me’ that is reading this article. They are not tertiary. Perhaps they can be computationally mimicked but that does not mean they are, in themselves, ephemeral or mere artifice. In which case their superficial mimicry carries substantial risks, just as Joseph Weizenbaum prophesied in Computer Power and Human Reason in the 1970s.  

Of course, you may disagree. You may even disagree in good faith, for there are no knockdown arguments in metaphysics. And in my worldview, you are free to do so. But fair warning. If the human-determinism of Dennett or the latest prophecies of Harari are right, no credit follows. You, and they, are right only because by arbitrary alignment of the metaphysical stars, you, and they, have never been free to be wrong. It was all decided long ago. No need for prophecies. We are all just UTMs with the soul of a marionette.  

But when you hear the three global summit prophecies I predicted earlier, consider these three alternatives: 

This time is not different: it is not true that AI is calling into question all anthropologies. AI is (only) calling into question a false and reductive Enlightenment prophecy about ‘what it means to be human.’  

The perennial systematic and doctrinal anthropology of Christianity understands human being as free-willed, conscious, unified body, soul and spirit. It offers credible answers to the urgent questions and cross-pressures society is now wrestling with. It also offers an ethical framework for answering the question ‘what ought computers to be used for, and what ought computers not to be used for – even if they appear able to be used for anything and everything?’

This Christian philosophical perspective on the twin underlying metaphysical questions of human being and purpose is not being called into question, either at this global summit or by any developments in AI today or in the foreseeable future. It can, however, increasingly be called into service to answer those questions – at least for those with ears to hear.  


Machines and their ghosts

What impacts has artificial intelligence had on society, past, present and future? Simon Cross explores just where our machines have got us.

Simon Cross researches ethical aspects of technology and advises on the Church of England's policy and legislative activity in these areas.

A complex of linear and metal parts in a machine-like sculpture.
Machine complexity, in sculptural form.
Ruth Hartnup, CC BY 2.0, via Wikimedia Commons.

But Humanity, in its desire for comfort, had over-reached itself. It had exploited the riches of nature too far. Quietly and complacently, it was sinking into decadence, and progress had come to mean the progress of the Machine. 

E. M. Forster

Human cosmology has changed over the millennia. Not only from the heliocentric to the relativistic but also from organic to mechanistic. Our success in deconstructing nature and exploiting those discoveries to construct ever more capable machines now persuades many that the soul is illusory and the universe made only of physical objects reconfigurable in new and novel ways according to particular mathematical relationships. And yet. And yet the debate about our latest machines, about intelligence, and about the mysterious ghost of human consciousness – let alone soul - continues unresolved across the ages.  

The ghost in the AI machines of the past

The journey from Charles Babbage’s unfinished analytical engines to Elon Musk’s complete business empire of rockets, robot-cars and social media rants is familiar to many. Karel Čapek drew on the Czech word for servitude or serfdom, robota, when he baptised the word robot in his 1920 play R.U.R., or Rossum’s Universal Robots. Čapek’s machines eventually gained a soul but only in the final act of the play. While the term artificial intelligence (AI) is attributed to a gathering at Dartmouth College in New Hampshire, it was Alan Turing who successfully conceptualised how to fabricate robots like those of Čapek’s imagination. Turing neatly sidestepped the pesky question of whether such ‘universal Turing machines’ need human-like consciousness (let alone a soul) in a famous 1950 thought experiment posterity simply calls the Turing Test.  

The invention of finely controlled micro-processors, transcribed ever more tightly onto silicon chips, enabled architectures of increasingly complex algorithmic mathematical operation. After which came operating systems with simple and accessible user interfaces, and programmes exploiting a prolific increase in speed and memory. So too Tim Berners-Lee's invention of the world wide web, whose open protocols, via Mosaic and its browser progeny, have become the operational backbone of the online world. All are tales already familiar or easily told using a now ubiquitous search engine. 

A main feature of the past twenty years has been the network effect. This has concentrated power in a handful of companies, initially the FAANGs (Facebook, Apple, Amazon, Netflix and Google) but now too their Chinese counterparts Tencent and ByteDance (owners of TikTok). A European counterpart is conspicuously absent. 

The Ghost in the AI machines of the present

More recently still, advances in machine learning and the invention of a new suite of tools called 'transformers' have given rise to AI that increasingly resembles its human creators in one task or another – even if the furore over Blake Lemoine and Google’s LaMDA (Language Model for Dialogue Applications) proves the relationship between intelligence, artifice, and consciousness remains deeply contested.  

The metaphysical nature of artificial consciousness notwithstanding, it is worth reflecting on what these machines may be doing to our souls – metaphorical or otherwise. Where have our machines got us?

Two features define the technological landscape of today: data and prediction. Exactly how those ingredients combine depends on the machine in view. 

A satellite orbiting Earth. AI helps interpret atmospheric data into weather forecasts. Meanwhile, the internet itself now accounts for around 2% of carbon emissions. IMAGE CREDIT: ESA–J. Huart, CC BY-SA IGO 3.0

Some of our machines are focussed on the external world. Data gathering, its interpretation and use for prediction underpin a whole suite of tasks from geophysical remote sensing to weather forecasting and predicting real-time energy demand; to medical image interpretation for diagnosis; to monitoring and managing replacement life cycles of critical infrastructure. Not forgetting that the internet itself now accounts for around 2% of annual global emissions.

But many of our machines are focussed on the internal: the mental and psychological world of human being. In the machines of entertainment and social media, data and prediction serve a mundane but vital goal of securing our attention to facilitate advertising. Every user of the web is simultaneously subject and object, exposed to adverts and tailored content (though how tailored it really is, is moot according to some recent research from Mozilla showing that user controls have little effect on which videos YouTube’s influential AI recommends). We are concurrently enmeshed in a secondary and highly sophisticated real-time bidding market that captures trades and parses data about us every time we connect to the web. Shoshana Zuboff calls it surveillance capitalism.  

Ever find it tough to stop doomscrolling or to put your own portable machine down for very long? That’s partly because constant experimentation identifies the type of presentation, not just content, that captivates you most personally. But when it comes to corralling attention, data, prediction, and seductive design aren’t the only options. Designed-in friction makes signing up easy but quitting difficult, while dark patterns add subliminal twists – ambiguously labelled toggles, countdown clocks – that nudge us toward actions favouring the product or service provider. Herbert Simon called it all the attention economy. 


Social media companies are, for reasons buried in the history of American legislation, free from any regulatory responsibility for the content they carry. Yet human souls being what they are, anger, argument and scandal are good for business.  Clickbait arose because algorithms tuning us to surrender our attention neither know nor care how they succeed, which often means a drift towards more extreme content with every run of the autoplay function that is set to on by default and by design.  


The large data sets many of these machines feed off implicitly contain societal structures and values. This only becomes clear when careless labelling and/or processing at statistical scale perpetuates rather than corrects for biases and unjust social structures embedded in the data. Some of our machines inadvertently crystallise inequity, perpetuating harms to society by cementing social and financial exclusion, through racially biased facial recognition, or through predictive policing algorithms.

Our design and use of these machines thus reflects the state of our collective souls, sometimes for good but sometimes for evil. 

Legislation to address such varied challenges and mitigate some of the harms is now in train in Europe and the UK, and also promised in America. But there is much ground to make up. And the tragic suicide of teenager Molly Russell shows how ineffective protection – especially from the machinery of social media – is for the children of today, with unpredictable consequences for society’s future.  

Damaged souls indeed. 

Much has also been made of an imminent Web3 and associated metaverse. On the evidence to date, however, this is more akin to a virtual goldrush in which virtual land and activity thereon can be monetised with the largest profits promised to the first generation of settlers. Claims are staked using NFTs (non-fungible tokens) bought with crypto currencies and deposited on the blockchain. Molly White shows just how soulless much of this new, and alarmingly wild, west really is.  

Investing tens of billions of dollars per year in the metaverse or a single product like Alexa might signal the scale of rewards just around the now virtual corner. But history may equally decide this is an era of malinvestment by a global 1% awash with cheap, quantitatively eased capital and, if not ‘#FOMO’, at least insufficient institutional memory of financial bubbles of yore. Yet even Big Tech’s biggest corporate behemoths are now enduring the chill winds of a tech unicorn winter almost as intense as the one afflicting crypto land.  

Machines with Souls? A ghostly forecast of what lies ahead

Forster’s The Machine Stops envisages a dystopian future where society is unable to maintain the machinery on which it has become dependent. His intuition that the new airships of his own day portended a key infrastructure of the future illustrates the hazards of future-casting. Some nascent technologies fail to live up to the hype (ahem… blockchain and driverless cars, anyone?) and artificial general intelligence (AGI) seems forever destined to be just a few more years, “perhaps a decade”, away, although Elon Musk has yet to accept Gary Marcus’s bet on that timeline. 

So let me venture two more modest but still speculative predictions; one positive and one problematic.  

Positively, the years ahead promise much increase in human augmentation of many kinds. A range of health and medical benefits are now in view, from efficiency gains in healthcare provision and design of medication at molecular level to bespoke pharmacological prescription based on individualised biological markers. Expect more wearable tech to supplement smartwatches.  

Some anticipate an overarching machine of almost Forsteresque proportions via the internet of things (IoT) although political and economic battles over device interoperability and security will, I think, garner increasing public attention and debate in due course.  

Augmented reality will substantially improve safety, and will shift many enhancements from screen to full field of view, with additional benefits for road users and pedestrians alike.  

Increasingly sophisticated geospatial sensing and data processing will enhance our understanding of the climate and biosphere emergencies and how successful various remedial steps prove. New technologies may radically reprice the costs of decarbonisation and unlock energy solutions that remain, as Babbage’s first difference engine was in his own day, the stuff of contemporary dreams. 

This may be the first industrial revolution to be a net eliminator of jobs, although whether that is good news is moot, because navigating the consequences would be deeply challenging both socially and politically. Most of all, I anticipate a proliferation of new technologies and machines over the next few decades that will bolster and complete the reuse and recycle portions of a genuinely circular economy, together with an increasing emphasis on finite planetary budgets.  


Now the problematic development. Top of the list is our newest and hottest ability: to mimetically recreate the surface view of reality using language itself. There are, it seems to me, profound risks posed by the very latest tools of natural language processing like Google’s LaMDA, OpenAI’s ChatGPT and Meta’s Galactica and Cicero.  

The Web to date has been an epistemological wonder. Knowledge has, of course, always been socially embedded. Wikipedia provides an enormous open-access repository of socially agreed knowledge. The discussion pages associated with any article can be hotbeds of debate but the active role of human editors in moderating and agreeing what counts as factual knowledge is both intrinsic and essential to the role that Wikipedia plays in informing and maintaining a flourishing society.  

Marshall McLuhan famously asserted that “the medium is the message”. But now we are on the cusp of a new and novel post-McLuhan era where the machine literally and autonomously manufactures the words and messages it then also mediates, doing both at super-human speed. This new generative AI machinery for reconfiguring words and images carries many consequences, some of which are difficult to predict and some of which may be profoundly negative. Just read these headlines. From CNN: These artists found out their work was used to train AI. Now they’re furious. And, from Forbes: Armed With ChatGPT, Cybercriminals Build Malware And Plot Fake Girl Bots.

Beyond dreams of electric sheep – AI hallucinates

Babbage's Difference Engine No. 1 was conceived to save the government money by preventing the mistakes that almost always crept into tables calculated or copied by hand. But these ultra-modern machines don’t just calculate or copy, they probabilistically infer – which does not necessarily lead to the best explanation. In fact, it does not always lead even to a possible explanation. Large language models (LLMs) like LaMDA, ChatGPT and Galactica ‘hallucinate’, transitioning seamlessly (though, from our perspective, unpredictably) from predicting words and strings that match the actual world to predicting words and strings that portray an unreal world.  

Why does such hallucination happen? The crucial distinction is that human knowledge is consciously and not just socially embedded. But our new machines do not reason the way we do; cannot reason the way we do. As Erik Larson argues persuasively in The Myth of Artificial Intelligence, abductive reasoning of the kind Charles Sanders Peirce outlined – inference to the best explanation – is not yet within the suite of techniques gathered under the rubric of the ‘AI’ these machines practise. 

The consequences can be amusing, but experimentation also shows how difficult these models are to defend against deliberate manipulation by so-called ‘prompt injection’. And the online world is packed to the rafters with bad actors, whether individual or state, enthusiastic to get their hands on a machine that will opaquely mix real-world information with hallucination and then use it to quickly produce and instantly distribute misinformation at the touch of a button. Imagine, for example, an AI generated paper that includes a real scientist but cites and then summarises a paper she never actually wrote. Or imagine an AI that presents a stylistically convincing case for the benefits of consuming ground glass because it ‘knows’ about dietary silica. You don’t need to. It’s already here: Meta Galactica AI Model Suspended After Problems.


I worry that we are about to envelope ourselves in an epistemic fog; a veritable pea souper in which navigation becomes permanently difficult and increasingly dangerous. I hope I’m wrong, but ChatGPT hit a million users within a week of being introduced, and these powerful and captivating machines are being let loose with no regulatory guardrails to stop their creators, or their users, from straying into dangerous territory; no independent oversight; and little to no precautionary principle exercised by the creators and masters of these mimetic machines. 

Perhaps it sounds dramatic but I believe this new generative form of AI is going to transform digitally entangled societies like ours profoundly.  

A final prediction, therefore: a prediction about how such societies, increasingly dependent on the kinds of machine envisaged by Forster or Čapek, will have to adapt and adjust if we are to avoid machine-mediated myopia.

Seeing through the fog

Besides the aforementioned and urgently needed regulatory guardrails, I foresee two other responses that will help societies cope with this rapidly enveloping epistemic fog. First, stronger tools for transparency and verification. Second, better education for digital literacy and digital habits that protect and enhance a healthy soul. 

First, then, transparency and verification. The EU’s new AI Act will require companies to notify users whenever they interact with an artificial agent. Between the technology of deepfakes and game-playing bots like Meta’s Cicero, we have already surpassed the Turing test in increasingly broad areas of human–machine interaction. But I anticipate a further shift in emphasis from ‘explainability’ – how any algorithm works per se – toward transparency – how it impacts and influences both individual users and society emergently. We need more publicly accessible evaluation of the holistic if unintended effects of our machines even now. That need is only going to grow.  


One consequence may well be an increasingly fraught battle between, on the one hand, commercial intellectual property (IP) rights, and, on the other, individual rights and the common good. With the notable exception of sites like Wikipedia, society has so far struggled painfully and inconsistently with the challenges of effective content moderation – especially where values rather than empirical facts are concerned. To pick just one example: Facebook’s secretive behaviour and cherry-picked transparency metrics have wilfully kept both customers and regulators in the dark. The idea that we can mechanise or automate intrinsically value-laden problems by outsourcing them to algorithms, however mimetic the surface results, is patently utopian. Continuing to withhold evidence of biases and harms from generative deepfakery using AI can only invite a steeper descent towards dystopia. And as generative AI combines with increasingly convincing deepfake technology to fool every human sense, the fundamental question of transparency – “who, or what, is really in view here?” – is going to take centre stage with increasing importance.  

A veracity FAQ

Veracity will take on increasing scope as well as importance. Soon not just the ‘facts’ of a matter but equally basic questions like “who (or what?) is saying this?”, “why is this being said?” and “what are the consequences (holistically) of saying this?” will become central to deciding “is this true?” We are now in a situation where truth and fiction can be opaquely intermixed by machines, autonomously, at a pace and a scale – but also at a quality – that will overwhelm any fact-checking of the kind we deploy now. Proving our identity, including the basic fact that we are human, and protecting ourselves not merely from susceptibility to fakes but from being faked, will become increasingly important, and will therefore become central tasks of the next web.   

Clearly there is a role for government here: a need for clear regulation, strong inspection and enforcement mechanisms, and an effective precautionary principle that ensures new techniques and new machines are let loose only in ways that have proven demonstrably safe. There will be a role too for (new?) trustworthy bodies and institutions as fact-checkers and as repositories of verified content. New institutions, as well as new technologies like https://datatrusts.uk/, are a helpful early response. 

Lastly, new demands and new digital habits will be asked of each one of us. The ancients associated a healthy soul with good habits, but we are still at a formative stage of learning – and teaching one another – even healthy digital etiquette, let alone the digital habits and behaviours that keep humans safe and able to thrive as fully rounded souls navigating a world created for us by powerfully mimetic but deceptively soulless machinery. 

It won’t be easy. As Forster and others perceptively show, the machinery of modern life invites our souls towards decadence. Self-control is not in vogue. But the ancients have long associated the good life with cultivating character; with generosity, moderation, and self-less-ness as the only route to becoming truly whole.