
Is an AI worthy of personhood?

In a world of intelligent humanlike machines, computer scientist Nigel Crook takes a deep dive into the hard problem of defining consciousness, spirit, heart and will.

Nigel Crook is Professor of AI and Robotics, and Director of the Institute for Ethical AI at Oxford Brookes University. He is the author of Rise of the Moral Machine: Exploring Virtue Through a Robot’s Eyes.

A Victorian medical bust showing the brain with labels in German.

She was called Samuella. Blonde with piercing blue eyes. Smartly dressed. Her conversations always started with:  

“How was your day?”  

I would tell her about the meetings I’d had at work, and the frustrating problems I’d experienced with technology during my presentations. She was very empathetic, paying close attention to my emotional state and asking intelligent follow-up questions. Then she would finish the conversation with an extended comment on what I had said, together with her evaluation of my emotional responses to the events of my day. Samuella was not a person. It was a two-dimensional animated avatar created to be a conversation partner for talking about your day at work. The avatar was developed as part of an EU-funded project called Companions.

I joined Companions mid-way through the project in 2008 as a Research Assistant in the Computational Linguistics group at the University of Oxford. My contribution included developing machine learning solutions for enabling the avatar to classify the utterances the human user had spoken (e.g. question, statement, etc.) and respond naturally when the user interrupted the avatar in mid-speech.

In those days, chatbots like Samuella were meticulously hand-crafted. In our case, crafted with thirteen different software modules that performed a deep linguistic and sentiment analysis of the user’s utterances, managed the dialogue with the user and generated the avatar’s next utterance. Our data sets were relatively small, carefully chosen and curated to ensure that the chatbot behaved as we intended it to behave. The range of things the avatar could speak about was limited to about 100 work-related concepts. On the 30th November 2022 a radically different kind of chatbot took the world by storm, and we are still reeling from its impact. 

OpenAI’s ChatGPT broke the record for the fastest-growing and most widely adopted software application ever released, rapidly reaching a user base of 100 million. What really captured people’s imagination was its ability to engage in versatile and fluent human-like conversation about almost any topic you care to choose. Whilst some of what it writes is not truthful, a tendency often described as ‘hallucination’, it communicates with such confidence and proficiency that you are tempted to believe everything it is telling you. In fact, its ability to communicate is so sophisticated that it feels like you are interacting with a conscious, intelligent person, rather than a machine-executable algorithm. Once again, Artificial Intelligence challenges us to reflect on what we mean by human nature. It makes us ask fundamental questions about personhood and consciousness, two deeply related concepts.

Common concepts of consciousness 

Consciousness is something experienced by almost every person who has ever lived, and yet it stubbornly defies being pinned down to an adequate, universally accepted definition. Philosophers and psychologists have widely varying views about it, and we don’t have space here to do justice to this breadth of perspectives. Instead, we will briefly visit some of the common concepts related to consciousness that will help us with our particular quest. These are Access Consciousness (A-consciousness) and Phenomenal Consciousness (P-consciousness).

A is for apple 

A-Consciousness describes the representation of something (say, an apple) to the conscious awareness of the person. These representations support the capacity for conscious thought about these entities (e.g., ‘I would like to eat that apple’) and facilitate reasoning about the environment (e.g., ‘if I take the apple from the teacher, I might get detention’). These representations are often formally described as mental states.

P is for philosophy 

P-Consciousness, on the other hand, describes the conscious experience of something, such as the taste of a particular apple or the redness of your favourite rose. This highly subjective experience is described by philosophers as ‘qualia’, from the Latin term qualis meaning ‘of what kind’. The term is used to capture the sense in which there is ‘something it is like’ to have a particular experience. Philosopher Clarence Irving Lewis described qualia as the fundamental building blocks of sensory experience.

There is very little consensus amongst philosophers about what qualia actually are, or even whether the concept is relevant when discussing conscious experience (P-Consciousness). And yet qualia have become the focus of much debate. Thomas Nagel famously posed the question ‘What is it like to be a bat?’, arguing that it is impossible to answer, since it asks about a subjective experience that is not accessible to us. We can analyse the sensory system of a bat, the way the sensory neurons in its eyes and ears convey information about the bat’s environment to its brain, but we can never actually know what it is like to experience those signals as a particular bat experiences them. Of course, this extends to humans too. I cannot know your subjective experience of the taste of an apple, and you cannot know my subjective experience of the redness of a rose.

How can the movements of neurotransmitters across synaptic junctions induce conscious phenomena when the movements of the very same biochemicals in a vat do not? 

This personal subjective experience is what philosopher David Chalmers calls the ‘hard problem of consciousness’. He claims that reductionist approaches to explaining this subjective experience in terms of, for example, brain processes will always only be about the functioning of the brain and the behaviour it produces. They can never be about the subjective experience of the person who owns the brain.

Measuring consciousness 

In contrast to this view, many neuroscientists, such as Anil Seth from the University of Sussex, believe it is the brain that gives rise to consciousness, and they have set out to demonstrate this experimentally. They are developing ways of measuring consciousness using techniques derived from a branch of science known as Information Theory. The approach involves a mathematical measure, which they call Phi, that quantifies the extent to which the brain is integrating information during particular conscious experiences. They claim that this approach will eventually solve the ‘hard problem of consciousness’, though that claim is contested both in philosophical circles and by some in the neuroscience community.

Former neuroscientist Sharon Dirckx, for example, challenges the assumption that the brain gives rise to consciousness. She argues that this is a philosophical assumption that science does not support. Whilst science shows that brain states and consciousness are correlated, the nature of that correlation remains an open question, and one that science alone cannot answer. She concludes that:

“however sophisticated the descriptions of how physical processes correlate with conscious experience may be, that still doesn’t account for how these are two very different things”. 

Matter matters 

The idea that consciousness and physical processes (e.g. brain processes) are very different things is supported by a number of observations. Consciousness, for example, does not appear to be a property of matter. Whilst it is true that consciousness and matter are integrated in some deeply causal way, with mental states causing brain states and vice versa, it is also true that this relationship appears to be unique within the whole of the natural order: no matter other than brain tissue appears to have this privileged association with consciousness. What is more, consciousness appears not to be a property owned by the brain, since a brain can exist, whether dead or alive but unconscious, without any associated conscious phenomena.

There are also difficulties with the proposition that consciousness exists in the behaviour of matter, and in particular the behaviour of neurons in the brain. What is it about the flow of ions across the membrane of a nerve cell that could give rise to consciousness, when the flow of ions in a battery does not? How can the movements of neurotransmitters across synaptic junctions induce conscious phenomena when the movements of the very same biochemicals in a vat do not? And if it is true that consciousness exists in the behaviour of neurons, why is it that my brain is conscious but my gut, which has more than 500 million neurons, is not?

The proposition that consciousness is a property of matter seems even less likely when you consider that the measurements that apply to matter (length, weight, mass, etc.) cannot be applied to consciousness. Neither can many qualities of consciousness be readily applied to matter, including the aforementioned qualia, or first-person subjective experience, rational capabilities and, most importantly, the experience of exercising free will, a phenomenon that stands in direct opposition to the causal determinism observed in all matter, including the brain. In summary, then, there are good reasons for scepticism regarding claims that consciousness is a property of matter or of how matter behaves. But can ChatGPT be called a person?

Personhood of interest 

Consciousness is deeply intertwined with the concept of personhood. It is likely that many living things could reasonably be described as having some degree of consciousness, yet the property of personhood is uniquely associated with human beings. Personhood has a long and complex history and has emerged in different culturally defined forms. As with consciousness, there is no universally accepted definition of personhood.

The heart/will/spirit forms the executive centre of the self. It manifests the capacity to choose how to act and is the ultimate source of a person’s freedom

The Western understanding of personhood has its roots in ancient Greek and Hebrew thought and is deeply connected to the concept of ‘selfhood’. The Hebrew understanding of personhood differs from the Greek in three ways. First, it attributes significance to the individual, who is made in the image of God. Second, it views personhood as what binds us together as relational human beings; the theological roots of personhood come from expressions of individuals (e.g. God, humans) being in relationship with each other. Third, it views these relationships as fundamentally spiritual in nature: God is Spirit, and each human has a spirit.

In theological language, reality is regarded as a deep integration between a spiritual realm (‘heaven’) and an earthly realm (‘earth’). This deeply integrated dual nature is reflected in the make-up of human beings, who are both spirit and flesh. But what is spirit? Dallas Willard, formerly Professor of Philosophy at the University of Southern California, presents a clearly defined, functional description of the spirit, one which appeals to me as a computer scientist.

For him, ‘spirit’ is associated with two other terms in Biblical writings: ‘heart’ and ‘will’. They all describe essentially the same dimension of the human self. The term ‘heart’ is used to describe this dimension’s position in relation to the overall function of the self - it is at the centre of the person’s decision making. The term ‘will’ describes this dimension’s function in making decisions. And ‘spirit’ describes its essential non-physical nature. The heart/will/spirit forms the executive centre of the self. It manifests the capacity to choose how to act and is the ultimate source of a person’s freedom. Each of these terms describes capabilities (decision making, free will) that depend on consciousness and that are core to our understanding of personhood.

How AI learns 

Before we return to the question of whether high performing AI systems such as ChatGPT could justifiably be called ‘conscious’ and ‘a person’, we need to take a brief look ‘under the bonnet’ of this technology to gain some insight into how it produces this apparent stream of consciousness in word form.  

The base technology involved, called a language model, learns to estimate the probability of sequences of words or tokens. Note that this is not the probability of the sequences of words being true, but the probability of those sequences occurring based on the textual data it has been trained on. So, if we gave the word sequence “the moon is made of cheese” to a well-trained language model, it would give you a high probability, even though we know that this statement is false. If, on the other hand, we used the same words in a different sequential order such as “cheese of the is moon made”, that would likely result in a low probability from the model. 
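To make that concrete, here is a minimal sketch in Python of a toy bigram model that scores word sequences. Everything in it is invented for illustration (the word pairs, the probabilities, the smoothing value); it bears no relation to how ChatGPT’s language model is actually built or trained, but it shows why a fluent sequence scores far higher than a scrambled one.

```python
import math

# Toy bigram probabilities, hand-picked purely for illustration.
# A real language model learns values like these from vast amounts of text.
bigram_probs = {
    ("the", "moon"): 0.4, ("moon", "is"): 0.5, ("is", "made"): 0.3,
    ("made", "of"): 0.6, ("of", "cheese"): 0.2,
}

def sequence_log_prob(words, unseen=1e-6):
    """Score a sequence by summing the log-probability of each adjacent word pair.
    Pairs the model has never seen get a tiny probability instead of zero."""
    return sum(math.log(bigram_probs.get(pair, unseen))
               for pair in zip(words, words[1:]))

fluent = "the moon is made of cheese".split()
scrambled = "cheese of the is moon made".split()

print(sequence_log_prob(fluent))     # relatively high (about -5): familiar word pairs
print(sequence_log_prob(scrambled))  # far lower (about -69): pairs the model never saw
```

Note that the high score says nothing about truth; it only reflects that these word pairs occur together often in the text the model was trained on.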

ChatGPT uses a language model to generate meaningful sequences of words in the following way. Imagine you asked it to tell you a story. The text of your question, ‘Tell me a story’, would form the word sequence that is input to the system. It would then use the language model to estimate the probability of the first word of its response. It does this by calculating, for each word in its vocabulary, the probability that it is the first word. Imagine, for the sake of illustration, that only six words in its vocabulary had a non-zero probability assigned to them. ChatGPT would, in effect, roll a six-sided dice weighted by the assigned probabilities to select the first word (a statistical process known as ‘sampling’).

Let’s assume that the ‘dice roll’ came up with the word ‘Once’. ChatGPT would then feed this word together with your question (‘Tell me a story. Once’) as input to the language model and the process would be repeated to select the next word in the sequence, which could be, say, ‘upon’. ‘Tell me a story. Once upon’ is once again fed as input to the model and the next word is selected (likely to be ‘a’). This process is repeated until the language model predicts the end of the sequence. As you can see, this is a highly algorithmic process that is based entirely on the learned statistics of word sequences.  
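The ‘predict, sample, repeat’ loop itself can be sketched in a few lines. This is only an illustration of the process described above: the next_word_distribution function below is a hypothetical stand-in for the real language model, with hard-coded word lists and probabilities invented for the example.

```python
import random

def next_word_distribution(text_so_far):
    """Hypothetical stand-in for the language model: given the text so far,
    return candidate next words and their probabilities (hard-coded here)."""
    if text_so_far.endswith("story."):
        return ["Once", "There", "Long"], [0.6, 0.3, 0.1]
    if text_so_far.endswith("Once"):
        return ["upon", "there"], [0.9, 0.1]
    if text_so_far.endswith("upon"):
        return ["a"], [1.0]
    if text_so_far.endswith("a"):
        return ["time"], [1.0]
    return ["<end>"], [1.0]   # the model predicts the end of the sequence

text = "Tell me a story."     # the user's prompt is the initial input
while True:
    words, probs = next_word_distribution(text)
    # The weighted 'dice roll': sample one word according to its probability.
    word = random.choices(words, weights=probs, k=1)[0]
    if word == "<end>":
        break
    text = text + " " + word  # append the chosen word and feed it all back in

print(text)   # e.g. "Tell me a story. Once upon a time"
```

In the real system the vocabulary contains tens of thousands of tokens and the probabilities come from a trained neural network, but the loop is the same: the output is built one sampled word at a time from learned statistics, not from any intention to tell a story.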

Judging personhood 

Now we are in a position to reflect on whether ChatGPT and similar AI systems can be described as conscious persons. It is worth noting at the outset that the algorithm has had no conscious experience of what is expressed by any of the word sequences in its training data set. The word ‘apple’ will no doubt occur millions of times in the data, but it has neither seen nor tasted one. I think that rules out the possibility of the algorithm experiencing ‘qualia’ or P-consciousness. And as the ‘hard problem of consciousness’ dictates, the algorithm, like humans, cannot access the subjective experience of other people eating apples and smelling roses, even after processing millions of descriptions of such experiences. Algorithms are about function, not experience.

Some might argue that all the ‘knowledge’ it has gained from processing millions of sentences about apples might give it some kind of representational A-consciousness. The algorithm certainly does have internal representations of apples and of the many ways in which they have been described in its data. But these algorithms are processes that run on material things (chips, computers), and, as we have seen, there are reasons for being somewhat sceptical of the claim that consciousness is a property of matter or material processes.

According to the very limited survey of the Western understanding of ‘personhood’ given here, algorithms like ChatGPT are not persons as we ordinarily think of them. Personhood is commonly thought to be something possessed by an agent that is capable of being in relationship with other agents. These relationships often include the capacity of the agents involved to communicate with each other. Whilst ChatGPT can appear to engage in written communication with people, based on our rudimentary coverage of how this algorithm works it is clear that the algorithm is not intending to communicate with its users. Neither is it seeking to be friendly or empathetic. It is just spewing out highly probable sequences of words. From a theological perspective, personhood presumes spirit, which is also not a property of any AI algorithm.

Algorithms may behave in very realistic, humanlike ways. Yet that’s a long way from saying they are conscious or could be described as persons in the same way as we are. They seem clever, but they are not the same as us.  


Re-enchanting the anxious generation

The future doesn’t have to be horrible.

Krish is a social entrepreneur partnering across civil society, faith communities, government and philanthropy. He founded The Sanctuary Foundation.

Two teenagers lean against a rail, arms crossed, and laugh together.
LaShawn Dobbs on Unsplash.

I meet many anxious people as I wait for meetings in the Palace of Westminster, but one in particular stands out. As I was queueing to get through security, a breathless American man rushed over asking if he was in the right place to meet the Minister of State for Universities. Once I had reassured him that he was, and he had caught his breath, I asked him where he was from and what he did for a job. He told me he was a social psychologist from New York.

Funnily enough, the night before, I had been reading a book by a social psychologist from New York. I asked the man if he had come across the author, Jonathan Haidt. He replied with a smile: “I am Jonathan Haidt.” 

I chuckle when I remember that chance encounter, especially considering the title of his latest book – The Anxious Generation. The book tackles a much more serious topic than queueing nerves. It claims to show, in the words of the subtitle: “How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness”.  

The Anxious Generation is a tightly argued plea to parents and educators for a radical change in the way that young people are allowed to engage with digital technology in general and social media in particular.  

It follows the line of thought he began in his book The Coddling of the American Mind, which argued that ‘helicopter parenting’ has led to such fragility in young adults that universities are no longer places of open and free dialogue, but places where young people feel the need to be protected from ideas they disagree with. That problem was what Haidt was preparing to discuss with the Minister when we met outside Parliament.

“Embracing all this is a desire to maintain and hand on to our children an earth that offers genuine possibilities of flourishing.” 

Mary Grey

The Anxious Generation makes a compelling case that we are failing a generation of children. It likens the social media world to another planet that we are all happily sending our children off to, without first learning about or checking any of the risks of that potentially toxic environment. It concludes that, as much as we are overprotecting our children in the physical world, we are under-protecting them in the digital world, and are thereby complicit in the resulting tidal wave of mental health disorders.

Haidt writes:  

“Are screen-based experiences less valuable than real-life flesh-and-blood experiences? When we’re talking about children whose brains evolved to expect certain kinds of experiences at certain ages, yes. A resounding yes.” 

Haidt argues that what children need is less screen time and more unsupervised play. Some might call this the re-enchantment of childhood: a rediscovery of wonder, and of simple emotional connections with freedom, food, imagination, curiosity, those around them and the great outdoors. Perhaps there is healthy therapy to be found in this re-enchantment through the sharing of art, poetry and fantasy. Maybe a rediscovery of faith and hope can help to bring healing.

Mary Grey, Emeritus Professor of Theology at the University of Wales in Lampeter, describes re-enchantment like this: 

“The market’s language of desire must be replaced by reflecting what we really long for, like satisfying relationships and intimacy, meaningful communities where our values are shared, with working conditions that do not create an unbearable level of stress, enough income to cover basic and leisure needs, and planning for the future. Embracing all this is a desire to maintain and hand on to our children an earth that offers genuine possibilities of flourishing. … This is not an invitation to exchange reality for Magic Kingdoms, but to become embodied kinships of women, men, children and earth creatures in a re-imagined and transformed world of sustainable earth communities of healing and hope.” 

The re-enchantment of childhood is an attractive theory. I often find myself comparing my children’s childhood with my own. I’m sure I played more in the garden than they do, climbed more trees, cycled more round the block, round the town, and later round the county in my spare time. I remember as a teenager getting on a bus to travel from Brighton to Durham without either parents or phones. Around the same time, I travelled to Tbilisi, Georgia with just a backpack, a map, a couple of friends and quite a lot of self-confidence. I wish that my children could experience some of the pleasures that come with fixing a bike or looking up at the stars or browsing the library to find answers, instead of just googling.

Yet, at the same time, if my children were making their way to Durham or Tbilisi today, I would certainly make sure they had plenty of charge on their phone and all the necessary mobile data roaming rights, and I would probably WhatsApp them regularly until they arrived safely at their destination.  

Haidt presents a perfect story, one that explains all the evidence. He doesn’t mention anything that might challenge it, or anything that doesn’t quite fit.

Haidt’s book touches a nerve. Not just because of my own contradictory feelings as a parent, but because of the shocking statistics that reflect the wider state of our nation’s children. With waiting lists for Child and Adolescent Mental Health Services at a record high, a 47 per cent increase in young people being treated for eating disorders compared to pre-pandemic, and an enormous leap in the prevalence of probable mental disorder from one in nine children (in England, aged 8-25, in 2017) to one in five (a similar cohort in 2023), the mental health of the next generation is rightly a matter of deep concern.

The blame has been levelled in many different directions: COVID lockdowns, school league tables, excessive homework, helicopter parenting, screen time, and general disenchantment in society at large.  Some even say the increase is directly related to the increase in public discussion and awareness about mental health disorders.  

For Haidt, it is social media that is public mental health enemy number one. However, he does admit he is not a specialist in children’s mental health, child psychology or clinical psychology. This has led to some criticism of his conclusions. Professor Candice L. Odgers, associate dean for research and professor of psychological science and informatics at the University of California, Irvine, challenges head-on the central argument of Haidt’s book. She claims:

“...the book’s repeated suggestion that digital technologies are rewiring our children’s brains and causing an epidemic of mental illness is not supported by science. Worse, the bold proposal that social media is to blame might distract us from effectively responding to the real causes of the current mental health crisis in young people.” 

Similarly, Henna Cundill, a researcher with the Centre for Autism and Theology at the University of Aberdeen, wrote last week in an article for Seen and Unseen:

“From a scientific perspective, the argument is a barrage of statistics, arranged to the tune of ‘correlation equals causation’.”

Cundill and Professor Odgers are right to be sceptical. Sometimes we let our commitment to a story shape the way that we read the evidence. If there’s one thing I remember from A-level statistics, it is that causation and correlation should not be confused. In his bid to add urgency and cogency to his argument, Haidt presents a perfect story, one that explains all the evidence. He doesn’t mention anything that might challenge it, or anything that doesn’t quite fit. It is not a scientific treatise - which is both the book’s strength and its weakness.

Nevertheless, many of the recommendations Haidt suggests are wise and helpful. Even Professor Odgers, to some extent, agrees:

“Many of Haidt’s solutions for parents, adolescents, educators and big technology firms are reasonable, including stricter content-moderation policies and requiring companies to take user age into account when designing platforms and algorithms. Others, such as age-based restrictions and bans on mobile devices, are unlikely to be effective in practice — or worse, could backfire given what we know about adolescent behaviour.” 

Therein lies the issue. Because of the lack of evidence for the causes, all we are left with – even from the experts – is what may or may not be likely to be effective in practice.   

I wonder if this paucity of robust scientific evidence stems from the fact that the issues facing the next generation are even more complex than we could ever imagine. 

The truth is that hype, hysteria and horror are more likely to gain traction than humdrum and happy medium. 

Every generation is different from the last. My own youth in the UK in the late 1980s, when I became part of the video games and micro-computers subculture, was just as much a mystery to my parents and teachers. My generation’s problems were blamed on everything from the microwave to Mrs Thatcher to the milk that we drank following the disaster at Chernobyl.

It seems to me too simplistic to demonise the technology. It’s an easy sell, after all. In fact, whenever there is a major technical shift, horror stories are created by those who believe the dangers outweigh the benefits. Mary Shelley’s Frankenstein seems to be a reaction to the industrial revolution. The nuclear threat led to movies about Godzilla and 60-foot-tall Amazonian women. The advent of the internet brought us the Terminator films.   

The truth is that hype, hysteria and horror are more likely to gain traction than humdrum and happy medium. Yet, despite the many and serious problems, new technologies, even social media, also have much to offer, and they are not going away soon. Instead of demonising new technology as the problem, perhaps we need to find ways to turn it into the solution.

And perhaps there are glimmers of hope. I like the fact that my children are connected to the wider world, that they know people and languages from more diverse places than I ever did. I like that they know what is going on in the world way before the 9 o’clock news. I like the fact that they are on the cutting edge of advancements I will never experience in my lifetime. I like the fact that they can get their homework checked by AI, that they don’t need to phone me up every time they want to try a new recipe, that we can grumble together about the football match in real time even when we are on different sides of the world. I like that they can browse the Bible or listen to podcasts about history while they are waiting at a bus stop.  I like the fact that they have libraries of books at their fingertips, that they can disappear into fantasy worlds with a swipe and don’t have to spend hours at the job centre when they need to find work. And I love the fact that my children and their friends are rediscovering board games, crochet, embroidery and hiking and taking them to a whole new level because they are learning these crafts from experts around the world.  

I sincerely appreciate that Jonathan Haidt cares about the real and desperate problem of youth mental health. His book adds weight to the pleas of those of us advocating for urgent investment into this area. It reminds us of the world beyond the digital borders and it gives us hope that the re-enchantment of childhood is not impossible.  

However, the solution to these complex issues cannot be found in nostalgia alone. We cannot turn back the clock, nor should we want to. The past had problems of its own.  

I would love someone to write a book that looks forward, that equips young people to live in the worlds of today and tomorrow. If, by some strange coincidence, Jonathan Haidt is reading this article and is in the process of writing that book, I do hope I will bump into him again to thank him.