
Whether it's AI or us, it's OK to be ignorant

Our search for answers begins by recognising that we don’t have them.

Simon Walters is Curate at Holy Trinity Huddersfield.

Image: a street sticker displaying multiple lines reading 'and then?'. Stephen Harlan on Unsplash.

When was the last time you admitted you didn’t know something? I don’t say it as much as I ought to. I’ve certainly felt the consequences of admitting ignorance – of being ridiculed for being entirely unaware of a pop culture reference, of being caught not paying as close attention to what my partner was saying as she expected. In a hyper-connected age, when the wealth of human knowledge is at our fingertips, ignorance can hardly be viewed as a virtue.

A recent study on the development of artificial intelligence holds out more hope for the value of admitting our ignorance than we might have previously imagined. Despite widespread hype and fearmongering about the perils of AI, our current models are developed in much the same way that an animal is trained. An AI system such as ChatGPT might have access to unimaginable amounts of information, but it requires training by humans on which information is valuable, whether it has appropriately understood the request it has received, and whether its answer is correct. The idea is that human feedback helps the AI to hone its model: positive feedback for correct answers, negative feedback for incorrect ones, so that it keeps whatever method led to positive feedback and changes whatever method led to negative feedback. It really isn’t that far away from how animals are trained.

However, a problem has emerged. AI systems have become adept at giving coherent, convincing-sounding answers that are entirely incorrect. How has this happened?

In digging into the training method, the researchers found that the humans training the AI flagged answers of “I don’t know” as unsatisfactory. On one level this makes sense. The whole purpose of these systems is to provide answers, after all. But rather than prompting the AI to go back and rethink its data, this training produced increasingly convincing answers that were not true at all, to the point where the human supervisors stopped flagging sufficiently convincing answers as wrong because they themselves didn’t realise that they were wrong. The result is that “the more difficult the question and the more advanced model you use, the more likely you are to get well-packaged, plausible nonsense as your answer.”
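
To see the incentive at work, here is a deliberately toy sketch in Python – my own illustration, not code from the study, and emphatically not how ChatGPT is actually built. A learner keeps whichever behaviour earns the most positive feedback; because the raters penalise “I don’t know”, the confident guess wins whether or not it is true:

```python
import random

# Two possible behaviours the learner can adopt.
behaviours = ["say 'I don't know'", "give a confident-sounding guess"]
scores = {b: 0.0 for b in behaviours}   # running average feedback
counts = {b: 0 for b in behaviours}

def human_feedback(behaviour: str) -> float:
    # Hypothetical raters, as in the study: "I don't know" is marked
    # unsatisfactory, while a fluent guess often slips past as acceptable.
    return -1.0 if "don't know" in behaviour else 1.0

random.seed(0)
for _ in range(1000):
    # Mostly repeat whatever has scored best so far; occasionally explore.
    if random.random() < 0.1:
        choice = random.choice(behaviours)
    else:
        choice = max(behaviours, key=lambda b: scores[b])
    reward = human_feedback(choice)
    counts[choice] += 1
    # Update the running average of feedback for the chosen behaviour.
    scores[choice] += (reward - scores[choice]) / counts[choice]

print(scores)  # the confident guess dominates, regardless of truth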

Uncovering some of what is going on in AI systems dispels both the fervent hype that artificial intelligence might be our saviour, and the deep fear that it might be our societal downfall. This is a tool; it is good at some tasks, and less good at others. And, like all tools, it does not have an intrinsic morality. Whether it is used for good or ill depends on the approach of the humans who use it.

But this study also uncovers our strained relationship with ignorance. Problems arise in the answers given by systems like ChatGPT because a convincing answer is valued more than admitting ignorance, even if the convincing answer is not at all correct. Because the AI has been trained to avoid admitting it doesn’t know something, all of its answers are less reliable, even the ones that are actually correct.  

This is not a problem limited to artificial intelligence. I had a friend who seemed incapable of admitting that he didn’t know something. Whenever he was corrected by someone else, he would make it sound as though the correct answer was what he had meant all along, rather than whatever he had actually said. I don’t know how aware he was that he did this, but the result was that I didn’t particularly trust anything he said to be correct. Paradoxically, had he admitted his ignorance more readily, I would have believed him to be less ignorant.

It is strange that admitting ignorance is so studiously avoided. After all, it is in many ways our default state. No one faults a baby or a child for not knowing things. If anything, we expect ignorance to be a fuel for curiosity. Our search for answers begins in the recognition that we don’t have them. And in an age when approximately 500 hours of video are uploaded to YouTube every minute, the sum of what we don’t know must by necessity be vastly greater than all that we do know. What any one of us can know is only a small fraction of all there is to know.

One of the gifts of Christian theology is an ability to recognise what it is that makes us human. One of those things is the fact that any created thing is, by definition, limited. God alone can be described by the ‘omnis’: he is omnipotent, omnipresent, and omniscient. There is no limit to his power, his presence, or his knowledge. The distinction between creator and creation means that created things have limits to their power, presence, and knowledge. We cannot do whatever we want. We cannot be everywhere at the same time. And we cannot know everything there is to be known.

Projecting infinite knowledge is essentially claiming to be God. Admitting our ignorance is therefore merely recognising our nature as created beings, acknowledging to one another that we are not God and therefore cannot know everything. But, crucially, admitting we do not know everything is not the same as saying that we do not know anything. Our God-given nature is one of discovery and learning. I sometimes like to imagine God’s delight in our discovery of some previously unknown facet of his creation, as he gets to share with us in all that he has made. Perhaps what really matters is what we do with our ignorance. Will we simply remain satisfied not to know, or will it turn us outwards to delight in the new things that lie around every corner?

For the developers of ChatGPT and the like, there is also a reminder here that we ought not to expect AI to take on the attributes of God. AI used well in the hands of humans may yet do extraordinary things for us, but it will not truly be able to do whatever it wants, be everywhere, or know everything. Perhaps if it were trained to say ‘I don’t know’ a little more, we might all learn a little more about the nature of the world God has made.


Apple’s AI ads show how we can lose our moral skills

Apple Intelligence promises to safeguard us from the worst of ourselves.

Jenny is training to be a priest. She holds a PhD in law and writes at the intersection of law, politics and theology.

Image: a worker at a desk sits back, contemplating. Dour Dale contemplates AI. Apple.

“I got through the three stages of the interview process, and they said I had done well, but they aren’t hiring any computer science graduates anymore. AI is cheaper, and faster.”

John*, a bright 24-year-old coder and philosopher, has just completed an MSc in Computer Science at one of the top universities in the UK. And he can’t find a job. AI has outcompeted him. In a couple of years, he says, entry into the field will require a PhD. What about in ten years, or twenty? Will the only people able to work in computer science be geniuses who can keep up with a technology that is metastasising at a rate of knots? It felt painfully ironic to be discussing, over coffee, the death of an entire sector of meaningful jobs less than a week after the new Labour government announced its plans to “turbocharge” AI (artificial intelligence) as the saviour of the nation’s economy. What are we willing to sacrifice in the name of “national renewal”?

As worrying as John’s story is, there is much more than jobs – and the skills, knowledge and social relations tied up in them – on the line when it comes to AI. The alleged saviour of the nation’s economy is after your soul as well, it turns out.  

This came home to me starkly over the Christmas holidays with the new advertisements for Apple Intelligence tools on the MacBook Pro. In the first ad, “Lazy Lance” – a procrastinating business professional – shifts sheepishly in his seat. He has been asked to make a presentation on the new business prospectus, and he has been caught out, unprepared. But he is saved at the last moment. A click of the “Key Points” button in the new Apple Intelligence software on his MacBook Pro provides him with the summary he needs to avoid becoming the pariah of the team. The sheepish shifting turns to a smug smile: his substandard performance has evaded detection with the ready aid of Apple Intelligence.

In the second ad, “Dour Dale” – a disgruntled office worker – writes a scathing email to the “monster” who has devoured his pudding from the communal fridge. Before clicking send on this missive, he raises his eyes from the raging words on his screen to see a pious teddy bear holding a love-heart which says “find your kindness.” This moral cue from a cuddly toy prompts Dale to select the “Friendly” button from the dropdown list in Apple Intelligence’s writing tools, which immediately converts his childish strop over pudding thievery into a mature response in which he kindly expresses his disappointment along with a polite request for the pudding to be returned. The only moral effort required of Dale is the click of a button; Apple Intelligence sorts out the bile and the blame and re-presents his pudding fury in a professionally palatable manner.

These advertisements for AI tools are designed to provoke an empathetic laugh. Who indeed can honestly say they have never arrived at a meeting unprepared, or at least mentally penned a vindictive response to the tiniest office slight?

However, underneath the easy laughs, I felt a profound sense of dis-ease when watching them. They indicate just how far AI has already begun to penetrate our moral economy. When a technological tool is inserted to disguise or translate our social interactions, our moral relations with each other are deceptively smoothed to avoid the social and personal costs of shame (Lance using “Key Points” rather than owning up to his poor work ethic) and anger (Dale using “Friendly” mode to transform his email from raging diatribe into courteous appeal). As appealing as it sounds to have automatic tech weapons that tranquilise our social and emotional bugbears, they also remove daily opportunities to learn how to live and work together.

For example, as excruciating as it is to be the person who came to the meeting woefully under-prepared, embarrassment can be a very useful corrective in learning the art of time management as well as the virtue of pulling our weight. We probably all know from school what it feels like to work on a group project when only half the group cares about the outcome. If we do not learn the moral skills of responsibility and accountability in our formative years, the workplace becomes a vital school for virtue in adulthood, where we learn what it means to be trusted and how to be worthy of it. As in the case of Lance, AI now offers us everyday tools that help us avoid embarrassment and effectively hide our lack of effort, taking the edge off the very exposure that would help us to grow in both skill and trustworthiness. This is not propaganda for the Protestant work ethic but rather a top survival tip for the human soul in a hyper-capitalist economy. Maintaining the moral significance of our labour as a school of formation in self-respect and trustworthiness does not baptise the extractive and exploitative nature of many workplaces. Rather, it offers a means of resistance to the soul-destroying idea that we are all replaceable, that nothing really matters and that our efforts are simply grist for the eternal and insatiable mill of market supply and demand.

In the case of Dale, Apple Intelligence goes beyond protecting users from social shame: it promises to safeguard us from the worst of ourselves. Of the two Apple Intelligence advertisements, I find Dale’s the more pernicious, because it shows how AI is poised to strike at the root of our individual virtue by inserting itself as an emotional regulator. Rather than doing the difficult work of redrafting the email himself, which would require Dale to examine his own reactions critically and put himself in the shoes of the recipient, Apple Intelligence offers to do it automatically. By short-circuiting the process of recognising the emotions underneath his rage, the tool robs Dale of a critical opportunity to learn what his anger is all about and, even more than that, to practise the art of genuine self-mastery in conflict. The AI tool smooths out the conflict on the surface, while Dale is presumably left with all those rotten feelings built up and unprocessed, because he has not had to do the difficult work of converting his aggressive monologue into a respectful dialogue with another human being.

The insertion of these seemingly innocuous AI tools into the spheres of our everyday, workaday lives introduces new means and modes of (self-)deception into our habits, allowing us to hide much more easily from honest moral evaluation of the quality of our work and of our interpersonal relationships. It also risks new heights of moral “de-skilling” over time, as we live in a social and economic world so deeply mediated by technology that we may very well eventually trust Apple, rather than our own discernment, as the gold standard of professional behaviour. The soul – our very interiority – is the new frontier of economic expansion, in the name of securing Britain’s place in the ranks of global competitiveness.

To AI enthusiasts, all this may sound like Luddite naysaying. Many people find AI tools helpful in the process of research and preparation. Even some priests, I have recently discovered, use ChatGPT to aid sermon-writing. And what, as a priest friend asked me recently, is the problem with these time-saving tools, as long as we use them critically?

Apart from the obvious answer that AI can’t be trusted to get all the facts right, let alone the word of God, this question presumes that human beings’ critical faculties and moral compasses remain fundamentally unaffected by these new technologies. It may be true for older generations (whose formative years occurred well before the meteoric surge of digital technology in the early 2000s) that technology continues to function as an optional extra that makes life a little bit easier. But for Gen Z and below, and even for some younger millennials, intuitive digital technologies have become so fused with the ways we learn and process information that they are no longer – if they ever were – neutral tools for improving our lives. We are only now learning about the extent to which social media has thoroughly penetrated the emotional worlds of teenagers, with severe consequences for their wellbeing. What will be the consequences for the generations to come, when AI becomes so integrated into the emotional and social fabric of our lives that we cannot quite tell where we end and it begins? What is at risk in “turbocharging” AI is not only a huge number of jobs, but also the atrophy of our moral muscles as AI encroaches further into the heartlands of what it means to be human. While a few tech elites may always stay one step ahead of AI and keep it safely in the toolbox rather than the driver’s seat, most of us time-poor plebeians are being taken for the ride of our lives.

 

 *Name changed for anonymity. 
