
Help for the Heelers

The benevolent butterfly effect in Bluey’s season finale.


Bluey and family. Ludo Studio.

Written by Bryan J. This article first appeared in Mockingbird, 23 April 2024. By kind permission.

“Daddy, there’s no God in Bluey’s world, is there?” No joke, my precocious four-year-old PK [Pastor’s Kid] son asked me this question after watching the new Bluey season finale, titled “The Sign.” It’s a deep question, one that comes from a place of honest curiosity — “The Sign” is, without a doubt, one of the most spiritually significant episodes of a series that is known to offer big questions to little kids. The Heeler family is presented with a life-changing decision with no guarantee of a happy ending, and the whole episode features the family wrestling with an unknown future. The fact that my son could pick up on the high stakes of the episode, the philosophical questions about the goodness of providence, and the impossibility of knowing whether the future would be bright or not … let’s just say it justifies the tears that come every time I watch the episode with him. 

Which is five times now. I have watched the finale five times and wept every time. Isn’t this show made for preschoolers? 

“The Sign” reveals a family in transition. After offering a number of hints earlier in the season, we discover that the Heeler family dad, Bandit, has accepted a job offer that pays a lot more money, but will require him to move. Chili (aka Mum) agrees with the choice to move and take the new job, but she has sincere concerns about leaving family, friends, neighbors, city, and a beloved house behind. Bluey, of course, has trouble coming to terms with the idea, as any preschooler would, and Bingo remains blissfully ignorant of the big changes coming her way. The preparations to move coincide with preparations for Uncle Rad’s marriage to family friend (and Bluey’s godmother) Frisky, with the family’s four preschool girls joining in as flower girls. 

Big changes bring big questions, of course. Nowhere is this more evident than in Bluey’s preschool classroom. Calypso, the teacher, models a zen-like spirituality for Bluey and her friends. At the end of story time, Bluey asks her teacher, “Why do stories always have happy endings?” Her teacher responds, “Well, I guess ’cause life will give us enough sad ones.” (Is this a kid’s show?) This inspires a host of sad stories from Bluey’s peers: a guinea pig that ran away, a divorce, a lonely dad. It’s then that Bluey announces her own not-so-happy ending, telling her friends that she is moving. To help the class cope with their sadness, Calypso reads her students a parable of a farmer who approaches all of life’s good luck and bad luck moments with the simple attitude of “we’ll see.” It’s a story without a happy ending (or without any ending, really), and the kids don’t buy it. “Is that it?” asks one, disappointed. “What happens next?” asks another. “Everything will work out the way it’s supposed to, Bluey,” says Calypso with kindness, which Bluey takes to mean that their family house won’t sell (which it does, in the next scene). 

Big questions also mean big feelings. An enraged and fearful Frisky, dismayed to discover her fiancé expects her to move out west where he works, runs away the morning of the wedding. Chili hops in the car, with four preschoolers in tow, to track her down. A series of events, which can only be described as providential, takes place along the way. Chili, Bluey, Bingo, and cousins run into just the right person, spill juice cups at just the right moment, and make pit stops at precisely the needed spot, to find Frisky in quiet reflection at a local hilltop park. Groom-to-be Rad shows up, too. Frisky and Rad talk through their concerns and move on with the wedding, announcing that they’ve chosen not to move west after all. It’s a lovely wedding, with dancing and family and fun and a host of easter eggs for eagle-eyed viewers to enjoy. “You’re having a happy ending!” announces Bluey to her godmother, before turning to her mother and asking, “Do you think we’ll have a happy ending too?” “I don’t know,” replies Chili, “But I’m done trying to figure it out. I just wanna dance.” Cue the happy dance montage. 

Still, providence has not finished working with the Heelers. On moving day, a whole host of minor events from previous episodes collide to cancel the sale of the Heelers’ beloved house. It’s hard to describe every little flap of the butterfly’s wing that impacted this outcome — a combination of stuck coins, romantic encounters at the drugstore, inchworms saved from being squished on the slip-and-slide, and overzealous real estate agents all played their part. In a moving montage, Bandit takes the call about the canceled sale of the home, changes his mind about the new job, symbolically rips the for-sale sign out of his front yard, and is tackled by a loving family who realize they don’t have to move anymore. The family sits on boxes on their empty kitchen floor eating takeout cheeseburgers, relieved of the anxiety of moving, while the show rolls to credits. The song playing in the background is called “Lazarus Drug,” sung by the same voice actor who plays preschool teacher Calypso. It’s a song about love drawing someone back to life, perhaps a nod to the love of Bandit’s family drawing him back to the reality that they may already have a great life, and money wouldn’t make it any better. The Heelers get their happy ending, too. 

The Greeks were the first to use the storytelling tool we know by its Latin name: deus ex machina, god from the machine. In Greek drama, at the climax of a seemingly unsolvable problem, a machine (usually a trap door or crane) would lift or lower an actor onto the stage portraying one of the gods of the Greek pantheon. These gods would step in and provide a solution to the drama’s seemingly unsolvable and complex problem. Nowadays, the term is derisive, an insult that implies lazy writing or poor storytelling. At the time, however, audiences loved these deus ex machina solutions. At the risk of psychoanalyzing the past, one imagines they would have been quite happy to believe that the gods cared enough about the affairs of humans to intervene for a happy ending. 

Deus ex machina is a criticism that can be leveled at this season finale. After all the adults tell the precocious preschooler that life gives out happy and sad endings, we are not given any sad endings. The only way to navigate change, according to the wisdom of the world, is to sit back, embrace a sort of desireless “zen” regarding the future, and say “we’ll see” — but everyone nonetheless gets a happy ending. After bending over backwards to lay out how the future is fickle and unknowable, the show still insists on showing how everything lined up just perfectly for Bluey’s “prayers” to be answered. It’s not just her, either. Aunt Brandy’s desire for a child comes to fulfillment, after we are told numerous times that it is not meant to be (S3E31). Winton’s divorced and depressed father meets the mother of the terrier triplets (S3E45), and the two come together and form a new family. The shaggy hair dogs get their house with a pool. Everything works out just fine. Despite the look of a Greek tragedy, in which everything ends poorly for the protagonists, things turn out fine, just like in every Greek comedy. Or, to put it in Elizabethan terms, what starts out like Hamlet becomes A Midsummer Night’s Dream. 

In Bluey’s world, the happy endings are real. The parents always muster enough energy to play with their kids. The right parenting lesson is always on hand, and handed down with pithy aphorisms. Hurt feelings are acknowledged and reconciled with emotionally intelligent strategies. The love shown between friends and family members is realistic and optimistic. Moreover, in this square dog world of Brisbane, Australia, when parents and grandparents are at the end of their ropes, providence steps in to help guide the way. Happy endings are not so much earned in Bluey’s world as they are a given, or perhaps gifted — sometimes by tired and exhausted parents, but also by an unseen benevolence watching over them. What is grace, after all, if not an unexpected happy ending? 

So how did I respond to my son’s question? “Yes,” I told him, “there is a God in Bluey’s world. Who do you think made all those happy endings come true?” It’s not an answer I should have come up with so quickly. I’m not usually one to offer a succinct one-liner that sums up decades of media study and theology in a bite-sized nugget for my four-year-old. Perhaps, instead, it was providence that gave the answer for me. 


AI will never codify the unruly instructions that make us human

The many exceptions to the rules are what make us human.
Jean Valjean and the candlesticks, in Les Misérables.

On average, students with surnames beginning in the letters A-E get higher grades than those who come later in the alphabet. Good looking people get more favourable divorce settlements through the courts, and higher payouts for damages. Tall people are more likely to get promoted than their shorter colleagues, and judges give out harsher sentences just before lunch. It is clear that human judgement is problematically biased – sometimes with significant consequences. 

But imagine you were on the receiving end of such treatment, and wanted to appeal your overly harsh sentence, your unfair court settlement or your punitive essay grade: is Artificial Intelligence the answer? Is AI intelligent enough to review the evidence, consider the rules, ignore human vagaries, and issue an impartial, more sophisticated outcome?  

In many cases, the short answer is yes. Conveniently, AI can review 50 CVs, conduct 50 “chatbot” style interviews, and identify which candidates best fit the criteria for promotion. But is the short and convenient answer always what we want? In their recent publication, As If Human: Ethics and Artificial Intelligence, Nigel Shadbolt and Roger Hampson discuss research which shows that, if wrongly condemned to be shot by a military court but given one last appeal, most people would prefer to appeal in person to a human judge rather than have the facts of their case reviewed by an AI computer. Likewise, terminally ill patients indicate a preference for doctors’ opinions over computer calculations on when to withdraw life-sustaining treatment, even though a computer has a higher predictive power to judge when someone’s life might be coming to an end. This preference may seem counterintuitive, but the cold impartiality — and at times, the impenetrability — of machine logic, while acceptable for promotions, fails to satisfy the desire for human dignity when it comes to matters of life and death. 

In addition, Shadbolt and Hampson make the point that AI is actually much less intelligent than many of us tend to think. An AI machine can be instructed to apply certain rules to decision making and can apply those rules even in quite complex situations, but the determination of those rules can only happen in one of two ways: either the rules must be invented or predetermined by whoever programmes the machine, or the rules must be observable to a “Large Language Model” AI when it scrapes the internet to observe common and typical aspects of human behaviour.  

The former option, deciding the rules in advance, is by no means straightforward. Humans abide by a complex web of intersecting ethical codes, often slipping seamlessly between utilitarianism (what achieves the greatest good for the greatest number?), virtue ethics (what makes me a good person?), and theological or deontological ideas (what does God or wider society expect me to do?). This complexity, as Shadbolt and Hampson observe, means that: 

“Contemporary intellectual discourse has not even the beginnings of an agreed universal basis for notions of good and evil, or right and wrong.”  

The solution might be option two – to ask AI to do a data scrape of human behaviour and use its superior processing power to determine if there actually is some sort of universal basis to our ethical codes, perhaps one that humanity hasn’t noticed yet. For example, you might instruct a large language model AI to find 1,000,000 instances of a particular pro-social act, such as generous giving, and from that to determine a universal set of rules for what counts as generosity. This is an experiment that has not yet been done, probably because it is unlikely to yield satisfactory results. After all, what is real generosity? Isn’t the truly generous person one who makes a generous gesture even when it is not socially appropriate to do so? The rule of real generosity is that it breaks the rules.  

Generosity is not the only human virtue which defies being codified – mercy falls at exactly the same hurdle. AI can never learn to be merciful, because showing mercy involves breaking a rule without having a different rule or sufficient cause to tell it to do so. Stealing is wrong, this is a rule we almost all learn from childhood. But in the famous opening to Les Misérables, Jean Valjean, a destitute convict, steals some silverware from Bishop Myriel who has provided him with hospitality. Valjean is soon caught by the police and faces a lifetime of imprisonment and forced labour for his crime. Yet the Bishop shows him mercy, falsely informing the police that the silverware was a gift and even adding two further candlesticks to the swag. Stealing is, objectively, still wrong, but the rule is temporarily suspended, or superseded, by the bishop’s wholly unruly act of mercy.   

Teaching his followers one day, Jesus stunned the crowd with a catalogue of unruly instructions. He said, “Give to everyone who asks of you,” and “Love your enemies,” and “Do good to those who hate you.” The Gospel writers record that the crowd were amazed, astonished, even panicked! These were rules that challenged many assumptions about the “right” way to live – many of the social and religious “rules” of the day. And Jesus modelled this unruly way of life too – actively healing people on the designated day of rest, dining with social outcasts and having contact with those who had “unclean” illnesses such as leprosy. Overall, the message of Jesus was loud and clear: people matter more than rules. 

AI will never understand this, because to an AI people don’t actually exist, only rules exist. Rules can be programmed in manually or extracted from a data scrape, and one rule can be superseded by another rule, but beyond that a rule can never just be illogically or irrationally broken by a machine. Put more simply, AI can show us in a simplistic way what fairness ought to look like and can protect a judge from being punitive just because they are a bit hungry. There are many positive applications to the use of AI in overcoming humanity’s unconscious and illogical biases. But at the end of the day, only a human can look Jean Valjean in the eye and say, “Here, take these candlesticks too.”   
