[NOTE: This is the next installment in my series of reviews of Black Mirror. These posts DO include spoilers. Stop reading now if you don’t want to know!]


Back in 2013, when the second season of Black Mirror was released, I was only just beginning to expand what was, at the time, a somewhat niche interest in the tech phenomenon known as the “Uncanny Valley” into a broader research project focused on emergent technologies and human values. I had already written a series of posts on this blog about Japanese roboticist Masahiro Mori’s brilliant (and prescient) “Uncanny Valley” hypothesis, so I was looking for a way to incorporate some of that material into my Intro Ethics course, which focuses heavily on our interactions with technology. As it turns out, Black Mirror’s “Be Right Back” (S2E1) was exactly the pedagogical tool I had been looking for.

“Be Right Back” is now one of two Black Mirror episodes that I regularly teach. (The other is S2E2, “White Bear.”) Not only is it a fascinating episode in its own right, but it is ideal for addressing a number of perennial philosophical themes, including identity, personhood, memory, affects, death, and immortality, just to name a few. One of the greatest things about introducing Black Mirror into the classroom, I have found, is that it packs a tremendous amount of material into a neat one-hour bite.

The episode begins with a young newlywed couple, Martha (Hayley Atwell) and Ash (Domhnall Gleeson), who are in the process of moving into Ash’s boyhood home, renovating it, and making it their own. Their relationship is cute and playfully sweet, as young love often is, and the only crack we can see in it is Ash’s tendency to be a bit too attentive to what is happening on his smartphone and not attentive enough to what is happening right around him. Still, we are meant to understand them as happy.

Tragedy strikes the following morning, when Ash is involved in a fatal accident while returning their moving van. (Although we don’t know for sure, we are led to believe that Ash’s accident may have been caused by the oft-deadly smartphone/driver combination.) At Ash’s funeral, one of Martha’s friends suggests that she sign up for a “service” that would help with her grieving process. The “service” is a program that collects all of the deceased’s social media activity, everything they have said or posted online, and creates an AI chatbot that mimics them. After some initial resistance, Martha signs up for the program, feeds it all of Ash’s public (and, eventually, his private) information, and begins to interact with “Ash,” first in online chat conversations, and later in phone conversations. This proves to be especially poignant as, by this point, we have learned that Martha is pregnant with her dead husband’s first child.
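Nerdy aside: the episode never tells us how the “service” actually works under the hood, so, purely as a thought experiment, here is a minimal sketch in Python of the weakest possible version. Everything in it is my invention (the GriefBot class, the toy posts, the crude bag-of-words retrieval); a real product would presumably train a language model on the corpus. But even this toy makes Martha’s later accusation literal: the bot has no interiority, and the best it can ever do is replay a past performance.

```python
# A toy sketch (all names and data invented) of the episode's fictional
# "service": a bot that mimics the dead by remixing what they posted online.
from collections import Counter
import math

def bag_of_words(text: str) -> Counter:
    """Lowercase the text, split on whitespace, and count word frequencies."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class GriefBot:
    """Replies with the deceased's most similar past post -- nothing more."""
    def __init__(self, posts: list[str]):
        self.posts = posts
        self.vectors = [bag_of_words(p) for p in posts]

    def reply(self, prompt: str) -> str:
        query = bag_of_words(prompt)
        # No interiority, no novelty: the best it can do is return
        # whichever past performance looks most like the question.
        best = max(range(len(self.posts)),
                   key=lambda i: cosine(query, self.vectors[i]))
        return self.posts[best]

# Hypothetical scraps of "Ash's" public feed:
ash_posts = [
    "Haha, that video is mental. Send me the link!",
    "Moving day tomorrow. Dreading the drive with the van.",
    "Who else is watching this right now? Unreal.",
]
bot = GriefBot(ash_posts)
print(bot.reply("Are you dreading the move?"))
# -> "Moving day tomorrow. Dreading the drive with the van."
```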

Oh but wait, it gets creepier.

Martha eventually learns that there is “another level” to the program, one that involves creating an uncannily humanlike, AI-enabled android of the deceased. Martha opts in and, a few days later, “Ash” shows up in a box at Martha’s door, ready to be “activated.” (The activation process is very weird, in part because it seems so very normal, and it involves Martha putting the body in a bathtub and letting it soak in electrolytes until it “awakes.”) Martha’s interactions with the android– I’ll call him “Ash2” for clarity– are as awkward as one might expect at first, but the two eventually settle into domestic life together. And I mean all parts of “domestic life,” including the most intimate activities.

Ash2 is like Ash in almost every way. In some ways, he is a little better. Because Ash never posted online about his intimate sexual activities, Ash2 is forced to scan the internet for pointers on what to do in bed. As a result, and much to Martha’s surprise, Ash2 is an amazing lover. (This is, of course, no surprise at all to anyone who knows the internet.) We slowly realize that the data set animating Ash2 is limited in other ways as well. It includes only information about the version of “Ash” that Ash shared in public, so Ash2 isn’t algorithmically equipped to be angry or unpredictable, or to manage novel situations as Ash would. In one particularly telling moment, Martha remarks “you look like him on a good day,” to which Ash2 replies, “we tend to keep flattering photos of ourselves. I guess I was no different.”

Eventually, the minor differences become major stumbling blocks for Martha, who begins to see Ash2 as “just a few ripples” of the original. She takes him to the top of a nearby “lover’s leap” and orders him to jump off of the precipice. Ash2 doesn’t understand what is happening– he protests that his data set shows no history of Ash having had thoughts of suicide or self-harm– but Martha is insistent. “There’s no history to you,” Martha tells Ash2. And then, in a devastatingly damning description of our modern online life, and one of the most brilliant lines of dialogue from all four Black Mirror seasons, Martha says:

“You’re just a performance of stuff he performed without thinking.”

SPOILER: Ash2 does not jump. He is artificially intelligent, after all, and so he very quickly processes the fact that he is supposed to be afraid of the imminence of his own death. (As I joked to my friend during this scene, “Did you catch it? He just read Being and Time in that one millisecond.”) Martha, a mere human, cannot handle Ash2’s performance of mortal fear, so she takes him back home, stuffs him in the attic with all the other old photos and memorabilia of happier times, and gets on with her life.

When I teach “Be Right Back,” I introduce it with a lecture on Mori’s Uncanny Valley hypothesis and the current state of android development, paired with some fairly standard readings about identity and memory (by Locke, Hume, Descartes, the usual suspects). After students watch the episode, I try to center our discussion around a few essential questions:


(1) Are Ash and Ash2 the same person?
(2) Is Ash2 a “person”?
(3) What is the difference, if any, between the “you” that you perform online and the “you” that you perform IRL?

Almost fifty years ago, in 1970, Masahiro Mori speculated that our interactions with humanlike robots could be graphed to reflect the following phenomenon: humans’ affective responses (familiarity, empathy, affinity) tend to increase as a robot’s design more closely approximates human form and behavior, but only up to a certain degree of similitude, after which there is a sharp negative turn and humans’ responses plummet into a “valley” of uncanny aversion. There are some problems with Mori’s hypothesis, which I’ve written about here, but his basic observation– that human beings find the too-close approximation of humanlike appearance creepy and repellent– has been more or less confirmed by a number of scientific studies over the last half-century. And it is confirmed in my classroom each semester, as students’ overwhelming response to Ash2 is that he is “creepy.”
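Mori’s original graph was qualitative, not quantitative, so every number in the little sketch below is invented purely for illustration. Still, a few lines of Python make the shape of the claim easy to see: affinity climbs with human-likeness, craters into a valley just short of full similitude, and then recovers.

```python
import math

def affinity(likeness: float) -> float:
    """Toy stand-in for Mori's curve: a steady climb toward humanlike
    appearance, minus a sharp Gaussian dip just short of full similitude.
    Every constant here is invented for illustration."""
    climb = likeness
    valley = 1.6 * math.exp(-((likeness - 0.85) ** 2) / 0.004)
    return climb - valley

for x in [0.0, 0.3, 0.6, 0.8, 0.85, 0.9, 1.0]:
    bar = "#" * max(0, int((affinity(x) + 1.0) * 10))
    print(f"likeness={x:.2f}  affinity={affinity(x):+.2f}  {bar}")
# The bars rise, collapse around 0.85 (the "valley"), and recover by 1.0.
```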

Mori and others have observed that our attention to differences between things that appear similar tends to sharpen the more alike those things are. We seem to have a hard-wired cognitive aversion to being deceived, and a corresponding cognitive attachment to being able to distinguish the “like” from the “is.” This is why, for example, we find simple games like “Spot the Difference” so frustrating when we are unable to figure them out. When we know two things are not identical, we are especially keen to identify and maintain their difference, even when the differences are immaterial.
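Our machines, of course, are untroubled by this game. Just as a literalization (the example strings are invented), Python’s standard-library difflib will instantly flag a one-word, entirely immaterial difference between two near-identical descriptions that a human might squint at for a while:

```python
import difflib

# Two near-identical "Ash" descriptions, differing in one immaterial detail.
a = "Ash loved dumb videos, takeaway curry, and the attic.".split()
b = "Ash loved dumb videos, takeaway curry, and the cellar.".split()

# SequenceMatcher reports exactly where the two word-sequences diverge.
for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
    if tag != "equal":
        print(f"{tag}: {a[i1:i2]} -> {b[j1:j2]}")
# -> replace: ['attic.'] -> ['cellar.']
```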

No one really knows why our brains operate like this. It does seem like a useful evolutionary development, as it would no doubt be difficult to survive, much less thrive, without the capacity to distinguish between the real and the apparent, between true and false, between fact and fiction. (See: politics in the current United States.) But when we move beyond situations of actual survival– is that an oasis or is it a mirage?– and think about the ways we have extended this habit of discernment to maintaining borders between things whose “differences” are themselves a bit of a mirage (like racial or gender categories, as I argued here), then we find ourselves confronting a whole other set of metaphysical questions.

This marks the critical difference between Question 1 and Question 2 (above) that I pose in the classroom. Almost all of my students reject the possibility that Ash and Ash2 are identical, and, on the whole, they are able to articulate why with recourse to centuries-old philosophical arguments about identity and diversity. Almost all of their answers involve some variation on the rudimentary distinction “Ash is a person; Ash2 is a robot.” However, it becomes exceedingly difficult for them to articulate why Ash2 is not also a “person,” even if not the same person as Ash.

What I want students to think seriously about is how they define “the human” or “the person,” as well as what investments they have in policing the boundaries of those categories. Moreover, I want them to reckon seriously with just how successful contemporary technologies have become at very closely approximating the human and the person– so much so, in fact, that I could readily provide examples of humanlike technologies (artificial intelligences, chatbots, androids, CGI) whose likenesses would make it practically impossible for students to “spot the difference,” even when they knew that what they were looking at was a likeness.

Case in point: check out Hiroshi Ishiguro (center) with his Geminoid Project doppelgänger, alongside his wife (left) and Professor Henrik Scharfe (right) with their android doubles.

When it comes to questions of the (human) “self,” I describe my philosophical position as Sartrean existentialist with a heavy helping of Butlerian performativity. That is, I think that what we’re referring to when we say “I” is not a soul, not an essence, not an immutable spirit or nature, but rather a set of embodied decisions and actions, which are themselves no more and no less than performances of how we imagine that thing we are performing should appear to others. Ideally, we wouldn’t enact these performances merely out of habit or because we are coerced to do so, but rather freely, intentionally, and reflectively. Nevertheless, as Martha said to Ash2, rightly in my view, when it comes to our performances of our digital selves, we often “perform” who we “are” without thinking.

I’m still working out my own commitments to the border between the human and the humanlike. Increasingly, I find that border almost impossible to maintain. The groundbreaking work being done with androids by people like Hiroshi Ishiguro (creator of the Geminoid Project, pictured above), who believes that something like sonzai-kan (roughly translated from Japanese as “human presence”) can be simulated technologically, has given me pause to reconsider all of the reasons I previously thought human-ness to be unique. And, as I’ve written several times before on this blog, I find Ray Kurzweil’s speculations about the technological singularity compelling, not as science fiction, but as a matter of absolutely pressing philosophical consideration. Above all, though, I’ve found myself more and more compelled over the last several years to insist that not only my students but I, too, stop and think seriously about our digital lives, especially the largely unsustainable differences we want to maintain between our “digital selves” and our “IRL selves.”

I’m not sure that I’m ready to bet the farm on this yet, but I think we may have to imagine Ash2 happy.

Random Episode Notes:

  • If you’re a professor/teacher and are considering including “Be Right Back” in your course, you should know in advance that there is an extended and somewhat graphic sex scene. That scene is essential to the story. If it’s any consolation, I teach at a Catholic University and my Dean observed the class in which we discussed “Be Right Back” and he expressed no reservations whatsoever about my requiring students to watch it. 
  • The scene in which Ash2– banished from the house by Martha and left standing in the front yard all night because he can only travel so far from his “activation point”– asks if he can come back inside because he’s “feeling a bit ornamental” is, imho, one of the most uncannily intelligent things he says in the whole episode.
  • In my experience, most students (and most people) are simply unaware of the fact that android development is already nearly at the “Be Right Back” stage. I think it’s absolutely essential, if you’re going to teach this episode, that you introduce your students to Hiroshi Ishiguro’s work. And even if you’re not a teacher, you should check out the Geminoid Project (linked above) after you watch the episode.
  • Pedagogical ProTip: I have found the following to be super-effective when talking about humanlike androids: at some (preferably mundane) point in the course of your lecture or discussion, just ask students, “How would you feel if the ‘real’ Dr. J walked into the classroom right now and asked you what you thought of her doppelgänger’s performance so far?” (Obvs, substitute your own name.) This is really effective if you can convincingly behave as if there is a real possibility that “real you” is about to walk in the room.
  • Students like to point out that Ash2’s emotional response to being asked to kill himself is “fake.” I think this is a really great opportunity to talk about so-called “authentic” human affects. How do I know you’re sad when you’re sad? Because you “act” sad. You perform the sorts of behaviors that we are taught since childhood indicate “sadness.” Why is Ash2’s performance any different? How is it “fake”?
  • The actor who plays Ash/Ash2 is Domhnall Gleeson, who also played one of the protagonists in the brilliant 2014 AI film Ex Machina. Must be something about this guy that inspires robot lovers.
  • Don’t pass up the chance to make note of the fact that Ash is, in many ways, far more “robotic” than Ash2. The scene where Martha asks Ash what he wants for dinner, and Ash is too obsessed with posting something online to pay attention to the conversation, is a perfect example.
  • Obvs I took the title of this post from the Adele song “Someone Like You,” the lyrics of which I have, over time, come to realize have an uncanny relation to “Be Right Back.” Sometimes we last in love and sometimes it hurts instead. I didn’t anticipate this the first time I taught “Be Right Back,” but I’ve since learned that students are really interested in talking about what constitutes “healthy” or “unhealthy” grieving processes. That still isn’t all that interesting to me, but whatevs, here’s the Adele song if you decide you want to make use of it.

