We talked to an AI ethics expert about ‘Black Mirror’ Season 4
The best Black Mirror episodes don’t just leave you wondering whether these futures could happen. They force you to consider what it would mean if they did.
For John C. Havens, these aren’t just idle TV musings. He’s the executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, an organization that aims to develop standards around the design and development of artificial intelligence.
In other words, he and his team are the ones trying to keep us from hurtling, unprepared and unaware, into a Black Mirror dystopia. He also happens to be a big Black Mirror fan, which is why we called him up to ask him all the questions that kept us up at night after we finished Season 4.
In general, Havens said, he thinks the show does a “great job” of exploring the ethical and legal issues surrounding cutting-edge AI technology. Like pretty much everyone else who’s seen the latest season, he was especially taken with “U.S.S. Callister,” which deals with a woman whose consciousness, unbeknownst to her, is copied into the cloud.
“There’s a kind of irony in saying computation is all there is to a person.”
As far as the copy is concerned, she is the real Nanette. It takes her some time to come to grips with the notion that she’s a digital duplicate, and that the original is still out there in the physical world. So how are we to think of the second Nanette? Is she a continuation of the original Nanette, or a separate person entirely?
As Havens sees it, the answer isn’t so simple. “There’s a kind of irony in saying computation is all there is to a person,” he said – pointing out that many religions, for instance, “believe there’s something outside of the self that informs who they are.”
He also cautioned against “the assumption that consciousness will be uploaded in the sense of like, I’m John, we figure out how to copy my brain, and it’s A to B going from carbon to silicon. There’s about 48 major philosophical, not just empirical and scientific, but philosophical, and faith-based assumptions in that.”
That said, he agreed that it’s “critical” to grapple with the ethics around the concept. To that end, IEEE has been developing a standard called P7006, which he describes as “an algorithmic AI agent for an individual.”
Essentially, the idea is to give each human being an encrypted cloud that stores and protects all their personal data – all the stuff that gets tracked, copied, and shared every time you log on, and in the aggregate form your online identity.
“When we tweak Facebook and all this stuff right now – A, it happens, and we kind of forget about it. B, other people own and control the system around our identity, in terms of how our data is shared,” he said. “Versus when you have a data vault that’s tied to your name and your identity.”
If then, say, a digital copy of you tried to blackmail you into committing a crime by threatening to release your compromising selfies, P7006 would serve as a sort of “recourse” to help you get your life back on track.
The second episode of the season focuses on a very different type of intel. An overprotective mother fits her young daughter with a chip that will allow her to keep tabs on her child at all times – tracking her movements, monitoring her health, filtering everything she sees and hears, and more.
Needless to say, this turns out to be a very bad idea – but one Havens says he can kind of understand. “I think one thing that they do very well on the show is it’s not just, oh this parent is doing something wrong or oh, the child is doing something wrong,” he said. “For me, you have to have both characters be sympathetic or the ethical issues don’t make as much sense. But as a parent, that really rang true.”
According to Havens, that kind of surveillance technology isn’t so improbable (well, aside from the ability to hack the optic nerve). After all, we already have ID chips for our pets, and trackers and sensors that can locate people or monitor their heart rates. What he finds tough to swallow about the Arkangel technology, as presented in the show, is its permanence.
“I think that’s pretty farfetched because as a parent, any technology that your kid can take off is going to sell a lot better than something that’s permanent,” he said. “Like, that was part of the show that was a little unrealistic.” Something like augmented reality contact lenses, he speculated, would be “a lot more feasible.”
His issue with “Crocodile,” on the other hand, has less to do with the market and more to do with the law. The episode centers on a machine that allows an investigator to watch another person’s memories – and Havens isn’t buying it. It seemed unrealistic, he said, that someone would “legally have the right to go into someone’s house or whatever.”
That raises some obvious concerns about privacy – namely, how that concept would change in a world where other people can access your most intimate thoughts. However, Havens pointed out, the issue is much larger than that.
It’s about who owns your data, who benefits from it, and what it means to consent to sharing it.
“It’s really about data control and parity,” he said. That is, it’s about who owns your data, who benefits from your data, and what it means to consent to sharing it.
“There has to be an equal, long-term understanding that the data that we’re giving away is not just about privacy in the moment,” he said. “It’s a long-term usage of a form of our intellectual property that we should not be able to sign away in the environment that we’re currently in – and certainly not the one we’re going to in terms of virtual reality.”
In other words, it’s not only about whether or not Mia agrees to let Shazia look into her brain. It’s about what Shazia is allowed to do with the information she finds in there, and what exactly Mia has given up by agreeing to participate.
Or actually, make that what Shazia would be allowed to do with that data. In the episode, Mia murders Shazia and her whole family rather than risk having her secrets discovered. Guess Mia felt pretty uneasy about giving her consent after all.
Hang the DJ
Consent also factors into “Hang the DJ,” which follows a man and a woman who keep getting pulled apart by a strangely oppressive and supposedly foolproof dating system, despite their obvious chemistry. The twist is that the couple was in a simulation all along: their entire relationship was a compatibility test administered by an app in the real world.
As Havens noted, that’s not so far removed from reality. “It already happens now, like in Second Life or multiplayer games or even certain dating services, where you might get to interact as a virtual character before you go on a date with the person. That type of simulation, it’s already happening now.”
What’s different in “Hang the DJ” is that Amy and Frank don’t have the option of logging off when they get tired of the system, since they don’t know they’re in a system to begin with. Havens explained that a standard like P7006 could help protect people in situations like that, by giving them a sort of “safeword” to extricate themselves and the ability to set their own terms and conditions before entering the system.
Which is all well and good, but … hang on. How do we know we aren’t in a simulated reality already? What if Elon Musk is right?
Havens acknowledged that there are complicated discussions around that topic right now, but made clear which side he fell on. “No, I don’t think we’re in a simulation right now,” he said.
Then again, maybe we’re worrying about the wrong things. Maybe we’re thinking too conceptually. Maybe we should be looking at good old-fashioned robots. And not the sentient kind.
“Metalhead,” the penultimate episode of the season, follows a human survivor as she’s chased through a bleak and barren dystopia by a robot “dog.” While there’s no explicit explanation as to how the world got this way, there are hints that the “dogs” wiped out most organic life.
So, then, what’s the bigger existential threat? Artificial intelligence, or killing machines that are way too good at their jobs?
“Well, you know, mindless killing machines in general, let’s avoid those, because that’s unfortunate,” said Havens, laughing. “I think that’s not good.”
The problems posed by AI are a little more complicated, and the threat a little more conceptual. “I have a book called Heartificial Intelligence, where the big question I ask is, ‘how will machines know what we value if we don’t know ourselves?'” he said.
“The big question I ask is, ‘How will machines know what we value if we don’t know ourselves?'”
That question becomes increasingly urgent as we become increasingly reliant on ever-more-sophisticated technology to learn our skills and make our decisions for us. The danger there is not advancement, but unexamined advancement.
“In the case of AI, where a lot of the tools are about affective computing, or emotions for the algorithm making decisions for us, the technology is moving faster than individuals, in terms of even knowing how they can continue to make those choices. And that’s where aspects of societal decisions about what to do should be valued.”
That includes reexamining the assumption that what’s new is necessarily better. Havens and IEEE want technology to focus on human well-being – which may mean that sometimes, “you can take a step back and say, just because we can build something doesn’t mean you have to.” You know, like killer robot dogs: Yes, we may be able to make them someday. But should we?
Failing to ask that question is exactly how we get into situations like the ones depicted in “Black Museum,” which ties together three shorter stories within a larger framing device.
In one, a doctor starts using, and then becomes addicted to, a device that allows him to feel his patients’ pain. As far as Black Mirror goes, Havens says, it’s not “overly farfetched” – virtual reality headsets, for instance, can already make users feel like they’re experiencing things they aren’t.
The other ideas are more out there. In the second story, a comatose woman’s consciousness is uploaded into her husband’s brain, and then into a stuffed animal. Which means that at one point, one human body contains two consciousnesses; at another, a human consciousness exists without a human body.
In the third, a prisoner sells his post-death consciousness. After his execution, his consciousness lives on as a hologram under the ownership of a sinister museum proprietor. He’s not just a human consciousness without a human body – he’s a human consciousness without tangible physical form.
It’s easy to see how such technology would complicate our current ideas about personhood. And although Havens estimates we’re at least “30, 40, 50 years” out from feasibility, he emphasized that right now is the time to discuss what we’ll do once we have those options.
“Just because someone has the right to do something, doesn’t mean that they should be allowed.”
“If we keep rushing with this stuff, then things just start springing up,” he said. “We have to ask a lot of questions, and in that way, you, me, anybody, when we realize the technology or whatever might be available, we have a set of agreed-upon principles that aren’t only driven by, hey, this new technology is available, you can do this right now, do it.”
Havens stressed that while he doesn’t want to encroach on personal choice, he believes there should be a “set of policies and standards that can allow a person an individual decision, but there’s a uniformity and kind of a legality about it so that it’s not just random.”
Then there’s the question of how technology might intersect with our fundamental human rights – which isn’t actually all that different from how we consider certain decisions now.
“For instance, if you’re selling yourself into slavery, even though some would say, well, it’s up to you if you want to sell your body to slavery, that doesn’t necessarily mean from a human rights standpoint that it wouldn’t be a violation,” he said. “Just because someone has the right to do something, doesn’t mean that legally they are allowed, or that they should be allowed.”
So, how doomed are we?
With few exceptions, Black Mirror tends toward pessimism for our high-tech future. But where the show focuses on gloom and doom, Havens sees glimmers of hope. He just wishes Black Mirror would reflect those as well.
“I wish they’d cover the data stuff more often, and I wish they had more positive pictures of what the world could look like if your data was something that you didn’t even have to own, but that you could access and control,” he said. It doesn’t even have to be totally speculative – he’d like to see, for example, some of the “really good work in AI policy” being done at the U.K. House of Lords right now.
The key to avoiding a Black Mirror dystopia lies not in rejecting technology, but in embracing humanity.
And while much has been made of Black Mirror’s eerily accurate predictions (never mind that the show isn’t even really trying to guess the future), it’s still better treated as a cautionary tale than a prophecy. Havens warns against assuming that Black Mirror tech – or all the stuff that comes with it – is inevitable.
“It’s one thing to be excited about the future. It’s one thing to say how this technology will help us. But any time there’s a hint of [the idea that] where we are right now is not enough, that’s simply not true. That’s just not true. Is there evil in the world? Sure. Is that going to stop with robots? Of course not. Who builds the robots? People.”
The key to avoiding a Black Mirror dystopia, then, lies not in rejecting technology, but in embracing humanity.
“The starting point for all these different questions that we’re talking about today is this sense of, well, someone’s going to fix these different parts of humanity,” noted Havens. Instead, he believes, we should be turning our focus inward. “Introspection is hard work. But it’s also the way that civilization began.”
In part, that means embracing “who we are right now, without always having to look towards a future where we assume that technological things will save us,” he said. “We have the power within us today to save ourselves.”
Now that’s a notion so empowering, we almost feel emotionally ready to binge-watch Black Mirror again.