by Eric Beebe
Scrolling through Twitter, in bed late on a weekend in October, I found the first notice I’d seen of what some call the future and others have called the end of humanity. Sophia, an android developed by Hanson Robotics, had spoken at a summit as she was granted citizenship by Saudi Arabia. I turned the volume on my phone way down and tried to listen without waking my girlfriend next to me. To grant a robot citizenship was an unprecedented leap, and I wanted to see how much humanity it had taken from a machine to earn such a distinction. Still, I doubted whether the footage would hold my attention for longer than a sound bite. I’d seen enough “marvels” of the future to warrant skepticism, ones that sounded more like Microsoft Sam than any semblance of a human. But watching the footage, I was surprised. Yes, there was an element of the uncanny to Sophia’s voice and appearance, but, more than any judgment on where her adaptive artificial intelligence stood, one question took root in my mind: how human will she get?
It’s no secret that plots and stories grappling with this kind of advancement in technology have been influencing our thoughts on the matter for decades now. From early science fiction novels to the flashy blockbusters that followed, the question of whether AI and androids are things to be feared or embraced has been more prominent in society than some might recognize. Almost always, these tales include a warning, a hint that humanity should be cautious about advancing technology faster than we can grapple with the repercussions, philosophically or otherwise. In some of the most well-known incarnations, like The Terminator and The Matrix franchises, the warning takes a xenophobic form, assuring viewers that if machines become too much like us—that is, too smart—they must inevitably discover that humans are inferior, unnecessary, and expendable. These plots tap into a deep-seated suspicion of the unfamiliar, which can be twisted all too easily into hysteria. It’s no surprise that common sentiments surrounding the latest developments in AI tend to include at least some notion of, “But how long until it kills us?”
On the opposing end, however, we see cautionary messages about advancement that point the finger at humans and only humans. In plots like those of the video game Fallout 4 or HBO’s TV reboot of Westworld, AI is portrayed as having its capacity for a soul underestimated far more than its threat to the human race. In these, violence on the part of robots is a reaction to human aggression, aggression stemming from an inability to appreciate and respect the sentience of what we’ve created. Whether the androids face enslavement, genocide, or other horrors in these stories, the common factor is an absence of humanity in humans, not AI.
How much of our anxieties about AI are a result of devouring pop culture’s endlessly recycled tropes on technology as our undoing? How many people would fear new creations like Sophia without consuming stories about the evils of Skynet or Agent Smith? Would I have felt compelled to refer to Sophia as “she” rather than “it” if I hadn’t spent hours in a video game aiding the efforts of synthetic humanoids escaping enslavement to live as people? As is expected of art, these stories help to shape our worldview. One can imagine a day when androids are ever-present in our lives, and what then? We may not need to have all the answers now, but if we’re to prepare for when the robots come, we need to take a deeper look at the narratives around us and do what we expect AI to do most: think.
Eric Beebe is a current degree candidate at The Mountainview Low Residency MFA in Fiction and Nonfiction.