Dawn showed faintly in the east. Among the ruins, one wall stood alone. Within the wall, a last voice said, over and over again and again, even as the sun rose to shine upon the heaped rubble and steam:
“Today is August 5, 2026, today is August 5, 2026, today is…”
–Ray Bradbury, "There Will Come Soft Rains"
A few years ago I volunteered for a psychology experiment. I was shown a clear plastic case filled with gears and levers arranged to form what looked kind of like a face, although that just might have been pareidolia (check out Ann Koplow’s definition of that word). The young woman administering the experiment told me the case was a robot named Marvin and I thought, hey, the paranoid android, does he have diodes causing pain in his left side? But I wasn’t the one asking questions. Instead the young woman asked me a series of questions about Marvin. Does Marvin have feelings? Does Marvin think like we do? Does Marvin have rights? As she went down the list I ticked off “no”, feeling a little bad about it, but, hey, it was a machine, not a person.
When the experiment was done the young woman explained to me that she was studying how people respond to machines. She had a different “robot” without a face and with a more technical name. She told me most people responded negatively to the other robot but more positively to Marvin, and I’d just completely blown the results. Maybe I would have felt differently if that uncanny valley had been narrower, but I doubt it.
The odd thing is I've really been into science fiction, and especially robots, my whole life. The first Halloween after Star Wars came out I went as C-3PO, and the first time I saw Forbidden Planet on a Saturday afternoon I thought Robby the Robot was the hero. I still kind of think that, and sometimes when I offer someone a drink I'll add, "Would sixty gallons be sufficient?" and no one ever gets it, but that's another story. And the ethics of artificial life, and especially artificial intelligence, is something science fiction has grappled with for, well, about as long as there's been science fiction. "Robot" comes from a Czech word meaning "forced labor" and entered science fiction in Karel Čapek's 1921 play R.U.R. The term "android" is a compound of ancient Greek words meaning "man-like" and has been used for something resembling a person since at least the early 18th century.
It’s still a big question. The series Humans and the 2004 reboot of Battlestar Galactica are both built on the question of what happens when machines become self-aware. Star Trek: The Next Generation used Commander Data, and Star Trek: Voyager its holographic Doctor, to grapple with the rights and responsibilities of self-aware machines. And back in the Star Wars universe, even though Obi-Wan says, "If droids could think, there'd be none of us here, would there?", it seems pretty clear that the droids can think. They're even programmed with personalities–or is that something that just happens when their ability to process information reaches a certain level? And while the replicants in Blade Runner look biological–almost completely indistinguishable from humans–they're still machines. What happens when you program a machine with an instinct for self-preservation?
And let’s not forget one of film’s most famous thinking machines: HAL 9000. As Dave says, “He acts like he has genuine emotions. Of course he’s programmed that way to make it easier for us to talk to him. Whether or not he has real feelings is something I don’t think anyone can truthfully answer.”
2001 does try to answer that question, though. In the end HAL’s voice runs down like a record player losing power, a mere machine. In 2010, though, we learn that HAL goes on a killing spree because it–or he–was told to lie, causing an internal conflict. The machine has a mental breakdown.
For all that science fiction has wrestled with the question, there still seem to be no answers, but one thing is clear: the more like us machines become, the more they'll tell us about who–or what–we are.