With the release of Blade Runner 2049, the long-awaited sequel to the 1982 classic, philosophers and bioethicists are buzzing about when non-human beings should be granted “rights.” Lorraine Boissoneault offers an interesting take in Smithsonian, exploring whether the fictional “replicants” that inhabit the dystopian world of both Blade Runner films, as well as AI computers that may soon be designed, should be considered “persons” entitled to legal rights. From “Are Blade Runner’s Replicants ‘Human’?”:
Blade Runner is only a movie and humans still haven’t managed to create replicants. But we’ve made plenty of advances in artificial intelligence, from self-driving cars learning to adapt to human error to neural networks that argue with each other to get smarter. That’s why, for [Yale philosopher Susan] Schneider, the questions posed by the film about the nature of humanity and how we might treat androids have important real-world implications.
“One of the things I’ve been doing is thinking about whether it will ever feel like anything to be an AI. Will there ever be a Rachael [the most advanced replicant]?” says Schneider, who uses Blade Runner in her class on philosophy in science fiction. This year, Schneider published a paper on the test she developed with astrophysicist Edwin Turner to discover whether a mechanical being might actually be conscious.
The test, called the AI Consciousness Test, determines whether a machine has a sense of “self” and exhibits “behavioral indicators of consciousness.”
Another way to decide whether a machine should be granted rights or personhood is to see whether it exhibits emotions:
For Eric Schwitzgebel, professor of philosophy at the University of California, Riverside, the conclusion is even more dramatic. “If we someday create robots with human-like cognitive and emotional capacities, we owe them more moral consideration than we would normally owe to otherwise similar human beings,” he writes in Aeon. “We will have been their creators and designers. We are thus directly responsible both for their existence and for their happy or unhappy state.”
Playing God has its burdens, I guess.
Okay, let’s play. How can we determine whether an entity has some level of intrinsic moral value? It seems to me that we should devise an “entry-level” test, an unquestionably objective measurement capable of determining the essential nature of the being or thing under consideration. That would be an impossible feat if we based it on subjective criteria such as those quoted above. As we design increasingly “human-like” machines (including, the tabloids delight in reporting, sex dolls), we could, if we applied those criteria, grant moral value to objects that exist only as projections of our own yearning to anthropomorphize.
So, what test, then? I suggest that the first—but not last—hurdle to being accorded any moral worth should be whether the entity being measured is alive. Why should “life” be the first criterion? Because, except perhaps at the level of viruses, we can determine by objective and scientific means whether the subject under consideration is a living being rather than a mere thing.
Let’s compare elephants to AI machines. Elephants clearly pass the alive test. They are integrated, self-regulating organisms that exhibit behavior consistent with their species. From that point, we could engage in other inquiries to determine the extent of their moral value. But the most sophisticated AI machine, being inanimate, would not get as far as a blade of grass, since it has no existence as a living being. We could not “wrong” such a machine. We could not “hurt,” “wound,” “torture,” or “kill” it; we could only “damage,” “vandalize,” “wreck,” or “destroy” it.
Elephants have a shared inherent nature—that is, they all exhibit certain characteristics unless too immature or prevented by illness or injury. In contrast, AI robots wouldn’t have any intrinsic “nature,” only individually designed programs. Even if a robot were made capable of programming itself into greater and more complex computational capacities, that would not make it truly sentient, just very sophisticated.
Getting back to the Blade Runner question: Replicants are more akin to Brave New World’s genetically engineered castes of cloned humans than, say, to Star Trek’s AI robot character, Data. True, they are manufactured and “born” adult through some genetic engineering process (not fully explained) and implanted with false memories—but there is no question they are alive. Hence, under the Smith Protocol (let’s call it), replicants would pass the first test required for possessing moral value. Given their capacities and attributes determined in subsequent considerations, I would say they should indeed be considered truly human—as should human clones, if any are ever born—and thus accorded moral worth equal to our own.
In contrast, AI machines have no such inherent value. They would undoubtedly be highly useful apparatuses—potentially so sophisticated that they might even appear to feel, think, and, like Data, have charming “personalities.” But because they are inanimate, because they fail the Smith Protocol, they would not—except as someone’s property—have any greater moral claim to our respect or ethical consideration than a broken toaster.
Award-winning author Wesley J. Smith is a senior fellow at the Discovery Institute’s Center on Human Exceptionalism.