Transhumanists insist that we are quickly approaching the moment at which technology will become an unstoppable and self-directing power that will usher in the “post-human” era. Getting from here to there requires the invention of “artificial intelligence” (AI): computers and/or robots that become “conscious” and self-programming, independent of human control. Actually, these advocates would say “who” becomes conscious: Transhumanists believe that AI contraptions would become self-aware and thus deserve human rights.
Don’t laugh. “Machine rights” have advocates in high places. A recent article by Hutan Ashrafian, published in the highly respected science journal Nature, is one example. Ashrafian wants a protective charter drafted—“equivalent to that of the United Nations’ Universal Declaration of Human Rights”—that would shield AI machines from being harmed by us and each other:
Humankind is arriving at the horizon of the birth of a new intelligent race. Whether or not this intelligence is ‘artificial’ does not detract from the issue that the new digital populace will deserve moral dignity and rights, and a new law to protect them.
My response to this is simple. Machines have no dignity and no rights, which properly belong exclusively to the human realm. Moreover, AI contraptions would only mimic sentience. As inanimate objects, AI contrivances could no more be “harmed” (as distinguished from damaged) than a toaster. Even if the machines were built with human cells or DNA, they would never be integrated biological beings.
Machine rights advocacy is subversively reductionist. It forthrightly diminishes the meaning and unique value of human life. For example, Princeton utilitarian bioethicist Peter Singer and Polish researcher Agata Sagan insist that AI robots could one day be at risk of becoming the victims of human oppression:
At present, robots are mere items of property. But what if they become sufficiently complex to have feelings? After all, isn’t the human brain just a very complex machine? . . . The development of a conscious robot that (who?) was not widely perceived as a member of our moral community could therefore lead to mistreatment on a large scale.
The approach here rests on a leveling maneuver. If the brain is really just a machine, then any thinking machine deserves the same rights that a working brain possesses. But the human brain—and, more importantly, the mind—is much more than a complex organic computer. As the Stanford physician and bioethicist William Hurlbut told me, “Human consciousness is not mere computation. It is grounded in our full embodiment and intimately engaged with the neural apparatus associated with feeling and action.” In other words, human thought arises from a complex interaction of reason, emotion, abstract analysis, experience, memories, education, unconscious motivation, body chemistry, and so on. That can never be true of AI robots. Even if an AI machine were to attain unlimited processing capacities, it wouldn’t be sentient, just hyper-calculating.
More to the point, we are moral beings by nature. AI computers would have no such inherent characteristics. Humans have free will. Beneath the silicon and the 0s and 1s of their software programming, AI robots would be unthinking slaves to their algorithms. And then there is the spiritual component: The existence of a human soul may be a contentious issue these days, but regardless of whether we each have one, a manufactured machine surely would not.
None of that matters to Singer and Sagan. They believe a time will come when robots should be considered equal persons:
But if the robot was designed to have human-like capacities that might incidentally give rise to consciousness, we would have a good reason to think that it really was conscious. At that point, the movement for robot rights would begin.
It already has, but let’s hope it goes nowhere. There is a proper hierarchy of moral worth, and humans are at the apex. Even enemies of human exceptionalism understand this, which is why they are always looking for analogous capacities in lesser entities—whether animals or AI computers/robots—as a means of bootstrapping them into a position of moral equality with us.
The primary consequence of creating such a false moral equivalence—perhaps its purpose—is the diminishment of the fundamental importance of being human. It is no coincidence that some of the very intellectuals waxing eloquent about personalizing machines also advocate depersonalizing some human beings. If we ever accept that machines (or chimpanzees, or Old Faithful) “are people too,” our value would cease to be seen as intrinsic or unique. We would be merely one kind of entity among many possessing some measure of processing capacity. That is not an act of respecting and ennobling lesser beings. It is an act of disrespecting human beings.
Wesley J. Smith is a senior fellow at the Discovery Institute’s Center on Human Exceptionalism and a consultant to the Patient’s Rights Council.