Transhumanists and other futurists insist that the future will bring us robots that have become “conscious” beings, and that when it does, such “sentient” machines should receive what we now call human rights. This is all fanciful, of course. Robots of the kind envisioned would only be computers running very sophisticated software. In that sense, they would be no more entitled to rights—and would be no more capable of being harmed, as distinguished from damaged—than a toaster.
A major component of this discussion is the desire to deconstruct human exceptionalism, and thus it warrants our notice. Take, for example, an opinion piece by Peter Singer and a Polish researcher named Agata Sagan, published in The Guardian under the title “When Robots Have Feelings”:
For the moment, a more realistic concern is not that robots will harm us, but that we will harm them. At present, robots are mere items of property. But what if they become sufficiently complex to have feelings? After all, isn’t the human brain just a very complex machine? If machines can and do become conscious, will we take their feelings into account? The history of our relations with the only nonhuman sentient beings we have encountered so far, animals, gives no ground for confidence that we would recognise sentient robots as beings with moral standing and interests that deserve consideration.
The human brain as merely a “complex machine” epitomizes the reductionist thinking so prevalent among the robots-are-people-too crowd. But we are much more than mere complex machines. We are alive, for example. Robots would not be. That should matter, but that relevant fact is ignored because it would serve to support human exceptionalism, which Singer and his cohorts seek to dismantle. Moreover, we do take notice of the intrinsic value of animals and their capacity—unlike robots—to feel pain. That is why we have animal welfare laws—which is consistent with our duties as moral beings, a crucial component of human exceptionalism.
Human behavior arises from a complex interaction of rationality, emotion, abstract thought, experience, education, and so on. That would never be true of robots. More to the point, we are moral beings by nature. Robots would have no “nature,” and any morality they “exhibited” would be programmed morality. We have free will; robots would not, at least not in the same sense. Even if a robot could program itself into ever greater data-processing capacities, that would not make it truly sentient, just more sophisticated.
But Singer and Sagan think the time will come when robots should be given the benefit of the doubt toward personhood:
But if the robot was designed to have human-like capacities that might incidentally give rise to consciousness, we would have a good reason to think that it really was conscious. At that point, the movement for robot rights would begin.
Isn’t it interesting that the anti-human-exceptionalists always seem to look for ways in which animals, robots, and whatever other category they wish to elevate to moral equivalence with humans mimic distinctly human capacities and activities? That proves, if nothing else, that humans are the lodestar of moral worth.
But an impression or mimicking of human capacities isn’t the real McCoy. There is a hierarchy of moral worth, and humans are the exceptional species. Lose that insight, and we not only open the door to harming vulnerable human beings, but we also destroy the necessary philosophical foundation supporting universal human rights.