Some of our discussions here at SHS about human exceptionalism have considered the prospects of artificial intelligence (AI) and engaged the advocacy, by some, that intelligent computers or robots that had attained true consciousness be declared persons and accorded what today are called human rights. I have expressed profound doubt that any machine will ever actually be intelligent in this sense. That position finds articulate support in this article by Professor David Gelernter in Technology Review. It is a very long article, too long to consider fully here, but well worth the read.
Gelernter believes that "conscious software" is "a near impossibility"; in other words, scientists won't ever create true AI because consciousness involves not just rational thought but also emotions, sensations, and the like, which a machine could almost surely never actually experience. However, he believes that what he calls "unconscious" artificial intelligence, which might be described as capable of two-dimensional as opposed to three-dimensional responses, might be doable. He writes:
Unfortunately, AI, cognitive science, and philosophy of mind are nowhere near knowing how to build one. They are missing the most important fact about thought: the "cognitive continuum" that connects the seemingly unconnected puzzle pieces of thinking (for example analytical thought, common sense, analogical thought, free association, creativity, hallucination). The cognitive continuum explains how all these reflect different values of one quantity or parameter that I will call "mental focus" or "concentration"—which changes over the course of a day and a lifetime. Without this cognitive continuum, AI has no comprehensive view of thought: it tends to ignore some thought modes (such as free association and dreaming), is uncertain how to integrate emotion and thought, and has made strikingly little progress in understanding analogies—which seem to underlie creativity.

Gelernter then explains the difference between conscious thinking and unconscious machine thought:
In conscious thinking, you experience your thoughts. Often they are accompanied by emotions or by imagined or remembered images or other sensations. A machine with a conscious (simulated) mind can feel wonderful on the first fine day of spring and grow depressed as winter sets in. A machine that is capable only of unconscious intelligence "reads" its thoughts as if they were on cue cards. One card might say, "There's a beautiful rose in front of you; it smells sweet." If someone then asks this machine, "Seen any good roses lately?" it can answer, "Yes, there's a fine specimen right in front of me." But it has no sensation of beauty or color or fragrance. It has no experiences to back up the currency of its words. It has no inner mental life and therefore no "I," no sense of self.

As a consequence, no computer or robot would actually be conscious; however dazzling its responses, it would remain a mere machine. Such a machine would thus not present us with the problem of according it human-equivalent moral status, a prospect some enjoy raising in discussions of human exceptionalism and personhood theory. Gelernter also points out the folly of attempting to create a truly conscious machine, believing that even if it could be accomplished, it would be cruel, and that in any event, "No such mind could even grasp the word 'itch.'"
An unconscious machine intelligence could be a useful tool for teaching humans about the workings of the brain. But it would be just that: an inanimate object, a machine, a very valuable piece of property, and nothing more.
Perhaps it is time to put the AI argument against human exceptionalism to bed and focus on ensuring that human rights apply to all of us—not just those who are able to hurdle subjective barriers to full inclusion in the moral community.