The left and the right in American legal thought are more alike than different. They are united in their skepticism, especially their skepticism concerning values. Justice Oliver Wendell Holmes Jr. sounded the principal theme of twentieth-century jurisprudence when he wrote that moral preferences are “more or less arbitrary. . . . Do you like sugar in your coffee or don’t you? . . . So as to truth.” Judge Learned Hand added that values “admit of no reduction below themselves: you may prefer Dante to Shakespeare, or claret to champagne, but that ends it.” Hand insisted that “our choices are underived” and that “man, and man alone, creates the universe of good and evil.”
Taxonomies of legal scholarship place both law and economics and public choice theory on the political right and feminist jurisprudence, critical legal studies, and critical race theory on the left. Nearly all of the scholars associated with these movements, however—like most of the pragmatists and functionalists in the center—echo Holmes and Hand.
Justice Holmes’ letters and speeches revealed the depth of his skepticism:
• I don’t see why we mightn’t as well invert the Christian saying and hate the sinner but not the sin. Hate . . . imports no judgment. Disgust is ultimate and therefore as irrational as reason itself—a dogmatic datum. The world has produced the rattlesnake as well as me; but I kill it if I get a chance, as also mosquitoes, cockroaches, murderers, and flies. My only judgment is that they are incongruous with the world I want, the kind of world we all try to make according to our power.
• I think “Whatsoever thy hand findeth to do, do it with thy might” infinitely more important than the vain attempt to love thy neighbor as thyself.
• I see no reason for attributing to a man a significance different in kind from that which belongs to a baboon or to a grain of sand.
• I take no stock in abstract rights . . . [and] equally fail to respect the passion for equality.
• I think that the sacredness of human life is a purely municipal ideal of no validity outside the jurisdiction. I believe that force, mitigated so far as may be by good manners, is the ultima ratio . . . . Every society rests on the death of men.
• Doesn’t this squashy sentimentality of a big minority of our people about human life make you puke? [That minority includes people] who believe there is an onward and upward—who talk of uplift—who think that something in particular has happened and that the universe is no longer predatory. Oh bring in a basin.
• You respect the rights of man—I don’t, except those things a given crowd will fight for.
• All my life I have sneered at the natural rights of man.
Although Holmes said that he had values and called these values his “can’t helps,” his values were difficult to identify. They included only one political cause, his “starting point for an ideal for the law”: “I believe that the wholesale social regeneration which so many now seem to expect, if it can be helped by conscious, coordinated human effort, cannot be affected appreciably by tinkering with the institution of property, but only by taking in hand life and trying to build a race. That would be my starting point for an ideal for the law.”
Holmes wrote of “substitut[ing] artificial selection for natural by putting to death the inadequate” and of his contempt for “socialisms not prepared . . . to kill everyone below standard.” He declared, “I can imagine a future in which science . . . shall have gained such catholic acceptance that it shall take control of life, and condemn at once with instant execution what now is left for nature to destroy.” He spoke of the possibility of a future civilization “with smaller numbers, but perhaps also bred to greatness and splendor by science,” and he wrote, “I can understand saying, whatever the cost, so far as may be, we will keep certain strains out of our blood.”
Recent writings about Holmes describe him variously as “the great oracle of American legal thought,” “the only great American legal thinker,” and “the most illustrious figure in the history of American law.” Holmes’ view that rights are the bones over which people fight has found many champions in the academy.
On the left, members of the critical legal studies (CLS) movement are the heirs of Holmes. They take as their motto, “Law is politics,” an aphorism that does not seem to refer to the politics of civic virtue. For CLS scholars, politics is who you cheer for. The white male elite who wrote the laws cheered for themselves; CLS cheers for their victims. That is all there is to it.
Many radical feminists and critical race theorists are also the heirs of Holmes. They deny the possibility of a principled response to the unprincipled subordination of minorities and women. “Feminism does not claim to be objective, because objectivity is the basis of inequality,” writes Ann Scales. “Feminism is result-oriented. . . . When dealing with social inequality there are no neutral principles.” Catharine MacKinnon adds, “When [sex inequality] is exposed as a naked power question, there is no separable question of what ought to be . . . . In this shift of paradigms, equality propositions become no longer propositions of good and evil, but of power and powerlessness, no more disinterested in their origins or neutral in their arrival at conclusions than are the problems they address.” Kimberle Crenshaw, a founder of critical race theory, describes this movement as “grounded in a bottom-up commitment to improve the substantive conditions” of minorities rather than in opposition to “the use of race . . . to interfere with decisions that would otherwise be fair or neutral.”
On the right, “public choice” theorists are the heirs of Holmes. They see most legislation as the product of the capture of the legislature by whatever group bid highest. Among the themes of public-choice studies have been that laws ostensibly designed to prevent securities fraud were the product of bankers’ fears of losing savings deposits, that the votes of federal trial judges on the constitutionality of sentencing guidelines were affected by the prospects of the judges’ appointment to appellate courts, that nineteenth-century politicians allowed fires to rage out of control to encourage the establishment of patronage-generating fire departments, and that members of Congress have supported the creation of federal bureaucracies in order to gain political support by aiding constituents who would be treated unfairly by these bureaucracies.
In their normative writings and in their descriptions of common-law doctrine, law and economics scholars generally moderate the view of law as an unprincipled dogfight that characterizes their analysis of the work of legislatures and administrative agencies. The economists substitute a milder version of the skeptical creed, a perspective commonly described as “applied utilitarianism.” But the most prominent of the law and economics scholars, Richard Posner, resists this characterization. Posner maintains that welfare economists seek the maximum satisfaction not of all human desires but only of desires backed by wealth. He claims that by adding a wealth constraint to classical utilitarianism, he has made his moral relativism less troublesome than Jeremy Bentham’s. In Bentham’s utilitarian world, a sadist who derived enough pleasure from torture to outweigh his victim’s suffering would be justified in tormenting her. Jeremy’s joy could excuse Alice’s anguish. As Posner describes his own vision of justice, however, a sadist “would have to buy [his or her] victims’ consent, and these purchases would soon deplete the wealth of all but the wealthiest sadists.”
As Arthur Leff summarized both Posner’s bottom line and Bentham’s, “Each of the Seven Deadly Sins is as licit as any of the others, and as any of the Cardinal virtues.” In the absence of an external source of value, all human desires have become, for economists and many others, equivalent to one another. The pleasure of pulling the wings off flies ranks as highly as that of feeding the hungry. For economically minded scholars, the function of law, the market, and other social institutions is to achieve the maximum satisfaction of human wants regardless of their content (or, if you prefer Posner’s variation, to achieve the maximum satisfaction of desires backed by wealth).
Without the possibility of God, writers like Posner see no escape from their moral skepticism, yet their viewpoint falsifies everyday human experience as much as it does religious tradition. As the theologian James Gustafson observes, people are “valuing animals.” They use their mental capacities to reflect about principles, duties, obligations, and ends. They examine critically the objects of their valuation. They rank desires on more than a scale of intensity. They sense that some desires are not simply stronger but more worthy than others, and they struggle toward a proper ordering of the objects of desire.
The skepticism of welfare economists and utilitarians, however, is only partial. They attempt to put on the brakes before skidding into skeptical free fall. Although utilitarianism views all desires as commensurable with all other desires (and although law and economics treats all desires as commensurable with money), utilitarianism remains an ethical system. It is egalitarian, ranking everyone’s happiness as highly as the king’s. (Richard Posner’s wealth-driven version of the creed, however, abandons this virtue.) Moreover, utilitarianism sometimes demands altruism—for example, by requiring a not-very-happy middle-aged person to sacrifice her life to save the lives of two blissful children on the opposite side of the globe, thereby maximizing the aggregate amount of happiness.
But what makes this system just? Despite Bentham’s insistence that “natural rights are simple nonsense . . . nonsense upon stilts,” does even this subjectivist ethical system require external justification? Can utilitarianism, wealth maximization, or any other ethical system make sense in the absence of “a brooding omnipresence in the sky”? Can one truly brake ethical skepticism halfway down? What’s so great about egalitarianism, altruism, and human (or hippopotamus) happiness anyway?
In the absence of external justification, the desire that everyone’s desires be satisfied is just another desire, and utilitarianism can be no more than a taste. Jeremy Bentham and his followers hoped to maximize happiness; Richard Posner and his followers seek to maximize wealth; and Oliver Wendell Holmes preferred taking life in hand and trying to build a race. Different strokes for different folks. People give up a lot when they give up on God.
Legal scholars of both the left and the right have turned away from Socrates on the issue that marks the largest and most persistent divide in all of jurisprudence. In Athens, four hundred years before Christ, Socrates contended that justice was not a human creation but had its origin in external reality. Thrasymachus disagreed; he insisted, “Justice is nothing else than the interest of the stronger.” Ethical skepticism had a number of champions in the centuries that followed—for example, Thomas Hobbes, who wrote in 1651: “Whatsoever is the object of any man’s appetite or desire, that is it which he for his part calleth good: and the object of his hate and aversion, evil . . . . For these words of good [and] evil . . . are ever used with relation to the person that useth them: there being nothing simply and absolutely so.”
Throughout Western history until the final third of the nineteenth century, however, the views of such moral realists as Cicero, Gratian, Accursius, Bonaventure, Duns Scotus, Thomas Aquinas, Hugo Grotius, Samuel Puffendorf, Edward Coke, John Locke, William Blackstone, Thomas Jefferson, and Abraham Lincoln dominated European and, in later centuries, American law. The twentieth century may have given moral relativism its longest sustained run in Western history. (As I use the term, moral realism refers to a belief in mind-independent moral principles. I use this term and the term natural law interchangeably. Similarly, I do not distinguish between moral skepticism and moral relativism. I use both terms to denote the view that moral principles are the product of individual or group preferences without any grounding in external reality.)
I am not knowledgeable enough to confirm or refute the claim that the decline of societies always has coincided with a weakening or abandonment of the idea of natural law. Similarly, I do not know the extent to which intellectual movements shape society or whether they do at all. Nevertheless, in the words of scholar Albert Borgmann, “the nation’s mood is sullen.” The vices of atomism, alienation, ambivalence, self-centeredness, and vacuity of commitment appear characteristic of our culture. And when Perry Farrell, the lead singer of the rock band Porno for Pyros, shouts the central lyric of twentieth-century American jurisprudence, “Ain’t no wrong, ain’t no right, only pleasure and pain,” another storm cloud may appear on the horizon. Perhaps, amid signs of cultural discouragement and decay, one should expect to hear this lyric from orange-haired, leather-clad rock stars as well as from Judge Posner.
The current ethical skepticism of American law schools (in both its utilitarian and law-as-power varieties) mirrors the skepticism of the academy as a whole. Many pragmatists, abandoning the idea that human beings can perceive external reality (right, wrong, God, gravity, suffering, or even chairs), maintain that the only test of truth is what works or what promotes human flourishing. At the end of a century of pragmatic experimentation, that standard has given us a clear answer: pragmatism and moral skepticism do neither. They are much more conducive to despair than to flourishing. They fail their own test of truth.
As pragmatism foundered, twentieth-century philosophy supplied a new and better test of truth. This epistemology goes by many names—coherency, reflective equilibrium, holism, and inference to the best explanation. (I do not suggest that these terms are synonyms, only that they have much in common and that they capture aspects of the same reasoning process.) As Michael Moore describes the core idea, “Any belief, moral or factual, is justified only by showing that it coheres well with everything else one believes . . . . One matches one’s own particular judgments with one’s more general principles without presupposing that one group must necessarily have to yield where judgments and principles contradict each other.” In effect, current epistemology portrays analogy, induction, and deduction as a single continuous process. It emphasizes the complex, holistic, provisional, and nonfoundational nature of human reasoning.
How people regard the process of knowing (epistemology) bears on what kinds of things they think exist (ontology), and today’s epistemology has the potential to reshape moral reasoning and law. Older images of human reasoning set up morals for a kill. They suggested that “logic” always could be pushed to a “premise” and that reaching this premise ended the game. At this end point, it was everyone for himself.
As Justice Holmes expressed this viewpoint: “Deep-seated preferences cannot be argued about—you cannot argue a man into liking a glass of beer—and therefore, when differences are sufficiently far reaching, we try to kill the other man rather than let him have his way. But that is perfectly consistent with admitting that, so far as it appears, his grounds are just as good as ours.”
Whether people claimed that their premises came from God or admitted that they made them up, their values were the product of “can’t helps,” of leaps of faith unaided by reason. In the end, one could merely assert one’s own personal, existential faith, a faith that might not move any other rational person and that one was likely to assert apologetically and without conviction or with indefensible conviction. Unreasoning faith appeared unavoidable for everyone engaged in normative discourse, religious believer or not.
Religious believers today (the coherentists among us anyway) rely less on blind faith. We despair far less of human reason. People who consider us more dependent on faith and less committed to reason than our disbelieving colleagues often have things backward. More than four hundred years ago, John Calvin recognized the holistic and experiential basis of theological reflection: “Nearly all the wisdom we possess, that is to say, true and sound wisdom, consists of two parts: the knowledge of God and of ourselves. But, while joined by many bonds, which one precedes and brings forth the other is not easy to discern.”
Today’s coherentist epistemology emphasizes that premises do not come from nowhere. They are usually the products of efforts to generalize our experience of the world. We may do this job of inference well or poorly—perceiving patterns sharply or dimly, misperceiving them, or missing them altogether. We move from induction to deduction to induction to deduction in continuous and, we hope, progressive spirals. We generalize, test our generalizations against new experience, then generalize again. Human reasoning does not resemble the operations of a hand-held calculator as much as it does the workings of a computer ranging over a large set of ever-changing data to determine a “best fit” line.
Gilbert Harman puts it this way: “If we suppose that beliefs are to be justified by deducing them from more basic beliefs, we will suppose that there are beliefs so basic that they cannot be justified at all.” Traditional images of human reasoning portray the most important of our beliefs as the least justified. Harman underlines the new epistemology’s response: “These skeptical views are undermined . . . once it is seen that the relevant kind of justification is not a matter of derivation from basic principles but is rather a matter of showing that a view fits in well with other things we believe.”
One implication of this holistic vision of human understanding is that we need no longer mark a sharp divide between faith and reason. Older views declared that unless some “proof” of the existence of God was conclusive, belief in God could rest only on faith or revelation. At the beginning of the twenty-first century, however, the name of the game is not indisputable “proof”; it is inference to the best explanation.
This vision offers a clearer perspective on many theological issues—among them, the argument from design. It does so at the same time that twentieth-century science has multiplied the evidence in support of this classic argument. As scientists reveal phenomena as wondrous as anything in the Book of Genesis, we begin to perceive a more awesome God than God the Watchmaker, the already inspiring God of Newtonian physics. God, more fully revealed but ever more mysterious, is the God of the big bang and the expanding universe and of order in chaos and chaos in order. Moreover, the new epistemology, by emphasizing the unity of emotional and cognitive ways of knowing, has ended the banishment of our sense of wonder, reverence, dependence, and gratitude from our “reasoning” process.
Twentieth-century philosophy suggests the partial collapse of induction and deduction, of cognition and emotion, and of faith and reason. It also suggests the collapse of fact and value. Once we understand that we come to moral conclusions in the same way that we come to empirical conclusions, ascribing a higher ontological status to judgments of fact than to judgments of value cannot be justified.
An illustration may make this proposition less abstract:
I once saw a man beat a horse. Among my thoughts on that occasion were:

1. That is a horse.
2. The horse is in pain.
3. The man’s act is cruel.
4. The man’s act is wrong.
Which, if any, of these things did I know?
Rays of light stimulated my optic nerves, and my brain interpreted the resulting electrical impulses, saying, “That is a horse.” Yet my interpretation of the world could have been erroneous. I might have seen a mule, or it might have been a hologram. It might even have been a sensory impression fed to me, a brain in a vat, by a mad scientist.
Although people in my culture could have constructed other taxonomies, they had taught me a useful category, horse. In an instant and without bringing most of the process to consciousness, I considered the available data—color, size, configuration, movement, and more. I determined that the “horse hypothesis” fit. As Nelson Goodman observes, “Facts are small theories, and true theories are big facts.” Facts and theories are both interpretations of experience, but it does not follow that one interpretation is as good as another. Someone who saw what I saw and said, “Why, that man is beating a frog,” would not have gotten it right.
I often revise my interpretations of the empirical world. At one time, I might have thought that the stick protruding from the beaker of water was bent. I now infer that the same stick is straight. Whole cultures do what I do. Everyone once believed that the world was flat and the sun went around it; those hypotheses offered the simplest explanations of what they saw. But careful people then made closer observations. The old hypotheses did not fit, and the culture accepted new ones.
I knew that the horse was in pain in the same way that I knew it was a horse. The word pain gives a single name to many experiences. It speaks of a pattern in the world just as the word horse does. I envisioned what I would have experienced had the man whipped me, and some of the horse’s visible responses to the whip were like those I might have exhibited. I inferred that the horse and I were members of the same interpretive community; we both knew the meaning of the whip. My judgment that the horse was in pain was, like most other judgments about the world, a potentially fallible inference, but I have enough confidence in this inference to call it knowledge. I knew that the horse was in pain.
When I concluded that the man’s act was cruel, I again inferred a creature’s state of mind from external circumstances. This time it was the man’s state of mind rather than the horse’s. Like many other words, the word cruel combines an empirical and a normative judgment within a single concept. Words with this characteristic are sometimes called thick ethical concepts. They illustrate that people rarely pause to distinguish fact from value in trying to make sense of their experience.
People who have followed me this far may get off before the last stop. They may agree that horses are real, that pain is real, and that cruelty is real. But right and wrong? Those things are not “real.” They’re subjective, relative, contingent, and socially constructed. If someone does not believe that horses are real, that horses can experience pain, or that people can be cruel, that person is crazy. But if this person does not believe that beating horses is wrong, why, that is this person’s choice. Right and wrong are matters of personal or, at most, community taste.
It’s a strange place to get off. The thought that the man’s act was wrong rushed into my mind as quickly and forcefully as the thoughts, “That is a horse,” “The horse is in pain,” and “The man’s act is cruel.” Moreover, the word wrong categorizes and simplifies experience in the same way that the words horse, pain, and cruelty do. In what sense could the latter terms, but not the former, be regarded as capturing external reality? Why should I give greater credence (or different credence) to the first three judgments than to the last? What error lies in regarding all of the words that leapt to my mind as capturing genuine characteristics of what I saw? People consider the unseen forces of Einstein’s physics real because hypothesizing these forces unifies and explains a variety of human experiences. I believe in God, right, and wrong for precisely the same reason.
On March 18, 1989, when I was forty-eight, I knelt beside two other men of equally advanced age and became a Christian. My baptism would have astonished most of the people who had known me in my adult life. It was prompted partly by circumstances of the sort that commonly lead adults to reconsider their direction (in my case, concern and guilt about a child in trouble and distress about a recently failed romance). It was prompted, too, by a teacher, J. B., who succeeded in making sense of things that never had made sense to me before. Finally, odd though it may seem, my conversion from agnosticism to Christianity was prompted by long-standing dissatisfaction with legal and other academic thought as I had experienced it throughout my career. My professional experience had taught me that law not grounded on a strong sense of right and wrong was lousy law. Thinking about law, like thinking about most things, can lead one to God.
In recent years, a resurgence of legal and philosophical writing about moral realism has made natural law respectable again. The image of returning may be appropriate. A child’s trust in her parents is likely to give way to adolescent rejection; and when the child becomes an adult, her adolescent rejection is likely to give way to a new and wiser appreciation of her parents’ virtues. In the same way, lawyers of our new century may develop a new, more mature appreciation of their natural-law heritage—a heritage that lawyers of the past century mostly dismissed as “transcendental nonsense.” The iconoclasm and skepticism of the twentieth century may then yield to a new idealism and a new spirituality.
In the interim, we should stay with T. S. Eliot:
There is only the fight to recover what has been lost
And found and lost again and again; and now, under conditions
That seem unpropitious. But perhaps neither gain nor loss.
For us, there is only the trying. The rest is not our business.
Albert W. Alschuler is the Wilson-Dickinson Professor at the University of Chicago Law School and author of Law Without Values: The Life, Work, and Legacy of Justice Holmes (University of Chicago Press, 2001). This article is adapted from a chapter in Christian Perspectives on Legal Thought, edited by Michael W. McConnell, Robert F. Cochran Jr., and Angela C. Carmella, just out from Yale University Press.