Legally speaking, we live in strange times. Late-twentieth-century American law has a strong libertarian streak––stronger than ever in our past and getting stronger all the time––which shows up in the law’s treatment of contentious moral topics like abortion, medical treatment for the terminally ill, divorce (though libertarianism is fading a bit here), and the contents of the local drugstore’s magazine rack. In these spheres, the reigning legal ideology is something close to laissez-faire.
Yet late-twentieth-century America is also a society with a large and interventionist state. Our law reflects this in a host of ways. We take for granted legal regulation of every aspect of our economic lives––consider the detail with which the labor and tax laws regulate employment of babysitters (the bane of might-have-been Attorneys General). The law also aggressively regulates our social lives––consider the antidiscrimination laws that cover all contractual relationships (not just employment), bar discrimination based on a large and growing number of criteria (not just race), and often require careful accommodations to members of potentially discriminated-against groups. In other words, we are both more libertarian and more regulated than we have ever been. What gives?
I think a large part of the answer goes to a pair of attitudes about law and what it can accomplish. We are both too optimistic and too pessimistic. The mushrooming of ever more detailed and intrusive legal regulation in the last generation or so reflects the idea that smart lawyers can order the world not just reasonably well, but optimally. A kind of legal fine-tuning that used to be rare is now common, as though with a legal nip here and a tuck there, we can get people to do precisely what they ought to be doing. Meanwhile, the libertarian streak in our law reflects a different and much more pessimistic idea: that we are collectively incompetent to decide what is right––what magazines school children should and shouldn’t read, what kinds of treatment pregnant women should and shouldn’t give their not-yet-born offspring, what kinds of medical choices are and aren’t appropriate for very sick but not-yet-dead patients, and so forth.
This combination of extreme optimism about the law’s ability to shape behavior and extreme pessimism about the law’s ability to draw fairly basic moral lines is sometimes thought to be a consequence of living in a liberal democracy. The idea is that in a democracy the people must be free to govern (hence all the legal fine-tuning), but in a liberal society there must be places the law cannot go (hence our vast array of legal rights, the places where libertarianism reigns). This position, though common, is wrong, as a look at other democracies or our own history shows. The truth is that both legal optimism and legal pessimism are based not on anything so grand as the nature of a free society, but on a reigning view about what legal institutions are and aren’t good at. Today, we think those institutions are good at fine-tuning the behavior of everyone from automobile manufacturers to police officers. But we think the same institutions are bad at drawing moral lines––which is why we think such lines ought not to be drawn if even small minorities might not like them.
This pair of attitudes is mostly backward. The law generally does a bad job of fine-tuning. Legal regulation is like a bombing run before the days of computer-guided missiles. The law can usually hit some part of a very large target, and can reliably damage something in the general area of the target, but the surgical strike––taking out just what you want taken out while leaving the rest of the landscape untouched––is almost always beyond its capacity. One thing the law can do, though, is to state basic norms, to reinforce principles that can guide people in their decisions about how to order their lives. Telling people precisely what to do tends not to work, for all the reasons that command economies do not work. But giving them norms for deciding what to do––or, more precisely, reminding them of the norms they already believe in––is what law is best at.
Consider a typical example of contemporary legal fine-tuning. A lot of American cities have public swimming pools. Wherever there are swimming pools, there are tragic accidents: children drown. And whenever a child drowns in a swimming pool, the tragedy looks preventable. If only there had been a lifeguard on duty, or if only the lifeguard had been paying better attention, or if only the city had put up signs warning people not to dive in the shallow end of the pool, or.... Until recently, cities weren’t liable in these cases. Often they were immune from damages for ordinary torts. Where that was not so, either the victim or his parents were generally guilty of some kind of negligence themselves (people don’t usually drown in swimming pools if they’re behaving themselves properly) and until recently, a negligent plaintiff couldn’t recover damages.
A couple of decades ago, this changed. Courts looked at these cases and saw tragedies that need not have happened. If city governments could be induced to take more precautions with their lifeguards, we could save kids’ lives. So courts began to hold cities liable, sometimes for a great deal of money.
What was supposed to happen was clear. We were supposed to get better lifeguards, more warning signs, safer pools––some of which did happen. But even with better lifeguards and more signs, there is some risk that a child will drown. When that happens, there is also some risk of very large damages. Unless the citizenry is willing to pay higher taxes (and the taxes are mostly paid by wealthier residents, who are not the ones who use municipal pools and who therefore may not be interested in ponying up more money for liability insurance), or unless the people using the pool want to pay much higher fees (and in many places they can’t), cities can no longer afford municipal pools, or at least not as many pools as they otherwise could have afforded.
Perhaps that is not such a bad thing––it may mean fewer chances for poor people to swim, but it also means fewer chances for poor people’s kids to drown. But of course the kids aren’t inactive when they would have been swimming. Every parent knows that keeping a child busy can be a very good strategy for keeping him out of trouble. The fewer healthy activities available for poor kids, the more unhealthy activities will attract their attention. The choice is not really between a very few drownings in municipal pools and nothing, but between a very few drownings and whatever else poorer-than-average kids might do on the streets of American cities. So the law may have given us safer pools than we would have had without expanded tort liability, but at the cost of a few more child drug addicts, and perhaps a few deaths of a different sort. If fewer recreational facilities mean more kids doing drugs on the street, we may not be saving lives at all.
One sees this pattern again and again in American law over the past thirty years: abandonment of legal limits on liability in the interest of getting people to behave just right, with ambiguous or perverse results. Courts expand the scope of liability to protect injured consumers, and people in need of drugs cannot get them because potential manufacturers are afraid of suit. Courts broaden the use of damages for “pain and suffering” to compensate for nonmonetary injuries, and doctors get out of obstetrics because malpractice insurance is too high. Congress and courts together expand employers’ potential liability for firing employees unjustly, and employers become wary of hiring people in protected classes for fear that discharge will be impossible. These developments were not the product of some venal desire to give lawyers more business, or indeed of any bad motive at all. The courts acted to right wrongs, to compensate injured people, and to eliminate needless tragedies. But the world is an endlessly complicated place, and it is very hard to get someone to behave in just the right way without affecting someone else’s behavior, and very hard to know exactly what the effects will look like. The result is often to trade some tragedies for others, or simply to make more tragedies.
A second example comes from the criminal process. Before 1961, the law of search and seizure (the law that governs when and where and how police officers can look for evidence of crime) was surprisingly thin. Basically, the police were supposed to get a warrant before searching a house, but otherwise there wasn’t much that the law clearly forbade. And even the warrant requirement meant little in practice, because nothing much happened to officers who disobeyed it.
Beginning in 1961, the Supreme Court changed all that. Now if evidence is illegally obtained, it is usually inadmissible in a subsequent criminal trial. Police officers have to pay vastly more attention to the law, and (not coincidentally) there is vastly more law to pay attention to. Today, one can fill a bookshelf with reference volumes devoted to the law governing things like when the police can open the glove compartment of my car, or look inside my lunch bag, or feel the contents of my jacket pocket.
Just as expanded liability was supposed to give us safer swimming pools, so it is fairly clear what all this search and seizure law was supposed to produce: better trained, better behaved police. And the police in America are better trained and better behaved than they used to be, though it is impossible to tell how much of that is due to changes in search and seizure law. (There are certainly other forces that would have pushed toward better policing even in the absence of the law’s intervention.)
But this surge in legal regulation of the police has had some unintended effects. Criminal defense lawyers in America are mostly state-appointed, poorly paid, and overworked, and they do not have time to do much for their clients. They are like doctors doing triage at a military hospital after a battle: they have to pick and choose which claims to raise and which cases to contest, and the pool of claims and cases is large. Because of all the rules about police evidence gathering, overburdened public defenders find search and seizure claims a good investment. But those claims displace something––as with the kids and the swimming pools, those public defenders would be doing something else if they weren’t filing motions to suppress evidence. Some of what they would be doing is raising arguments and digging for evidence that might suggest their clients’ innocence. That, of course, requires much more investigation. And after all, the large majority of their clients are not innocent, so most of the investigation would not be helpful. But sometimes it would––some haystacks are just haystacks, but a few really do hold needles.
Today some number of innocent men and women sit in prison because their lawyers did not take the time to look hard at the facts of their cases, and instead just pressed them to plead guilty and get it over with. That number is higher because of our elaborate law of search and seizure, which siphons off lawyer time and court time from more important issues. We wind up with somewhat better behaved police (though not, if the Rodney King incident is any indication, in the area where we most need better behaved police) at the cost of reducing the level of attention paid to whether defendants did the crimes with which they are charged. The people who lose the most in this tradeoff are those who are both poor and innocent––the very ones who most need the law’s protection.
These phenomena are produced by an attitude toward law and its creation, a bias in favor of legal fine-tuning as the solution to life’s otherwise intractable problems. In most of the areas where legal regulation has expanded over the past few decades, there are real injustices to attack, but there is also great potential for making things worse. In these settings, the costs of legal error are high. The modern American pattern, though, is to embrace the goal of remedying harms, and to downplay error costs.
This is very much a contemporary bias; the classical common law tended to tilt in the opposite direction. Common-law liabilities were coupled with limits that made suit difficult. Successful plaintiffs were few and getting into court was hard. Doctrines governing things like causation (Did this defendant bring about this harm?) and standing (Is this plaintiff entitled to complain about it?) did away with many potential claims. Those who surmounted these hurdles generally found no pot of gold at the end of the rainbow; common-law doctrines on remedies were, on the whole, stingy. The strong pattern was to err on the low side, giving the wronged plaintiff something, but less than he lost.
The story was similar when it came to the various bodies of law that regulated street-level government officials––police officers, school principals, providers of social services, and the like. Various state-law immunities protected those officials from most legal liability. And prior to the 1950s and 1960s, the huge engine of constitutional rights, the source of most regulation of government actors today, largely didn’t exist. Local custom usually governed.
These limits on legal regulation seem unsatisfying today, because they ruled out many deserving claims and because they undercompensated those claims that made it through the court system. But these limits also made it very hard to make it to court with bogus claims, a virtue our legal system has lost. The common law did not aim to compensate every wronged party. Instead, it aimed to ensure that only wronged parties would be compensated, that they would be compensated only by wrongdoers, and that the compensation would not be so generous as to encourage fraudulent claims. The common law’s many limits on liability and recovery did a reasonably good job of accomplishing that set of modest goals. Though the system plainly did some good, much of what it strived for was to avoid doing harm.
Today courts worry much less about the harm, and are much more optimistic about their ability to do good. This last tendency is tied, in part, to another contemporary development: the greater role legal theory plays in shaping legal rules. The law that courts make used to be law that arose as a byproduct of deciding specific cases. The genius of the common law was that the focus was on the cases, not the legal rules the cases spawned. Over the course of this century, and especially over the course of the past forty years, courts and lawyers and law professors have turned their attention ever more toward the rules. More and more, the cases are the byproduct, the occasion for lawmaking. The law is the main event. One sees this in Supreme Court opinions that state the facts of the case at hand and then never mention them again, discussing only the abstract legal question the Court is deciding.
One also sees it in the mushrooming of public interest groups who seek out “test cases”––occasions to challenge some law or practice they don’t like. Practices like these treat judicial lawmaking as an exercise in abstraction, not a necessary incident to resolving particular disputes between particular people. And as law becomes more of an abstract enterprise, theory becomes more central to law.
This rise in the status of theory highlights the essential quality of much contemporary American judging: its arrogance, its confident belief in its own ability to sort out the complicated crosscurrents of ordinary life. Meanwhile, the most striking thing about the older common-law system, a system whose forms we retain but whose attitudes have all but disappeared, is its humility. Legal rules emerged from experience, not from the theorizing of proud judges or professors. The legal system’s vision of itself was not particularly grandiose. Disputes were supposed to be dealt with privately, out of court, if at all possible––hence the difficulty of actually getting a lawsuit into court and before a jury. Not all wrongs were to be righted––at least not in this life. Judicial decisionmaking was not a virtue but an unfortunate necessity, something we have to have in a world filled with sinful and contentious people, but something we should have as little of as we can get away with. Law was seen not as a way of perfecting the world, but as a means of keeping a lid on the worst sorts of behavior.
This legal system of the past was part of a larger system for regulating conduct––a system dominated not by legal rules but by private norms. The role of law was to work with these norms, to reinforce them, to give them a nudge, but not to take on itself the job of getting everyone to behave just as they ought. The virtue of a system of this sort is something legal theorists are only now recognizing, after two generations of rampant legal fine-tuning. The title of Robert Ellickson’s wonderful book on the subject, Order Without Law, is suggestive, but does not get the relationship quite right. Law has always been an important part of the social order; but until recently it wasn’t the heart of the social order. And in the system of one or two centuries ago, law’s prime role was not necessarily to regulate. It was to teach.
A great part of what it taught was morals. This doesn’t mean that the common law forbade all outward sin. Rather, the law rested on accepted moral norms, and its structure and rhetoric aimed to reinforce those norms even when a violation would not give rise to a legal claim. The law of contract was basically about keeping one’s promises; it taught honesty and fair dealing. The law of tort was about taking account of others’ interests even as one advances one’s own; it taught adherence to the Golden Rule. The law of property was about honoring others’ ownership of things and use of one’s own things in ways that were not wasteful; it taught responsibility and the avoidance of covetousness.
I don’t wish to be guilty of romanticizing the legal order of centuries past. The legal system Blackstone knew, like the men who made it, was far from perfect; it was filled with bizarre rules and limits––fodder for Bentham’s attacks and Dickens’ satires––and it allowed for many injustices, especially on matters concerning race and sex. But most of what it did was tied, fairly explicitly, to basic moral precepts, to things like honesty, responsibility, unselfishness, and diligence. As a consequence, even though the legal system of Blackstone’s or Dickens’ day saw less litigation than we see now, their system probably did more than ours does to support the social order by reinforcing the kinds of basic moral lessons that good parents try to teach their children.
In most areas of law, this kind of instruction is disappearing. Today courts are pessimistic about moral argument, about even the possibility of a civil conversation about right and wrong. As a consequence, the law mostly shies away from moral principle, grounding itself instead in social-engineering arguments. The law of contract, tort, and property is less about honesty, unselfishness, and responsibility, and more about the efficient allocation of resources and the reduction of accident costs. Criminal law is less about punishing evildoers than about maintaining social control. These things are not in themselves bad––reducing accident costs is a good thing, as is maintaining a fair level of social order. But we should recognize the possibility that these sorts of social goods work better as byproducts of a healthy legal system than as the system’s goals. Resources may have been allocated more efficiently in a system that worried primarily about honesty and responsibility, and only secondarily about allocating resources. And we surely had better social control in a system that worried more about identifying and punishing evil than about maintaining social control.
Like legal fine-tuning, this shying away from morals is a contemporary phenomenon. A century ago Oliver Wendell Holmes, the great legal skeptic of his age, talked about law as being nothing more than a prediction of what the courts would do on particular facts. He captured this perspective with the image of law seen through the eyes of “the bad man,” the man who was interested in doing whatever he could get away with. Note that even as he advanced this modern, skeptical perspective, Holmes had to use moral talk––the bad man––to be understood. Today, the system has adopted his perspective but dispensed with his terminology, for in order to talk about the “bad man,” one must have some sense of what “bad” means. And, of course, if the legal system does not know what “bad” means, then talking about morals in law looks like a smokescreen, a veil to conceal the system’s hammer coming down on people it doesn’t like.
That is exactly what the Supreme Court said this past summer about Colorado’s Amendment 2, the state constitutional provision that barred local gay rights ordinances. The state defended that provision in the only way it could, by saying it was a way for the majority of Colorado’s citizens to reinforce right norms of sexual behavior. The Supreme Court did not know what to make of this defense. Justice Kennedy, the author of the Court’s opinion, seemed genuinely mystified as to what it could possibly mean to say that most Coloradans want to avoid giving legal protection to behavior they think wrong. The mystery is no accident; the idea is, in today’s legal system, a strange one. Kennedy translated it in the way that most judges would translate it today: He said that Amendment 2 was just an expression of animosity toward a group of people the majority didn’t like. Not surprisingly, given that view of the provision, the Court struck it down.
This is a good picture of how far we have traveled. A century ago, as Holmes well knew, “good” and “bad,” “right” and “wrong” were the profession’s language. Today, if you use that language, you are a bigot. Law is not for reinforcing the majority’s morals––indeed, that is, in the current mindset, about the most dangerous thing law could try to do. The growing number of decisions upholding a right to suicide reflects this judicial attitude, as do decades of cases upholding the right to abortion.
Now one might say here what I said earlier about the common law’s approach to civil liability generally: Perhaps this is just a sound humility at work; perhaps courts rightly distrust their qualifications for moral judgment. But this misperceives how moralism works in law. The common law of Blackstone’s day was not an engine for moral discovery, with courts creating new precepts for the population to follow in the way that today’s Supreme Court regularly gins up new constitutional rights. Courts did not make up the idea that people ought to be honest in their dealings with one another, or that property owners should use their property responsibly. Those were the norms of the society in which the courts operated. A Christian would say they came not from the judges but from the Father; a secularist might say they came from the traditions of the populace. Either way, blinding oneself to those norms is not an exercise in humility. On the contrary, it is the height of pride.
There is another sense in which judicial reticence about morals represents a false humility. In the arenas where moral laissez-faire holds sway, courts only pretend to avoid the issues they claim to be avoiding. Being willfully blind to morals is itself a moral posture, and not a very attractive one. Millions of young men and women think differently about the rights and wrongs of abortion than they would if the law took even a modest stand against it. Millions of parents think differently about the sexual content of the magazines and TV shows their children watch than they would if the law allowed some censorship. Especially in a secular culture, people take moral cues from the law––“constitutionally protected” comes to mean “right,” or at least “acceptable.”
This captures the irony in the law’s posture. When it comes to abortion, or doctor-assisted suicide, or pornography, courts claim to be agnostic. The law purports to leave people free to take their own stands without the interference of legal signals. But the law itself continues to send very strong signals that if a behavior is constitutionally protected, it must be all right. Neutrality is a myth; it always resolves into some form of endorsement.
The result is a vicious circle, with the law and the culture reinforcing the worst in each other. Political majorities used to support what some of us would call a socially healthy level of censorship of the popular culture. Movies, television shows, and magazine racks reflected that level of censorship. The law played a fairly small yet important role; limits were mostly self-imposed, but they were imposed in a regime in which some legal limits existed in the background. Between the late 1950s and the early 1970s, the courts basically took the legal limits away by declaring them unconstitutional. The supply of sex and violence in the popular culture increased, and as it did the demand increased as well. The bounds of what is acceptable in public entertainment shifted dramatically. Today, popular norms have changed; new norms have replaced older ones, and the change has accelerated a variety of social pathologies. But one major reason for the change is that the law stopped reinforcing the older norms: it sent the message that things previously thought unacceptable might not be so bad after all. This kind of signal-sending is just as interventionist as the older regime of legally reinforced censorship. The problem is that it is intervention of the wrong kind.
As we have seen even in the past two generations, the right kind of intervention can accomplish a great deal. Consider race discrimination––one of the few areas of the legal system where moral talk is still common––and the civil rights legislation and court decisions of the 1950s and 1960s. Changes in public attitudes about the rights and wrongs of racism made legal change possible. And legal change reinforced and strengthened the change in public attitudes. Public attitudes and the law formed, for a time, not a vicious circle but a virtuous one. That is exactly how legal norm reinforcement works.
Note that the circle does not depend upon detailed legal fine-tuning. The great virtue of civil rights law before about 1970 was the simplicity of its message: Blacks and whites are human beings with equal moral status; they should be treated accordingly. That message was very powerful, and it helped produce powerful changes in the way a generation of Americans saw each other. Contemporary civil rights law, on the other hand, is mostly obsessed with various forms of affirmative action––the legal fine-tuning of race relations. The law’s message is no longer simple, and (not coincidentally) no longer powerful. This has stalled the progress away from bigotry. None of which is to say that affirmative action is a terrible thing. But it is an ambiguous thing, an exercise in regulatory detail. Make it the issue, and suddenly the law of race relations no longer has the moral force it once did.
This point captures the benefit of the classical common law’s humility. On the one hand, restricting legal liability left many people legally uncompensated and many wrongs unredressed. On the other hand, restricting liability maximized the law’s authority––if only wrongdoers are liable, it really means something to be held liable. The recent history of the law of race relations tells the same story. The more the law stuck to the simple moral command––it’s wrong to think someone less human because of his skin color––the more powerful and successful it was. The more it got mired in regulatory detail, trying to make the world perfect, the more its power was sapped. Taking moral stands need not mean telling everyone what to do. Indeed, as soon as the law starts telling everyone what to do, the moral stand tends to disappear.
But isn’t legal intervention in moral debates intolerant? The short answer is yes, in the same way the Civil Rights Act of 1964 is intolerant. We are much less tolerant of racism than we were forty years ago, and a good thing too. But there is another sense in which using the law to reinforce morals is actually more tolerant than the legal regulation we see everywhere today. The kind of law that applies to municipal pools and police search and seizure, the kind that legally compels affirmative action, is anything but tolerant. In these areas, the law tells you what to do, and you’d better do it, or else. Legal moralism can afford to be more forgiving, because the existence of the line matters more than its precise location or the penalty for crossing it. Moralism need not mean imprisonment for sin. And, critically, tolerance need not mean neutrality. Indeed, the very word “tolerate” connotes disapproval, a decision to let something pass even though one does not like it.
There is a lesson here for the law. We can take stands as a society, express preferences, reinforce norms, and yet not throw people in jail for disagreeing. This is the path that constitutional law, with its laissez-faire, libertarian cast, has mostly ruled out. It is the path of heavier civil regulation of abortion, of marriage and adoption laws (and perhaps tax laws) that favor traditional families, of some public acknowledgment of majoritarian religious preferences––all without prison sentences for dissidents. Colorado’s Amendment 2 is a good example. It did not criminalize anything. Its victims were not barred from working or educating themselves, or from owning property, or from voting, or even from having sex as they wished. They remained free people, and rightly so––a point that cannot be emphasized too strongly. But the law still took a stand, albeit a very modest one. It drew a line.
That is probably the path we ought to follow in a society like this one, where people disagree so much about morals. It means finding solutions that seem unprincipled but are in fact ways of honoring two different principles at once. The first is the idea that law should strive for justice, and that justice embodies a sense of right and wrong. The second is the idea that people who disagree about fundamentals need to find a way to live together in peace. Today, we think that following the second principle means ignoring the first. But that is backward––ignoring law’s moral content means enshrining amorality, and legal amorality is not a recipe for social peace. A quarter century of Roe v. Wade has not made us a more harmonious society. The true relationship is the other way around: Having forgotten what justice is about, we naturally find it ever harder to live together. Good houses do not stand on bad foundations.
Our legal system suffers from a pair of diseases, two seemingly opposite blindnesses. It thinks too well of its regulatory capacity, and too poorly of its capacity for moral leadership. It takes too exalted a view of the ability of smart people to order the world to their liking, and too dim a view of the extent and importance of sin.
By now, it should be clear enough that these are not opposite blindnesses at all. We are by nature proud––no group more so than smart people in positions of power (not a bad description of most judges), who are encouraged to think that they can reorder the world and make everything just so. We are also endlessly self-justifying, gifted at rationalizing and eager to find ways to do what we want even when the law written on our hearts tells us we are doing wrong. We need norms, standards external to ourselves to which we can hold ourselves, and to which we can hold each other. Law cannot create those standards out of whole cloth––that would be the worst sort of pride. But it does not need to. It need only reinforce the ones that are already there, giving a nudge to the rest of us to work them out in our daily living.
Our law needs a hefty dose of both humility, the ability to see one’s limits, and moral courage, the ability to stand up for what is right. The secular world tends to see these virtues as opposites, but in truth they travel together. The legal system today––I have been discussing the courts, but the point applies to the whole system, not just the judiciary––suffers from a parallel pair of vices, a pair that likewise travel together. The system is too proud, too sure of its own ability. And when the issue is standing up for what’s right, the law takes a powder; it turns timid. No wonder it does not attract much respect. Braggarts and cowards rarely do.
William J. Stuntz, currently a Visiting Professor at Yale Law School, is Class of 1962 Professor and Horace W. Goldsmith Research Professor, University of Virginia School of Law.