
One of the many claims of today’s polling industry is that it is the source of credible evidence about religion. It tells us week by week how many Americans regularly attend religious services, whether religion’s strength in our lives is holding its own or declining, and how often we pray. We learn how various denominations are faring, whether some parts of the country are more religious than others, and which kinds of religious beliefs are most appealing, and to whom. So pervasive has this information become, in fact, that for many people it defines what American religion is.

But while polling has climbed in visibility and authority, drawing headlines and shaping the very public opinion it purports to reveal, a contrary trend has emerged. Public confidence in the polling industry has dropped dramatically over the past decade. People have grown tired of its ceaseless rollouts, especially during election season. Fewer and fewer of us answer the phone when pollsters call. Response rates have fallen so low that it is impossible to know what exactly polls represent. Even in predicting elections, polls have been missing the mark so badly that poll watchers voice growing suspicion about erroneous methods and potential biases.

We should question, too, the religious findings pollsters have announced so confidently. That cannot be done, however, in the same way that political polls are questioned. Because elections actually happen, those polls can be criticized in terms of whether they made correct predictions. Different questions and new ways of weighting the data can be used next time to make better predictions. Religion does not provide the same benchmarks for recalibration.

Religion poses a more fundamental question anyway. Is polling even a good way to think about it? To be sure, religious organizations collected statistics long before polling was invented. Denominations kept track of membership rates and baptisms and conversions. Bible societies canvassed communities to see how many families owned Bibles. Never, however, was religion viewed as an “opinion” that could be tapped with simple “yes” or “no” questions. But that is how religious pollsters have proceeded from the very beginning.

The first nationally representative poll about religion occurred in 1935 when George Gallup began coordinating public opinion polls from rented third-floor space across the street from Princeton University. Gallup was interested in differences between Catholics and Protestants on political preferences. Within a few years he added questions about church attendance and the Bible. By 1944 his weekly newspaper columns included updates about how many Americans believed in God (most did) and the proportion of all Americans who had attended worship services in the past week (about 40 percent).

Gallup did his best to persuade the public that polls were meaningful. A devout Episcopalian, he rarely spoke about his faith in public but was keenly interested in the public’s responses to his questions about religion. Clergy could benefit by heeding the polls’ results, he believed. Compared to polls on other topics, though, the ones dealing with religion attracted relatively little attention. Religion was popularly understood to be local, embedded in congregations, not an aspect of national opinion.

After World War II, commercial polling and academic surveys provided more detailed descriptions of the changing contours of American religion. In 1952, the Ben Gaffin and Associates polling firm conducted a major national study of religious beliefs and practices for Catholic Digest. Harvard-trained Jesuit social scientist Joseph H. Fichter simultaneously produced the first of several projected volumes of research on the beliefs and practices of laypeople and clergy in the New Orleans diocese. Will Herberg’s Protestant, Catholic, Jew (1955) drew extensively from Gallup polls. To Cold War America, it was reassuring to think that America was a God-fearing nation.

Whatever the fortunes of belief in God, faith in polls was on the rise. “Modern science has been striving to discover as much as possible about what makes people behave the way they do,” a lengthy 1952 Los Angeles Times article observed. “Leading psychologists and sociologists have realized that to do this they must discover what people believe. And to this end, scientists have pooled their efforts to determine the religious attitudes and beliefs of men and women of every age and walk of life.” Nine out of ten Americans definitely believed in the existence of God, polls showed, while only one in a hundred could be classified as an atheist. Belief in God was higher in the U.S. than anywhere in Western Europe. The God Americans believed in and prayed to, moreover, was a personal God. And, while many Americans did not attend religious services regularly, those who did tended to be better educated and to have the highest earning capacity. Here at least was a “concrete picture of what America believes.”

Others were less eager to embrace pollsters’ depiction of America’s beliefs. Theologians doubted that something as complex and deeply personal as religion could be tapped in polls. Editorials in religious magazines questioned the methods, accuracy, and value of standard surveys. In 1952, Christian Century described the results from Gallup’s “omniscient tribe” as “less than an infallible guide to the mind of the public,” asking, as it did, for snap judgments with little substance behind them.

Polls on other topics prompted critical scrutiny as well. Legislators called Gallup to Washington in 1944 to answer criticisms about the role of polls in possibly influencing that year’s presidential election. Four years later, Gallup’s erroneous prediction of Dewey’s win over Truman evoked further concerns. Academic journals debated the meanings of “public opinion” and considered how polls were shaping as well as measuring attitudes. Polling was scientific only to the extent that it involved random sampling. It paid little attention to such hallmarks of science as transparency, replication, peer review, the use of mixed methods, hypothesis testing, and theory.

As polling gradually gained popularity, it became increasingly lucrative, prompting significant expansion of a for-profit commercial industry as well as interest at universities.

By the early 1950s, polls that occasionally included a few questions about religion were being conducted by Louis Harris, Elmo Roper, and Archibald Crossley, among others, and by the University of Chicago’s National Opinion Research Center and the University of Michigan’s Institute for Social Research. Steadily, coverage expanded as leading firms employed hundreds of full-time staff to conduct in-person interviews in selected locations across the nation. Then in the 1990s they shifted to cheaper telephone interviews based on random sampling. Gradually it became possible to know how religion was faring from week to week instead of only from year to year.

Polling is now a billion-dollar-a-year industry. Currently there are more than 1,200 polling firms in the United States. During the 2012 presidential election, these firms conducted more than 37,000 polls. In total, these polls involved more than three billion phone calls. A typical phone in a typical household was machine-dialed twenty to thirty times. And a majority of the calls were made at dinnertime when pollsters hoped a few of those called would answer.

When Jimmy Carter won his party’s nomination for the 1976 presidential election, journalists scrambled to understand what it could possibly mean to be a born-again Evangelical Christian. They wondered if Carter might succeed in mobilizing a large constituency of like-minded believers. The best answer came from George Gallup Jr., who joined the family firm shortly after graduating as a religion major from Princeton University in 1953. The younger Gallup, like his father, was an Episcopalian, but he thought of himself as a born-again Christian. He had already produced reports on various aspects of religion for several years. Previous estimates of the number of Evangelicals, derived by tallying membership figures for Southern Baptists, Missouri Synod Lutherans, and various Pentecostal denominations, totaled 20 to 25 million and suggested that Evangelicals were divided along denominational and theological lines. But Gallup put the number closer to 50 million and described Evangelicals as a homogeneous category. The number came from asking pollees directly if they were born-again Evangelicals. Evangelicalism, he told reporters at a conference in Minneapolis six weeks before the election, was the “‘hot’ movement” in American Christianity. And clearly it was if Evangelicals (so defined by Gallup) shaped the results on Election Day.

Religious leaders’ earlier skepticism toward polls was now notably absent. Instead of questioning the validity of polls, critics called for more polling. And Gallup responded. With Catholic sister Dr. Miriam Murphy, he founded the Princeton Religion Research Center in 1977 and began publishing books, reports, and a monthly newsletter about religion called Emerging Trends. Over the next five years, Gallup’s center conducted major studies on Catholic teachings about divorce and birth control, views of the Latin Mass and miracles, speaking in tongues, alienation from religion, why people quit going to church, and religious television. Funding came from the Catholic Press Association, the National Council of Churches, McCall’s magazine, and the Southern Baptist Convention.

In 1984, Los Angeles Times religion correspondent John Dart declared—with strong misgivings—that the most listened-to figure in American religion was no longer the pope or Billy Graham or any leading theologian or religious leader, but pollster George Gallup Jr.

Later that same year, a twenty-nine-year-old graduate of Dallas Baptist University named George Barna founded the Barna Research Group to conduct polls and to provide marketing reports on a range of topics including religion. Barna was raised Catholic, but like Gallup he was by then a born-again Evangelical Protestant.

Barna focused on topics of practical concern to church leaders. A prolific author with an ability to capture the practical significance of poll results, he wrote books with titles such as Vital Signs, Marketing the Church, and The Frog in the Kettle that sold widely to clergy and lay leaders. Like Gallup, Barna also published newsletters and produced annual reports about American religion. To critics, Barna’s methods frequently produced questionable results, but to his fans he was trustworthy because he was known as “the Christian pollster.”

With Jerry Falwell and Pat Robertson mobilizing conservative Protestants and with candidates for public office courting religious leaders, pollsters increasingly became pundits offering interpretations of religion’s public role. By 1988, Gallup was making guest appearances on cable religious television programs and at national religious broadcasting meetings. Pollsters Louis Harris and Daniel Yankelovich were regular commentators on evening news programs covering politics and religion. CBS teamed up with the New York Times to conduct polls, ABC formalized an arrangement with Harris Surveys, and CNN started conducting its own polls.

Television’s involvement increased the frequency with which poll results were included in the regular news cycle. But it also shifted how the results were discussed. Sound bites replaced the lengthier interpretations typical of print journalism. That was especially significant for polls about religion. Instead of being included as one of many sources in articles by distinguished religion correspondents, poll results were now presented as stand-alone facts.

In 1996, the Pew Research Center for the People and the Press was established under the direction of former Gallup president Andrew Kohut and with funding from the Pew Charitable Trusts in Philadelphia. The trusts’ interest in religion stemmed from their founder, J. Howard Pew, whose life as an entrepreneur and president of Sun Oil had included extensive involvement in theologically conservative activities within and beyond the Presbyterian Church. By the late 1980s, the foundation supported an expanding array of projects focusing on Evangelicalism, including efforts to promote Evangelical scholarship and research on the movement, which it funded through grants to religious organizations and universities.

In 2000, the trusts underwent a restructuring that included the formation of the Pew Forum on Religion and Public Life in Washington, D.C. The combination of its expertise in polling and its funding of forums in the nation’s capital transformed the Pew Research Center into one of the leading sources of polling information about religion, alongside Gallup and Barna. Mentions of Pew polling about religion in the nation’s newspapers and periodicals increased from approximately 1,800 in the 1990s to more than 11,000 between 2005 and 2012.

Since its inception less than a century ago, polling about religion, then, has in many ways been a remarkable success. It has provided a wealth of information about patterns and trends in beliefs and affiliations. Over the years, questions about religion have been included in polls more often and by a larger number of polling organizations. Through organizations such as Pew, funding has increased even as the cost of conducting polls has declined. Newspapers, television, and the Internet disseminate the results. And prominent pollsters speak authoritatively about the shape and trajectory of American religion.

But in other ways polling’s success has come at a price. Polling has taught us to think about religion in certain ways that happen to be convenient for conducting polls. The questions tap a few aspects of belief and behavior that can be tracked as trends and rarely provide opportunities to hear what people actually think. Polling’s credibility depends on a narrow definition of science and an equally limited understanding of the errors to which its results are subject. Its legitimacy hinges mostly on predicting elections and making news. With few exceptions, polling about religion is an industry based on the use of the single method of asking questions in a survey, not on multiple methods or extensive knowledge about religion itself. Above all, it depends on a public that is willing to believe that polls are sufficiently valuable to spend the time it takes to answer questions when pollsters call.

These inherent features of polling are now making it harder to trust what pollsters tell us about religion—or if we do trust it, to know what it means. Although polling now includes more questions about religion, its focus is still on politics, and those political considerations influence what questions are asked about religion. As consumers, we are led to believe that Pew and Gallup and a few others know what is happening in American religion down to the tenth of a percentage point, and we should therefore believe what they tell us about major trends. But polling’s dependence on sample surveys that have become increasingly problematic leaves a growing share of the public skeptical.

When pollsters asked respondents in the 1980s what they thought of polls, nearly 80 percent expressed confidence that polling was beneficial, that pollsters’ methods were basically sound, and that people responding to polls were truthful. But over the next two decades those proportions plummeted. In a 2006 study only 34 percent thought pollsters could be trusted (about the same as the result for Congress).

As public confidence in polls has tanked, so has the public’s willingness to participate in polls. Response rates have declined precipitously. In the 1980s, the typical response rate was in the 65- to 75-percent range—the range to which serious academic and government surveys aspire and which they frequently achieve. By the late 1990s, response rates had fallen to 30 to 35 percent. Currently the typical response rate is 9 or 10 percent, and rates rarely exceed 15 percent. In other words, upwards of 90 percent of the people who should have been included in a poll for it to be nationally representative are missing. They were either unreachable or refused to participate.

The decline in response rates is particularly troubling. Unsuspecting consumers of poll results are unlikely to wade through obscure methodological appendices to learn if the response rate was respectable or not. To the extent that possible errors are considered at all, they are likely to be associated with the plus or minus 3.2 percent margin of error from probability sampling, not with the more considerable but undisclosed error from embarrassingly low response rates. As prominent pollster Burns Roper cautioned at a 1996 meeting of fellow pollsters, “It’s not true [that] you can tell within 3.2 percent what the American public thinks. Actually you can tell within 3.2 percent what the cooperating American public says and that’s different [from] the public [that] doesn’t cooperate. You don’t know much about them.”
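Roper’s distinction is easy to check with a little arithmetic. The sketch below (in Python, using an illustrative sample of about 940 respondents, the size that produces the familiar plus or minus 3.2 points) shows that the published margin of error covers only random sampling among those who cooperate; once the response rate falls to 9 percent, the people who never answered could in principle move the true figure almost anywhere.

```python
import math

def sampling_moe(n, p=0.5, z=1.96):
    """Margin of error from probability sampling alone (95 percent level)."""
    return z * math.sqrt(p * (1 - p) / n)

# About 940 cooperating respondents give the familiar +/-3.2 points.
n = 940
print(f"sampling error: +/-{sampling_moe(n):.1%}")   # ~ +/-3.2%

# But that interval describes only the people who answered. With a 9%
# response rate, the non-respondents could in the worst case hold any
# view at all, so the true population share is only loosely bounded.
response_rate = 0.09          # illustrative; typical of current polls
observed = 0.40               # e.g., 40% of respondents report attending

lower = observed * response_rate                        # all missing say "no"
upper = observed * response_rate + (1 - response_rate)  # all missing say "yes"
print(f"worst-case range for the full public: {lower:.0%} to {upper:.0%}")
```

The worst-case bounds are deliberately extreme, but they make Roper’s point: the part of the error that the fine print discloses is the small part.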

The polling community has tried to reassure an increasingly skeptical public that polling results can be trusted. Comparing typical polls to those with high response rates, though, suggests that all is not well. People who respond in low-response polls are more likely than non-respondents to volunteer for other things as well. For example, 55 percent of respondents in a Pew poll had volunteered for an organization in the past year, compared to only 27 percent in a higher-quality government survey. And volunteers differ dramatically from non-volunteers on religion questions: They are significantly more likely to attend religious services, to be Evangelical Protestants, and to regard themselves as personally religious, among other things. “Just what other attributes might differentiate survey respondents from the average American?” one columnist asked. “On most issues that Pew studies, we may never know, because there aren’t government benchmarks.”

An absence of benchmarks notwithstanding, the polling industry does little to reassure readers that its own comparisons are meaningful. Headlines about the “collapse” of American Christianity may be arresting, but when one poll achieved a 25 percent response rate and the next one a 10 percent response rate, it would be useful to know how that change may have affected the results.

And if declining response rates leave unanswered questions, what we do know from polls—or think we know—needs to be regarded with much greater caution than is typically the case in journalistic coverage. Consider something as simple and seemingly straightforward as attendance at religious services. Polls tell us that approximately 90 million American adults attend religious services every week—far more than participate in anything else—with some of that number attending several times a week. Others attend several times a month, which means that in any given week, upwards of 100 million people are at their various places of worship. And if churchgoing is fairly common among Americans today, polls suggest that it was fifty years ago as well. In fact, Gallup began asking about church attendance in 1939 and, according to the Gallup Poll website, “the average percentage who said ‘yes’ [when asked if they had attended in the past seven days] in 1939 was 41 percent, virtually the same as recorded most recently.” Hence, as the title of Gallup president Frank Newport’s book God Is Alive and Well: The Future of Religion in America suggests, religion is a pervasive and relatively stable feature of American life.

But judging from academic surveys that still have high response rates and ask good questions, the polling estimate of 90 to 100 million weekly churchgoers is off by about 30 million. If political polling were off by that much, it would be scandalous.
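The arithmetic behind these headcounts is simple enough to reproduce. In the minimal sketch below, the adult-population figure of roughly 245 million is an assumption supplied for illustration; the 40 percent share and the 30-million gap come from the discussion above.

```python
# Back-of-the-envelope check on the attendance estimates discussed above.
ADULTS = 245_000_000      # assumed U.S. adult population (illustrative)
poll_share = 0.40         # share telling pollsters they attended last week

implied = ADULTS * poll_share
print(f"implied weekly attenders: {implied / 1e6:.0f} million")   # ~98 million

# If academic surveys put the true figure about 30 million lower...
gap = 30_000_000
print(f"adjusted estimate: {(implied - gap) / 1e6:.0f} million")  # ~68 million

# ...then the reported share overstates the true rate by ~12 points.
print(f"overstatement: ~{gap / ADULTS * 100:.0f} percentage points")
```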

When asked about erroneous estimates of religious attendance, pollsters blame respondents for apparently wanting to appear more religious than they are. But if pollsters can ask better questions about voting, why can’t they do the same about religious attendance? Studies show that one or two additional questions could mostly remedy the issue.

Pollsters’ penchant for snappy headlines adds to the problem. Polling’s stock in trade is identifying trends, which makes it interesting to think that the rate of churchgoing in 1939 was about the same as it is today. What consumers of this information don’t know is that Gallup polls in those early years systematically undercounted women and African Americans—and that was when they were systematic at all. As one interviewer in those years confessed, “I personally do the greater part of my work in the parks and along the streets, the shoppers’ lounges, small shops and stores, suburban railroad stations, and busy corners.”

When polling estimates are off by tens of millions for the nation writ large, the fine-grained results pollsters report become harder to believe as well. Weighting the responses from different regions of the country and among different age groups may help in predicting elections, but it can create additional problems if weights do not adequately reflect variations in religious behavior. Regional comparisons, for example, usually suggest that Southerners are more religiously active than Midwesterners, but some polls lead to the conclusion that those differences have disappeared. And increasing the number of respondents does not solve the problem unless higher response rates are also achieved. Reporting that the population of one state is more religiously active than the population of another state, for example, is meaningless unless the bias from low response rates in the two states is known to be equal.
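The limits of weighting can be made concrete with a toy example. In the sketch below, every share and rate is invented for illustration: respondents are weighted so that each region matches its census share, which is the standard correction, yet the overall estimate remains biased because within each region the people who answer are more religious than the people who do not, and the regional comparison is distorted in the bargain.

```python
# Toy post-stratification: weight respondents so regions match census shares.
# Every number below is invented for illustration.
census_share = {"South": 0.38, "Midwest": 0.21}       # population targets
attend_among_respondents = {"South": 0.48, "Midwest": 0.44}
attend_in_population     = {"South": 0.38, "Midwest": 0.26}  # what a
                                         # high-response survey might find

total = sum(census_share.values())

# Weighting restores the regional mix, but each region's rate still comes
# from the (more religious) people who answered the phone.
weighted = sum(census_share[r] * attend_among_respondents[r]
               for r in census_share) / total
actual   = sum(census_share[r] * attend_in_population[r]
               for r in census_share) / total

print(f"weighted poll estimate: {weighted:.1%}")  # ~46.6%: still too high
print(f"population value:       {actual:.1%}")    # ~33.7%

# The comparison is distorted too: a 12-point South-Midwest gap in the
# population shows up as only a 4-point gap among respondents.
```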

Just as regional comparisons are sometimes misleading, so are comparisons among racial categories. Polling is traditionally biased by white norming, which means that questions are geared toward the majority of white respondents and based on assumptions about that population to a greater extent than on solid information about respondents who do not fit that pattern. In many polls, African Americans appear to be no more or only slightly more religiously active than white Americans. A nationally representative academic study that focused on African Americans, though, found they spent 50 percent more time on average attending religious services than white Americans and were significantly more likely to express religious commitment on other measures such as praying for and helping fellow members of their congregation.

Even the polling result that has received the greatest attention in recent months needs to be considered critically. This is the headline about Americans becoming nonreligious—a story based on the rising proportion (especially among younger respondents) of Americans who have no or no particular religious affiliation. Careful reporting acknowledges that many of these “religious nones” still believe in God and occasionally attend religious services. By focusing on trends, though, most of the reporting misses what may be an even more important development: The responses particular individuals give when pollsters call do not hold steady apart from minor incremental changes associated with aging or marriage or other predictable influences. They fluctuate wildly. Between a third and a half of pollees give different responses a year later even to relatively straightforward questions about religious preferences and attendance. In short, religion may be changing in ways that require rethinking what exactly it is that polling captures.

As it presently exists, then, polling about religion is troubling not because it is always wrong, but because it has become difficult for anyone to know when the results are correct and when they are not. News reports present poll results as if they are accurate and factual representations of what the American public believes and does. A more honest report would explain that 90 percent of those who should have been included in the poll were not included.

The National Research Council recently completed a major study of declining response rates in academic and government surveys. The study demonstrated how difficult it has become to conduct high-quality surveys even when far more time and money are expended than in the typical commercial poll. It raised significant doubts about whether survey results will be trustworthy at all in a few years—doubts that have been deepened by pollsters’ failures to achieve response rates of even 20 percent despite more extensive phone calls and cash incentives.

The American Association for Public Opinion Research organized a transparency initiative in 2010 to encourage better disclosure about methods and biases, but four years later an independent study of 140 polls showed that only three disclosed response rates. It has been twenty years since serious questions were raised about church attendance figures. From time to time, news media include disclaimers. But for the most part, the acid test for polling firms is getting election predictions right, which means adding weights, doing simulations, and carefully figuring out who will vote and who will not. Religion does not have those benchmarks.

Polling about religion purports to tell us the facts by conducting scientifically reputable studies. Even with low response rates and complicated weighting schemes, the studies sometimes generate credible broad-brushed descriptions of general patterns. But polling should not be confused with painstaking research that takes months and years to complete and that relies on historical, ethnographic, and theoretical knowledge as well as numbers for its interpretation.

From the beginning, polling has been in the business of making headlines, and that is pretty much what it continues to do today. The seeming accuracy of results to the tenth of a percentage point doesn’t stand up to basic methodological scrutiny, nor does the content of the questions themselves. If the devil is in the details, the details about religion polls are devilishly difficult to trust.

Robert Wuthnow is the Gerhard R. Andlinger Professor of Sociology at Princeton University and the author of the forthcoming Inventing American Religion: Polls, Surveys, and the Tenuous Quest for a Nation’s Faith.
