In innumerable newspaper and television stories shortly after New Year’s, the reassuring news was passed to the American people: Science––in the form of the New England Journal of Medicine––had spoken on the question of abortion and breast cancer. A supposedly definitive study from Denmark, prepared by Dr. Mads Melbye and his colleagues in Copenhagen, had concluded, “Induced abortions have no overall effect on the risk of breast cancer.”
There was perhaps something disingenuous in the reassurance, for the prior evidence of abortion’s contribution to breast cancer was news that had been assiduously suppressed by much of the mainstream medical and popular press. But now, with the publication of the Danish study, all doubts were put to rest. The debate about abortion is now rightly restricted to “abortion itself––a debate that is ethical and political in its essence,” declared Dr. Patricia Hartge of the National Cancer Institute in an editorial that accompanied the publication of the study in the January New England Journal of Medicine. Women need no longer “worry about the risk of breast cancer when facing the difficult decision of whether to terminate a pregnancy.”
It was unusual for such advice––and such canonization of a study as “definitive”––to come only three months after a comprehensive review of the literature, published in the British Medical Association’s Journal of Epidemiology and Community Health, reached essentially the opposite conclusion. Written by my colleagues at Penn State’s College of Medicine in Hershey and me, the review compiled the data from all twenty-three available studies, dating back to 1957. Even with the most conservative statistical averaging method, we found a significant 30 percent increase in the risk of breast cancer attributable to a woman’s having had one or more induced abortions.
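The averaging step itself is not mysterious. What follows is a minimal sketch of the standard fixed-effects (inverse-variance) method for pooling relative-risk estimates on the log scale; the three study results in it are invented for illustration and are not figures from the review.

```python
import math

# Fixed-effects (inverse-variance) pooling of relative-risk estimates.
# The three study results below are invented for illustration only.
studies = [
    (1.5, 1.0, 2.2),   # (relative risk, lower 95% CI, upper 95% CI)
    (1.2, 0.8, 1.8),
    (1.4, 1.1, 1.8),
]

num = 0.0
den = 0.0
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    # Approximate the standard error of log(RR) from the width of the 95% CI.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    weight = 1.0 / se ** 2
    num += weight * log_rr
    den += weight

pooled_log = num / den
pooled_se = math.sqrt(1.0 / den)
print("pooled relative risk: %.2f" % math.exp(pooled_log))
print("95%% confidence interval: %.2f to %.2f" % (
    math.exp(pooled_log - 1.96 * pooled_se),
    math.exp(pooled_log + 1.96 * pooled_se),
))
```

Inverse-variance weighting simply lets the larger, more precise studies dominate the pooled estimate.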
Having arrived at this finding about what is in the vast majority of cases an elective surgical procedure, we urged the opposite of Dr. Hartge’s reassurance that women need not worry––pointing out the “present need for those in clinical practice to inform their patients fully about what is already known.” There now stands an impressive total of thirty studies worldwide, twenty-four of which show increased breast cancer risk among women who have chosen abortion, seventeen of which are statistically significant on their own. Such an overwhelming preponderance of the evidence is usually more than enough to convict any risk factor in the eyes of the medical establishment, particularly when the connection makes biological sense: women who elect to terminate a normal pregnancy experience a marked overexposure to estrogen, the female hormone implicated in most breast cancer risk factors.
But the problem is, of course, that abortion is a medical procedure whose political and social significance sets it outside normal public health concerns––even, or perhaps especially, for the public health professionals who overwhelmingly support legalized abortion and who have proved willing to set aside their medical scruples whenever legalized abortion appears threatened. The U.S. Department of Health and Human Services has been conspicuously active whenever the link between abortion and breast cancer has received any notice. In November 1994, when Dr. Janet Daling of the Fred Hutchinson Cancer Research Center in Seattle reported a significant, 50 percent increased risk of breast cancer with induced abortion, the Journal of the National Cancer Institute printed the study with an accompanying editorial that impugned Daling’s findings. And from the Washington Post to Elle magazine, the popular press reported explanations and denials from government experts. In the February 1995 issue of Elle, for example, Assistant Surgeon General Susan Blumenthal criticized Daling for failing to take into account the effect of birth control pills, a patent falsehood that Dr. Blumenthal’s office has declined to retract.
In January 1996, convinced by Daling’s study and other work, a pro-life group rented advertising space in rapid-transit stations in Washington, Philadelphia, and other major cities, putting up posters warning about the increased risk of breast cancer that accompanies abortion. Within days after the signs went up, the order came from then-Assistant Secretary of Health, Dr. Philip Lee, to remove them. The case is presently on appeal in the federal courts.
More recently, in December 1996, another study in the Journal of the National Cancer Institute described a 90 percent increase in the risk of breast cancer in Dutch women who had had abortions. But this time even the authors impugned their study, calling their own findings flawed, an artifact likely due to something called “reporting bias.” In an accompanying article, the National Cancer Institute editorialists––taking the opportunity to attack the review my colleagues and I had published two months earlier––developed this theme, blaming any significant link between abortion and breast cancer on “a systematic bias” that “may affect all (or nearly all) studies.”
“Reporting bias” is the sort of statistical artifact that must always be considered a possibility in any epidemiological study that depends on subjects’ recollections. In any study––especially of a subject as sensitive as abortion––inaccuracies may exist in the reports of the women interviewed. If, for example, the women with breast cancer are more honest in reporting abortions than the women without breast cancer, then reporting bias alone will produce an apparent, but false, increase in the risk of breast cancer. Researchers must be vigilant about reporting bias, not only because the exposure in question is particularly sensitive, but also because risk elevations of 30, 50, or even 90 percent are relatively small in epidemiological terms––and even a little bias can go a long way in producing a false result.
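How far a little bias can go is easy to see with a purely hypothetical calculation. In the sketch below, all counts are invented and come from no actual study: abortion is assumed to have no true effect, but the women with breast cancer are assumed to report a past abortion somewhat more completely than the healthy controls, and that difference alone manufactures an apparent risk elevation.

```python
# Hypothetical illustration of reporting (recall) bias; all numbers are invented.
# Suppose 20% of both cases and controls actually had an induced abortion,
# so the true odds ratio is exactly 1.0.
cases, controls = 1000, 1000
true_exposed_cases = 0.20 * cases
true_exposed_controls = 0.20 * controls

# Cases report the abortion 95% of the time, controls only 80% of the time.
reported_exposed_cases = 0.95 * true_exposed_cases        # 190
reported_exposed_controls = 0.80 * true_exposed_controls  # 160

def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table: exposed/unexposed among cases and controls."""
    return (a / b) / (c / d)

observed = odds_ratio(
    reported_exposed_cases, cases - reported_exposed_cases,
    reported_exposed_controls, controls - reported_exposed_controls,
)
print("observed odds ratio: %.2f" % observed)  # about 1.23 despite a true value of 1.0
```

In this toy example, a reporting gap of fifteen percentage points is enough to turn a true odds ratio of 1.0 into an observed elevation of more than 20 percent.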
The trouble is that, despite the certainty at the National Cancer Institute that this bias exists in breast cancer research, the only credible evidence ever produced is against it. Thus, for example, Dr. Daling and her colleagues tested for its presence in their 1994 study by looking at cervical cancer incidence (which is known not to be associated with abortion) and found no apparent risk elevation and no evidence of reporting bias. In defense of the reliability of the same 50 percent risk elevation for breast cancer Daling had found, a 1995 study on women in Greece noted the “widespread social acceptance” of induced abortion in Greece and reviewed other studies on Greek women to support the conclusion “that healthy women in Greece report reliably their history of induced abortion.” In 1989, a study of New York women based on prospective computer-registry data (as opposed to the usual retrospective data based on subject recall, and therefore automatically free of the possibility of reporting bias) found a significant 90 percent breast cancer risk increase with induced abortion.
The notion of abortion reporting bias in connection with breast cancer has its source in a 1989 study and a 1991 paper by a Swedish group that compared two studies on the same population of Swedish women, one based on patient recall and the other on prospective computerized records. Seven Swedish breast cancer patients reported having had abortions for which there was no computer record, and were thus declared to have “overreported” their abortions (i.e., made them up). Relying upon this apparent phenomenon, the authors of the 1996 study of Dutch women ascribed to reporting bias the risk increase among women from a more religious, conservative region of Holland, when compared to women from a more secular, liberal region. One ought to be suspicious of the inexplicable nature of this claimed bias, which on any straightforward reading ought to run the other way––religious women claiming fewer abortions than they really had, not more. But even with this problem set aside, the Dutch study’s dismissal of its own findings of a 90 percent risk increase turns out to be false to its own method, which found no evidence of bias between case and control subjects who had been matched by region. In order to create the claimed bias, the Dutch researchers were compelled to mismatch results with regions until they found their desired evidence of difference in reporting.
The lack of credible evidence for reporting bias notwithstanding, conclusive proof about the abortion-breast cancer link, as nearly all researchers agree, can come only from studies using prospective data like the 1989 New York study that found a 90 percent increased risk of breast cancer attributable to induced abortion. This was the importance of the “definitive” Danish computer-registry study led by Dr. Mads Melbye and published in the January 1997 New England Journal of Medicine. The Melbye study claims to be definitive not only because of the prospective nature of the data, but also because of its size, encompassing all 1.5 million women born in Denmark between 1935 and 1978, over 280,000 of whom have had legal abortions, and over 10,000 of whom have had breast cancer. The study concludes that “Induced abortions have no overall effect on the risk of breast cancer,” having found an overall risk increase associated with abortion of exactly 0 percent.
The study falls apart, however, upon the close scrutiny made possible by the substantial body of published data concerning the same population of Danish women. Although abortions have been legal in Denmark since 1939, the Melbye study used computerized abortion records beginning only with 1973. The authors understate this weakness of the study, acknowledging only that “we might have obtained an incomplete history of induced abortions for some of the oldest women in the cohort.” But a check of pre-1973 abortions shows that they misclassified some 60,000 women who had abortions as not having had any.
Yet even this egregious misclassification is not the most significant flaw in the study. The generally long latency of breast cancer means that the study largely compared younger women (with more abortions and fewer cases of breast cancer) to older women (with more cases of breast cancer and fewer abortions). The authors are aware of this potential source of error. But in correcting for it by adjusting for a “cohort effect,” they made an astonishing blunder. The “cohort effect” is the acknowledged fact that the incidence of breast cancer has been generally rising for most of this century. The problem, however, is that the causes of this rising incidence are unknown, and since the frequency of induced abortion has similarly risen through most of this century, abortion may well be a cause of the cohort effect. And if abortion is indeed a factor in the risk of breast cancer, the cohort adjustment the Melbye study performs necessarily eliminates its effect––making the 0 percent increased risk a virtually guaranteed result.
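A toy calculation shows the direction of the distortion. The sketch below is purely hypothetical and is not a reanalysis of the Danish data: it assumes a fixed true relative risk, a baseline rate that does not change across birth cohorts, and an abortion prevalence that rises cohort by cohort. Once each cohort’s “baseline” already contains abortion’s contribution, comparing exposed women against that cohort-adjusted expectation pulls the measured ratio back toward 1; the attenuation is only partial in this simple version, and it can be far more complete in a full regression adjustment when exposure prevalence tracks birth cohort closely.

```python
# Simplified sketch of how adjusting against cohort rates that already
# contain the exposure's effect attenuates the measured relative risk.
# All numbers are hypothetical; they show the direction of the distortion,
# not the magnitude in the actual Danish analysis.

true_rr = 1.44          # assumed true relative risk for the exposure
baseline_rate = 0.001   # assumed baseline breast cancer rate, constant across cohorts

# Abortion prevalence rising across successive birth cohorts.
cohort_prevalence = [0.05, 0.10, 0.20, 0.30]

for p in cohort_prevalence:
    # The cohort's overall rate already includes the exposure's contribution,
    # so it rises with prevalence -- this is the "cohort effect".
    cohort_rate = baseline_rate * ((1 - p) + p * true_rr)
    exposed_rate = baseline_rate * true_rr
    # Comparing exposed women to the cohort-adjusted expectation
    # pulls the measured ratio back toward 1.
    adjusted_ratio = exposed_rate / cohort_rate
    print("prevalence %.2f: cohort-adjusted ratio %.2f (true %.2f)"
          % (p, adjusted_ratio, true_rr))
```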
And there is plenty of evidence that induced abortion is indeed the missing cohort factor. First, Melbye and his colleagues present enough data to compute the unadjusted relative risk, and this calculation, sketched below, shows a 44 percent risk increase (it is extremely disturbing, from a scientific point of view, that this number did not appear in the paper). Second, a 1988 study of part of the same cohort of Danish women found a 191 percent increased risk among childless women (the only women reported on) who had any induced abortions. Third, a close examination of the legal abortion rate in Denmark since 1939 shows a striking parallel with the rates of breast cancer incidence. The abortion rate peaked in 1975, and the average age at which a Danish woman has an abortion is twenty-seven, which means that the greatest number of abortions were performed on women born around 1948. And the latest age-specific data in Denmark show that the incidence of breast cancer is maximal for women born between 1945 and 1950, and is on the decline for women born more recently. A proper analysis would likely show a significant breast cancer increase in the neighborhood of 100 percent for induced abortion.
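For readers who want to check such arithmetic, a crude (unadjusted) relative risk is simply the ratio of the breast cancer rate among women who had an abortion to the rate among women who did not. The counts in the sketch below are round, invented placeholders, not figures from the Melbye paper; they merely show the calculation, with numbers chosen to land in the neighborhood of the 44 percent figure mentioned above.

```python
# Crude (unadjusted) relative risk from cohort counts.
# The counts below are invented placeholders, not figures from the Melbye study.
def crude_relative_risk(cases_exposed, person_years_exposed,
                        cases_unexposed, person_years_unexposed):
    """Ratio of breast cancer incidence rates, exposed vs. unexposed."""
    rate_exposed = cases_exposed / person_years_exposed
    rate_unexposed = cases_unexposed / person_years_unexposed
    return rate_exposed / rate_unexposed

rr = crude_relative_risk(
    cases_exposed=1800, person_years_exposed=3_000_000,
    cases_unexposed=8200, person_years_unexposed=19_700_000,
)
print("crude relative risk: %.2f" % rr)  # about 1.44 with these invented counts
```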
Abortion is not a controversial subject in Denmark and Dr. Melbye seems a sincere and competent man. But his study reveals the entrenched bias in favor of the view that abortion is harmless to women, a bias that is decades old. One in every six Danish women has had at least one abortion, which means that complicity in abortion decisions is pervasive in the society. How willing can members of such a society be to acknowledge that they have put themselves and those they love at risk of one of the most dreaded, life-threatening diseases that a woman can get? What hope is there in such a society for scientific integrity to overcome a witting or unwitting wall of denial?
Fortunately, abortion is still a controversial subject in America, but denial from high places of its harmfulness to women is hard to miss––even in the partial funding for the Melbye study provided by the U.S. Department of Defense. If we are to maintain scientific integrity in medical research, we must denounce the manipulation of studies to provide socially desired results wherever it appears. The point of maintaining scientific integrity in medicine, of course, is not just to preserve the abstract notion of truth, but to save the lives of both women and their babies.
Joel Brind is Professor of Biology and Endocrinology at Baruch College of the City University of New York and Editor of the new Abortion-Breast Cancer Quarterly Update.