In an ideal world, only the best science, pure and undefiled, would flow directly into the policies made to protect human health and the environment. But that wish isn't realistic. The best assurance of good public policy seems to lie not only in scientific knowledge per se but in open debate, caution, and a regulatory system capable of self-correction.
Environmental policy is always based on science—up to a point. But defining that point is often a matter of fierce dispute and political combat. Then the quality of the science involved becomes an issue.
Decisions are easiest when threats and benefits are immediately visible to the naked eye. No one questioned, for instance, the proposition that burning soft coal in fireplaces and furnaces meant smoky skies over St. Louis. When people got sufficiently tired of the smoke, as they finally did in 1937, this source of home heating was outlawed with no argument over causation. But much of the modern environmental protection movement has been a response to menaces that are invisible, indirect, and detectable only through advanced technology. The effect has been to draw subtle and complex scientific issues into the arenas of politics.
The debates burn hottest where scientific uncertainty is the greatest and economic stakes are the highest. Scientific uncertainty comes in many forms.
About-Face on Thresholds
When science changes, environmental regulation has great difficulty adapting. One dramatic example is the issue of carcinogens' thresholds—whether there are doses below which carcinogens have no adverse effect on health. On that one, the consensus among scientists has reversed twice in less than fifty years.
Until the 1950s, it was a settled principle of toxicology that every poison had a threshold below which the dose was too slight to do harm. But with rising anxiety about the environmental causes of cancer, especially in the context of the debates about nuclear radiation and weapons testing, it began to seem more prudent to assume that carcinogens generally had no thresholds. One result was the famous Delaney Clause that Congress wrote into the 1958 amendments to the federal Food, Drug, and Cosmetic Act.
The Delaney Clause banned all carcinogens from any processed food. At the time Congress, like the experts advising it, was under the impression that carcinogens were few and readily identifiable. But over time research found more and more substances that, if fed to rats in sufficiently massive amounts, could cause cancer. Some were naturally present in common foods—including orange juice. At the same time the increasing sophistication of measuring techniques identified traces of widely used pesticides and fungicides in many foods.
The regulatory system generally responded to these unwelcome findings by ignoring them. But at the same time the science was changing. Improved understanding of the processes by which cancers originate and develop made it seem increasingly likely that thresholds exist after all. The regulators themselves became convinced of that, although the Delaney Clause remained the law. The Food and Drug Administration quietly whittled away at the clause until the courts told it to go no further.
There's a high cost to society when government must enforce laws that make no sense to the people charged with enforcing them. It engenders cynicism among the regulators, and among the public it erodes confidence in both the law and its enforcement. But while Congress increasingly understood that the law was unenforceable, it refused to consider any reform that might be attacked as lowering the standard of health protection.
Lawsuits Force the Issue
A lesson for science policy lies in the way this paralysis was ended. It wasn't the advance of science that did it, although the science was certainly advancing. Instead, as often happens in environmental affairs, the issue was forced by litigation—in this case, litigation brought by people who wanted the Delaney Clause enforced more literally. In 1992 a federal appellate court decision raised the prospect that the Environmental Protection Agency would be required to ban many widely used pesticides, with drastic implications for farmers' crops and retail food prices. That got the attention of Congress, and last year it replaced Delaney's flat ban with a more realistic standard of "reasonable certainty" of no harm. According to its authors, the phrase was intended to mean a lifetime risk of cancer of no more than one in a million. With this change, the law is now back in conformity with scientific opinion and the regulators' actual practice.
Opinion Masked As Science
If one were to draw up a list of the circumstances that generate strife over the application of science to policy, disputes among scientists would rank near the top, along with changing science itself. To many laymen, certainty and precision are the essence of science: as they understand it, a scientific question can have only one right answer. But especially in matters of public health, it is often essential to make policy decisions long before the science is entirely clear. When people's lives and welfare are at stake, it is not possible to wait until every technical doubt has been resolved.
The situation is frequently aggravated by scientists who underestimate the uncertainties in their own work, leading them to blur the line between science and policy. Endless examples have turned up in the congressional hearings this year on the EPA's proposals to revise the air quality standards for ozone and particulate matter. The EPA's Clean Air Scientific Advisory Committee (CASAC) set up a special panel of experts on ozone, and the panel came to general agreement that, within the range of standards under discussion, there was no "bright line" to distinguish any of them as being "significantly more protective of public health" than the others. Setting the standard, they said, was purely a policy choice. But the law specifically authorizes CASAC panels to offer policy advice, and more than half of the panel went on to offer EPA their various and conflicting personal opinions as to where the standard should be set. CASAC is deliberately organized to represent a wide range of views and interests.
The policymakers, most of them trained as lawyers, seized on whichever of these personal opinions agreed with their own and cited them as the voice of science itself. In congressional hearing after hearing, EPA's Administrator, Carol Browner, defended her proposed standards as merely reflecting "the science." Her adversaries then quoted back to her the opinions of scientists who disagreed, some of them members of CASAC and others officials of the Clinton administration.
A more productive way to approach policy choices is to acknowledge uncertainty and take it explicitly into account. Do you go on a picnic if the weather report forecasts a 60 percent chance of rain? Do you commit society to a complex new air quality regulation if there's a 40 percent chance that it will not provide the health benefits intended? Attempting to quantify risk is an important step in making policy decisions. Unfortunately, doing so runs counter to the current style of politics, in which it is safer to minimize responsibility and discretion by suggesting that decisions are determined solely by the science.
But which science? Toxicology looks for the mechanisms of damage to health at the molecular level, in terms that can be demonstrated in the laboratory, and tends to dismiss anything less specific as mere speculation. Epidemiology, on the other hand, sees reality in the statistical associations between the presence of a pollutant and the evidence of damage. As Mark Powell, a fellow in RFF's Center for Risk Management, has pointed out in his discussion paper on EPA's use of science in setting ozone policy, the tension within the agency between the toxicologists and the epidemiologists is as old as EPA itself. On clean air, CASAC is similarly divided.
In the current round of the debate over clean air rules, the policymakers who support tighter standards cite the epidemiologists. Those who resist tighter standards cite the toxicologists. At present the difference between the two specialties' positions on particulate matter is substantial, and there is no one view that represents settled and accepted scientific truth.
Science As Proxy for Other Issues
In the vehement debates over science, scientific uncertainty often becomes the proxy for other issues—in the case of the Clean Air Act, for the forbidden subject of economic costs. The act prohibits EPA from taking costs into account in setting standards. Opponents of proposed regulations, unable to pursue their argument that the costs will outweigh any prospective benefits to health, go after the scientific basis of the regulations instead.
Confusion also arises when science asks the wrong question—sometimes because the law requires it. Here again the Clean Air Act provides examples. To take a prominent one, the act wants science to tell the regulators what effects each of six common pollutants has on human health. Since the pollutants are regulated separately, the health effects have to be studied separately. Scientists have been trying to tell the regulators for some years that it would be far more useful to investigate these pollutants mixed together, in the "soup" that people actually breathe, because the presence of one compound can affect the impact of another. But Congress has never responded to that advice because the concept of mixtures doesn't fit easily into the existing statutory framework for regulation. When environmental reality collides with statutory tradition, it's not always the statute that gives way.
Sometimes the Wrong Battle
Science, or what seems to be science, can sometimes be flatly wrong. The process of scientific inquiry is self-correcting over time. That is its greatest strength. But policy doesn't always wait for the corrections.
The Superfund program originated, notoriously, in response to mistaken and exaggerated scientific judgment. The Love Canal, in Niagara Falls, New York, had been well known locally as a toxic chemical dump that was leaking insecticide into the Niagara River. But it suddenly became a national news story, and a symbol of a new range of hidden environmental dangers, when in the summer of 1978 the state's health commissioner declared it a threat to the health of people living there. It was an election year in New York, and suddenly politicians at all levels, including President Carter, were competing to show concern and protect the residents. The following year a scientific consultant to the local homeowners' association reported findings that indicated a wide range of threats to health. Then another consultant, engaged by EPA, reported evidence of high rates of chromosome damage among residents. Those claims established the atmosphere in which Congress began to draft the Superfund legislation.
Subsequently, review panels within EPA severely criticized the contractor's chromosome report, and a special committee of scientists set up by the governor of New York dismissed all of the health findings as inconclusive. But by the time that happened, the Superfund bill was approaching final passage. It is not entirely coincidental that, of all the major federal environmental laws, Superfund has produced the fewest benefits to health and welfare in relation to the costs it has incurred and the litigation it has generated.
It would be pleasant to think that some mechanism might be invented to allow the best science to flow, pure and undefiled, directly into policy. But that's hardly realistic amidst the turbulence of rapidly developing science, and especially in a field that, like environmental and health protection, has emerged as one of the leading battlegrounds of national politics. The best assurance of good public policy seems to lie in open debate, caution, and a regulatory system capable of self-correction.
Research Needs Funding
One point on which improvement is both possible and badly needed is the funding of scientific research relevant to regulatory decisions. Private and public spending in this country to meet the federal requirements for pollution control and abatement is in the range of $140 billion a year. Congress gives EPA less than half of one percent of that amount to spend on all its scientific and technological work for all purposes, a sadly disproportionate effort to ensure that environmental rules have the best possible scientific base.
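To put that disproportion in rough dollar terms, using only the figures cited above (the exact appropriation is not stated here), the implied ceiling on EPA's science and technology spending is

\[ 0.5\% \times \$140\ \text{billion} \approx \$0.7\ \text{billion per year,} \]

that is, something under $700 million a year of science funding behind rules that drive roughly $140 billion a year in compliance costs.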
It's not only the general pressure to cut the budget that inhibits adequate spending on science to support environmental regulation. Concerns about global warming have drawn substantial federal science money toward other purposes and to agencies other than EPA. Currently, the EPA science budget is only about 10 percent of total federal spending on environmental scientific research and development.
The purpose of balancing the budget is to enhance the economy's efficiency and promote future growth. But budget cuts won't help the economy if they lead to the waste of resources on misguided policy.
James D. Wilson is a senior fellow and resident consultant in RFF's Center for Risk Management. J.W. Anderson is a former member of the Washington Post's editorial page staff and RFF's current journalist in residence.
This article benefited from the comments of Mark R. Powell, a fellow in the Center for Risk Management, who is completing a book on the EPA's use of science. On that subject he has published eight case studies as RFF discussion papers. See page 17 to order copies.