Since 1988, the lights have been blazing at those laboratories of democracy, the states, as comparative risk practitioners like me have tried to make the abstraction of "comparative risk" a useful tool for democratic institutions. Among the mistakes we've made has been staying too quiet about our successes and near-successes. As a result, the national debate about the methods and values of comparative risk analysis (CRA) has often appeared disconnected from the discipline as it is actually practiced today. To whatever degree early CRA at the U.S. Environmental Protection Agency (EPA) ever was, as some critics hold, undemocratic, contrary to the will of the people, antithetical to pollution prevention, or a mere propaganda tool, the states have made it otherwise.
In truth, the extensive comparative risk projects (see the map on page 8) of states, cities, and tribes have made CRA into many things. It is foremost a tool to compare a wide range of environmental problems and reach some understanding of their relative seriousness. Using that information, agencies, legislatures, and individuals can set better priorities for action and investment. Such a tool takes on increasing importance as states assume more and more responsibility for environmental management.
In short, CRA as a discipline is evolving as different levels of government adapt it to suit their needs. These adaptations generally are making the discipline more democratic, more inclusive, more closely tied to locally defined public values, more honest about its own limitations, and, hence, more likely to be productive. This article describes some of these developments, which were identified in a study I made in 1993 with Kenneth Jones and Christopher Paterson on state comparative risk projects while at the Northeast Center for Comparative Risk (NCCR) of the Vermont Law School.
EPA and the evolution of comparative risk projects
Analysts have been comparing activities on the basis of risks—particularly the risk of premature death—for decades, trying for much of that time to persuade the government to use the comparisons as the basis for setting priorities. However, the state comparative risk projects we studied have a common root in a similar project conducted by EPA in 1986. It was an analysis of the relative risks posed by thirty-one environmental problem areas over which EPA had jurisdiction, and the results were published in February 1987 as a multivolume report, Unfinished Business: A Comparative Assessment of Environmental Problems. The report concluded that the problems posing the biggest risks to the nation (such as indoor air pollution and radon, global warming, ozone depletion) were generally not EPA budget priorities. Some weren't even explicitly part of EPA's statutory mandates. Unfinished Business is still talked about because it concluded what most subsequent comparative risk projects have concluded: the problems posing the biggest remaining environmental risks tended to rank low in the public's ranking of risk, as revealed by opinion polls.
Senior EPA managers found Unfinished Business compelling enough to encourage its replication out of town. By 1988, comparative risk projects were under way in three EPA regional offices and in three states: Washington, Colorado, and Pennsylvania. Administrator William Reilly charged the agency's independent panel of experts, the Science Advisory Board (SAB), with peer reviewing Unfinished Business and offering suggestions on how to respond to any issues it might raise. The result in 1990 was a report, Reducing Risk: Setting Priorities and Strategies for Environmental Protection. The SAB acknowledged many of the problems with the comparative risk method and with Unfinished Business itself, but it soundly endorsed CRA as a valuable guide for setting priorities. The SAB encouraged EPA to use the method and concluded that EPA should target its environmental protection efforts based on the opportunities for the greatest risk reduction. Five years later, the National Academy of Public Administration would make a similar recommendation.
Of particular importance to what would become state comparative risk projects was the SAB's emphasis that no amount of science could or should completely replace subjective judgment. Because ranking various types of health effects is inherently subjective, the SAB recommended that lay people be involved in any CRA process. As will be described below, state CRA practitioners were quite receptive to this advice, and they used it to expand the scope of comparative risk work.
Comparative risk projects: processes and priorities
The sponsors of comparative risk projects attempt to answer two fundamental questions: What are the most serious environmental problems here? How can we most effectively address them?
Most state and city officials who initiate a comparative risk project hope that answering these questions will improve their environmental management decisions. They also hope that the process of answering the questions will help them build the political momentum they might need to make changes in policies and priorities. Some officials specifically hope to use the results of the projects as tools to reshape their relationship with EPA; most seem to view the projects primarily as ways to reshape their own agencies and their relationships with their staff and the public. We at NCCR saw the projects as particularly effective ways to bring the public into agency deliberations and decisionmaking.
Because of their breadth, CRAs are crude tools: they have to make sweeping generalizations about pollution levels and exposures, as well as about how people or ecosystems respond to those exposures. In this respect, the projects are like laws or regulations: as the size of the jurisdiction decreases, the fit improves.
The typical comparative risk project follows some basic steps: define and analyze the risks posed by the environmental problems facing the jurisdiction; rank the risks in order of their severity; select priorities for particular attention; set goals for risk reduction; propose, analyze, and compare strategies to achieve those goals; implement the most promising strategies; and monitor results and adjust policies or budgets accordingly. The comparative risk process, then, is part science and part politics: at its best, it puts up-to-date technical information into the hands of both legislators and the interested public in a way that enables better political or personal decisions.
Comparative risk projects have common elements that roughly parallel the project steps: a problem list, analytical criteria, a ranking of the problem list, and an action plan. The problem list is the set of environmental problems to be analyzed and compared. (Drafters usually pick about two dozen problems—say, from sewage and Superfund sites to global climate change—that can lead people to a new and broader perspective on the environment.) A set of analytical criteria defines what the participants consider important to measure or estimate, such as various types of risks to human health, to ecosystems, or to a population's quality of life. Most projects take at least six months to gather data and characterize the problems.
The ranking process follows, and is used to sort out the data, draw conclusions about the relative severity of the listed problems, and, in some projects, select priorities for action. Observers sometimes fail to distinguish a risk ranking from a priority ranking, a serious mistake if they take the rankings—the simple lists—as a sufficient guide for budget decisions. The rankings are effectively the headlines on a more detailed and useful story: they serve primarily to get people's attention and force some questions: How can it be possible that indoor air pollution is a greater health threat than Superfund sites? What does this mean? How do they know? If it's true, what should I do? These questions return people to the data, to the analysis, and to a level of detail that they—or their state agencies—will have to confront if they choose to address the problems seriously and target their actions appropriately.
The action plans thus tend to grow from the information of the first phase of the process, whether as legislation, recommendations for new programs, or adjustments to old programs and budget priorities. The most effective plans have identified priorities by comparing the risk-reduction potential of a number of alternative proposals.
In a political system that is generally more responsive to the public than to experts, priorities tend to follow the public's understanding of problems. EPA's most expensive programs (such as those addressing hazardous waste facilities, abandoned hazardous waste sites, underground storage tanks, and so forth) tend to address problems about which the public is most deeply concerned. In contrast, active participants in most state and local CRA projects have concluded that these problems pose relatively lower risks than problems receiving less regulatory attention.
As noted, EPA's Science Advisory Board concluded in 1990 that it was critical that lay and expert views about risks be brought together in CRAs. Though largely ignored at the national level, practitioners in the states were more receptive to the advice. Washington, Vermont, and many other states demonstrated by their projects the potential for lay citizens and technical experts to work together productively on comparative risk projects.
[Map: The status of comparative risk projects, June 1995.]
How the states are using CRA
Why have so many states and communities invested so much effort in the process? Does the CRA process do any good?
The answer is mixed. In state capitals, just as in the nation's, making any fundamental change in policy or approach is enormously difficult in the absence of a crisis. A fairly stable status quo provides precisely what the public has most vehemently asked for: protection from the risks it most fears or abhors. Whether one believes in participatory democracy or not, public policy ultimately flows from the people, and the only way to change it is with the people's blessing. It should come as no surprise, then, that the most effective comparative risk projects have been the ones that set out specifically to include key representatives of the public in addition to technical experts.
The projects have an impact on their participants, whoever they are. The ordeal of working as a group to rank problems forces group members to clarify their own thinking as they search for points of agreement with their colleagues or sharpen points of disagreement. The ranking process exposes weak arguments, poor data, and fuzzy thinking. The process tends to break down preconceptions about the problems. The process also breaks down individuals' prejudices about the other participants. The result: members of ranking committees have discovered that they agreed on far more than they had expected. They have come to share a strong conviction that their insights are important and should be used to influence public policy. In short, the process has frequently built coalitions for change.
State projects have expanded their problem lists beyond EPA-managed problems in order to answer public questions; so too have they expanded the analytical criteria by which they measure the relative magnitude of the problems. Most of the projects have had separate teams to analyze risks to human health, to ecosystems, and to what is variously called social welfare, the quality of life, or simply societal impacts. Vermont's approach to the "quality of life" question illustrates how states have broadened the analysis from its original technical basis.
The example of Vermont's values
The Vermont Agency of Natural Resources asked participants in its 1989 project to answer the open-ended question, "What environmental problems pose the most serious risks to Vermont and Vermonters?" A public advisory committee (PAC) of volunteers—characteristic of state projects—began its work by asking Vermonters what they thought the most serious problems were and why: What was it about each problem that made it objectionable? The answers came back through eleven public forums, as well as more than 400 responses to a survey designed to elicit Vermonters' values and perceptions.
Vermonters often said that they abhorred problems that threatened their own health and that of future generations and Vermont's ecosystems, that were unfairly imposed on people, or that threatened property values or their ability to relate to their land the way their families had for generations. Through these answers, Vermonters defined risks in terms of Vermont values and gave the PAC a sense of the relative importance of the values. The PAC then consolidated the responses into a set of seven criteria for evaluating the impact of the problems on the state's quality of life: aesthetics, future generations, sense of community, recreation, peace of mind, fairness, and economic well-being. The latter two criteria are illustrative.
Vermont's project was the first to make fairness an explicit consideration. Residents had told the PAC that they cared deeply about the distribution of risks and benefits. From this came a working definition of fairness that captured much of the outrage that people feel about "involuntary" risks. With a little critical thinking, the participants found the fairness criterion remarkably easy to apply, which suggests that other projects might use similar criteria to consider how different problems affect poor or minority communities.
Vermont approached its economic well-being criterion much as EPA had and as the states of Washington and Colorado had in their welfare analyses: economists attempted to capture the costs that each problem was creating in the state. These conventional techniques satisfied neither the staff economists nor the advisory committee. In only a few cases were the economists willing to add up their damage estimates for a problem and present the result as a bottom line: the analysts simply didn't believe that their numbers would convey an accurate picture of reality because so much of the picture couldn't be filled in.
Washington and Colorado, too, had thrown out or played down most of the economic analyses they commissioned for their projects. They lacked confidence in the numbers, and they feared that the numbers would drive all subsequent decisions: the familiarity and apparent simplicity of dollars would make it too easy to compare dissimilar problems and dissimilar risks. The early rhetoric about CRA held that "risk" could serve as a "common metric" for comparing environmental problems. Practitioners discovered that no single metric—not even dollars—would suffice. Seen in this light, Vermont's quality of life analysis was an attempt to organize relevant data on as few different scales as possible, but no fewer.
The benefits of comparative risk
NCCR found that one of the most important outcomes of the initial state projects was a more sophisticated and cohesive staff. Participants came away with a better understanding of their own programs and those of their colleagues: how, for example, a waste division's policy on the incineration of used motor oil might affect air and water quality. The state projects suggest that this educational process makes for better public management, though no one has attempted to quantify the results.
The strength of the comparative risk process appears to be its capacity to frame important public policy questions and to engage people in a productive attempt to answer them. Its weakness is that so many of the answers are uncertain, or unwelcome, or both. As used by the states, comparative risk approaches have added depth to policy debates and helped decisionmakers set priorities in times of both budgetary expansion and contraction. (Washington's Department of Ecology, for instance, used the knowledge from its project to target cuts to minimize their impacts, an approach far superior to the more typical across-the-board squeeze.)
With or without comparative risk projects, states and cities have continued to make environmental investments in response to federal requirements and public expectations. In addition, though, these projects have brought together—often for the first time—scientists and laypeople, industrialists and environmental activists, bureaucrats and the people they are paid to serve, state regulators and their federal counterparts.
Participants often leave with deep new insights into both their natural and political environments, and can continue to influence environmental management decisions from town halls to Congress. Indeed, the experiences gained by states and cities now inform successive projects as the states continue to assert their own competence to set priorities and manage environmental problems.
Richard A. Minard Jr. is associate director of the Center for Competitive Sustainable Economies at the National Academy of Public Administration in Washington, D.C., which produced the recent report, Setting Priorities, Getting Results: A New Direction for the EPA. The former director of Vermont's comparative risk project, he was also the founding director of the Northeast Center for Comparative Risk at Vermont Law School. This article was adapted from the author's chapter in RFF's newly published book, Comparing Environmental Risks: Tools for Setting Government Priorities, edited by Terry Davies (see page 15).
A version of this article appeared in print in the January 1996 issue of Resources magazine.