In ancient Egypt, the Nile River could yield its bounteous flood for 30 years in succession, and then have two dry years in which all harvests failed. If the ancient Egyptians knew in advance exactly when the Nile would fail to flood, they would not have needed scribes, taxation, writing, calculations, surveying, geometry, or astronomy. Civilization owes much to risk. Without uncertainty there is no risk, only adversity.
Risk is a wily adversary, obliging us to relearn the same lessons over and over. Why do we build flimsy upscale houses in the paths of hurricanes? Why does the lure of short-term gain foil the best minds in business and finance? Why do we bet the farm on "slam dunk" assessments, despite evidence to the contrary? In short, why don't we learn to manage risk and uncertainty?
For the ancient Egyptians, it was a matter of detailed recordkeeping and building storehouses of grain to prepare against drought. Today, we have new tools and approaches for quantifying risk, as this brief history of modern quantitative risk analysis outlines. In the interest of brevity, we'll focus here on the three dominant "actors."
Aerospace
Systematic interest in a new form of quantitative risk analysis, probabilistic risk assessment (PRA), began in the aerospace sector following the 1967 Apollo launch pad fire in which three astronauts were killed. Prior to that accident, NASA relied on its contractors to apply good engineering practices to provide quality assurance and quality control. NASA's Office of Manned Space Flight subsequently initiated the development of quantitative safety goals in 1969, but they were not adopted. The reason given at the time was that managers would not appreciate the uncertainty in risk calculations.
Following the inquiry into the Challenger accident of January 1986, we learned that distrust of reassuring risk numbers was not the only reason that PRA was abandoned. Rather, initial estimates of catastrophic failure probabilities were so high that their publication would have threatened the political viability of the entire space program. Since the shuttle accident, NASA has instituted quantitative risk analysis programs to support safety during the design and operations phases of manned space travel.
Nuclear Power
Throughout the 1950s, following President Eisenhower's "Atoms for Peace" program, the U.S. Atomic Energy Commission pursued a philosophy of risk management based on the concept of a "maximum credible accident." Because credible accidents were covered by plant design, residual risk was estimated by studying the hypothetical consequences of "incredible accidents." An early study released in 1957 focused on three scenarios of radioactive releases from a 200-megawatt nuclear power plant operating 30 miles from a large population center. Regarding the probability of such releases, the study concluded that no one knows how, or when, we will ever know the exact magnitude of this low probability.
Successive design improvements were intended to reduce the probability of a catastrophic release of the reactor core inventory. Because the methods described above examined only the consequences of postulated releases, not their probabilities, such improvements could have no visible impact on risk as so studied. At the same time, plans were being drawn for reactors in the 1,000-megawatt range located close to population centers, developments that would certainly worsen the consequences of an incredible accident.
The desire to quantify and evaluate the effects of these improvements led to the introduction of PRA. Whereas the earlier studies had dealt with uncertainty by making conservative assumptions, the goal now was to provide a realistic assessment of risk, which necessarily involved an assessment of the uncertainty in the risk calculation. Basic PRA methods developed in the aerospace program in the 1960s found their first full-scale application, including accident consequence analysis and uncertainty analysis, in the 1975 Reactor Safety Study published by the Nuclear Regulatory Commission (NRC).
The study caused considerable commotion in the scientific community, so much so that Congress created an independent panel of experts to review its achievements and limitations. The panel concluded that the uncertainties had been "greatly understated," leading to the study's withdrawal.
Shortly after the Three Mile Island accident, a new generation of PRAs appeared in which some of the methodological defects of the Reactor Safety Study were avoided. The NRC released the Fault Tree Handbook in 1981 and the PRA Procedures Guide in 1983, which shored up and standardized much of the risk assessment methodology. An authoritative review of PRAs conducted after Three Mile Island noted the necessity of modeling uncertainties properly if PRAs are to be used as a management tool.
A 1991 set of NRC studies known as NUREG-1150 used structured expert judgment to quantify uncertainty and set new standards for uncertainty analysis, in particular with regard to expert elicitation. Next came a joint U.S.–European program for quantifying uncertainty in accident consequence models. Expert judgment methods, as well as screening and sensitivity analysis, were further elaborated. European studies building on this work apply uncertainty analysis to European consequence models and provide extensive guidance on identifying important variables; selecting, interviewing, and combining experts; propagating uncertainty; inferring distributions on model parameters; and communicating results.
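To give a concrete sense of what "propagating uncertainty" involves, the sketch below runs a Monte Carlo propagation through a deliberately simplified consequence model: uncertain inputs are drawn from assumed distributions (standing in for what an expert elicitation might supply) and pushed through the model to yield a distribution of outcomes rather than a single best estimate. The model, parameter names, and distributions here are invented for illustration and are not taken from any of the studies cited above.

    # Minimal sketch of Monte Carlo uncertainty propagation through a toy
    # consequence model. Model, parameters, and distributions are hypothetical
    # illustrations, not those of any study cited in the text.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    n = 100_000  # number of Monte Carlo samples

    # Uncertain inputs, standing in for distributions an expert elicitation might supply
    release_fraction = rng.lognormal(mean=np.log(0.01), sigma=1.0, size=n)  # fraction of inventory released
    dilution_factor = rng.uniform(low=0.1, high=1.0, size=n)                # atmospheric dilution

    # Toy consequence model: consequence proportional to release times dilution
    inventory = 1.0e6  # source term, arbitrary units
    consequence = inventory * release_fraction * dilution_factor

    # Report a distribution, not a single "best estimate"
    print("median consequence:", np.median(consequence))
    print("5th-95th percentiles:", np.percentile(consequence, [5, 95]))

Reporting percentiles of the output rather than one number is the basic move that distinguishes an uncertainty analysis from a point estimate of risk.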
National Research Council
The National Research Council has been a persistent voice in urging the government to enhance its risk assessment methodology. A 1989 report entitled Improving Risk Communication inveighed against minimizing the existence of uncertainty and noted the importance of considering the distribution of exposure and sensitivities in a population. The issue of uncertainty was a clear concern in the National Research Council reports on human exposure assessment for airborne pollutants and on ecological risk assessment. The 1994 landmark study Science and Judgment gathered many of these themes in a plea for quantitative uncertainty analysis as "the only way to combat the 'false sense of certainty,' which is caused by a refusal to acknowledge and (attempt to) quantify the uncertainty in risk predictions."
The 2003 National Academy of Sciences report Estimating the Public Health Benefits of Proposed Air Pollution Regulations identified three barriers to the acceptance of recent EPA health benefit analyses: the large amounts of uncertainty inherent in such analyses, EPA's manner of dealing with them, and the fact that "projected health benefits are often reported as absolute numbers of avoided death or adverse health outcomes."
In 2006, the Office of Management and Budget released a draft bulletin proposing technical guidance for risk assessments produced by the federal government. A National Research Council review subsequently found many shortcomings in this proposal and recommended that it be retracted. A revision is currently in preparation. A recent National Research Council publication, Science and Decisions, attempts to advance risk assessment at EPA by harmonizing a diversity of approaches and methods.
The amateurism and shortsightedness displayed during Hurricane Katrina, and still evident in its aftermath, might suggest that 5,000 years of civilization have taught us nothing about risk. Not true: we have learned a great deal about risk, as the articles in this special issue attest. However, the more we learn, the more complex are the assets we put at risk. The question is not whether we are learning, but whether we are learning fast enough. Does our understanding of risk keep pace with the risks we ourselves create?