TOXIC SUBSTANCES control is today a broad governmental endeavor involving many federal agencies and programs. There are about two dozen regulatory statutes, administered by more than a half-dozen agencies, that cover one or another of the means through which chemicals can threaten human health or the environment: workplaces, air and water pollution, pesticides, food additives, drugs, cosmetics, consumer products, waste disposal, and transport accidents.
In 1978 there were many heated controversies about the policies and actions of these agencies. These controversies are too numerous and complex to cover here, but out of them emerge several major themes.
Coping with the weight of the numbers. First, the agencies are beginning to recognize and respond to the magnitude of the problem—the sheer numbers of chemicals whose safety must be evaluated and for which standards or other regulation must be considered. A few figures convey the weight of the numbers and the agencies' lack of success in dealing with it to date. About 70,000 chemical substances are already in commercial use. Between 1,000 and 2,000 new chemicals come into use each year. Relatively few chemicals have been adequately screened for hazards to human or natural systems. For most, little is known about the magnitude of human and environmental exposures.
Although most chemicals probably pose no serious risks, we already know of several thousand substances that can cause grave health and environmental harm. The most prominent effect is the "slow epidemic" of cancer. Other important effects include mutations, birth defects, sterility, respiratory and circulatory diseases, nervous disorders, changes in animal reproduction and plant growth, and even changes in the global climate.
Against this backdrop, the handful of chemicals regulated to date is disappointingly small. For example, in eight years the Occupational Safety and Health Administration (OSHA) has set standards for fewer than thirty chemicals causing cancer or other serious illnesses in workers. The Environmental Protection Agency (EPA) has set only four hazardous air pollutant and six toxic water pollutant standards, and has regulated only a handful of drinking water contaminants and pesticides. The output of other agencies is similarly small.
Over the last several years the agencies have taken modest steps to increase their output, largely under pressure from environmental and consumer groups and from organized labor. The EPA, for example, has proposed a carbon filtration requirement for drinking-water systems that will reduce levels of several hundred organic contaminants found in most cities' tap water. The agency has committed itself to regulating at least 65 other toxic water pollutants (or groups of pollutants) released from 21 major industrial source categories. It is considering a similar commitment to regulating airborne carcinogens released from factories.
OSHA has attempted a more ambitious undertaking to speed the regulation of carcinogens. In late 1977, recognizing that its current chemical-by-chemical approach would never allow it to cope with the 1,500 to 2,000 potential carcinogens found in workplaces, OSHA proposed a generic approach to carcinogens (described in last year's Highlights). The agency wishes to resolve in a single action a number of issues about the scientific conclusions that may be drawn and about the control actions that must be taken when animal tests or other evidence demonstrates a cancer risk. The proposal was debated extensively in several months of hearings in mid-1978, and a heated, less visible debate continues within the administration. The major issues are (1) how OSHA will determine which substances to tackle first, and (2) how OSHA will determine how stringently to regulate them.
In some other programs, however, efforts to deal with the weight of the numbers have fallen on hard times, and in some areas they have not begun at all. The Consumer Product Safety Commission (CPSC) proposed a generic approach to carcinogens similar to OSHA's in June 1978. However, a court in Louisiana has enjoined the CPSC from using its generic approach, on the grounds that the commission did not observe the proper administrative procedures in establishing it.
The most troubling problems exist in the EPA's program to implement the Toxic Substances Control Act, which is in many ways the keystone of the regulatory scheme for chemicals. The agency, which has fallen far behind the statutory implementation schedule, has thus far failed in a major effort to set guidelines for testing new chemicals that would have drastically simplified the evaluation of the chemicals' safety and markedly increased the numbers of substances that the EPA could review and control. The agency also has not yet initiated the required program for testing selected chemicals already in use, a key element in the effort to catch up with the existing hazards. Even when the toxic substances program is fully operative, the agency expects to be able to set controls for fewer than a dozen substances per year.
Clearly, the agencies need to do much more to meet the weight of the numbers. This is particularly the case for effects other than cancer. More staff and resources would help, of course, but progress also depends on moving away from the chemical-by-chemical approach and toward generic actions that deal with many hazards at a time.
Coordination. The second theme of the past year is an increasing emphasis on coordinating the many fragmented regulatory and research programs related to toxic substances control. Two major interagency bodies were established in late 1977 and undertook certain tasks in 1978. The EPA, OSHA, CPSC, and the Food and Drug Administration (FDA) formed the Interagency Regulatory Liaison Group (IRLG). Through this body they are cooperating on regulation of some two dozen substances and groups of substances, including asbestos, vinyl chloride, and other "celebrity" substances that still have not been fully controlled. The IRLG is also working on certain information-sharing tasks, on the development of common guidelines for epidemiological and other studies, and on other projects. Another body, the Toxic Substances Strategy Committee, was formed at the request of President Carter in his 1977 environmental message. The Strategy Committee is composed of eighteen research and regulatory agencies and is chaired by the Council on Environmental Quality. The committee has undertaken coordination tasks similar to those of the IRLG.
Both bodies are working hard to develop a government-wide cancer policy and a generic approach to carcinogens. Because so many more agencies are involved and because the stakes are so high, this effort seems even more difficult than a single agency's choosing an approach for its use alone.
The dividends from a consistent government-wide approach to carcinogens, however, would be well worth the investment.
Unfortunately, there has been a significant problem of coordinating the coordinators. Considerable progress toward more efficient and consistent government policy is being made, but progress is hampered by the territorial jealousies common in all institutions, public or private.
Spurious quantification. One regressive development of the past year deserves mention: the trend toward what may be termed spurious quantification. Some commentators—notably the Council on Wage and Price Stability (CWPS), which reviews proposed regulations for the White House—have been urging the regulatory agencies to adopt certain highly quantitative techniques for risk and benefit assessment as guides to decision making. Other observers, however, believe that the advocates of these techniques are trying to squeeze more from the techniques than they can provide.
The advocacy of these quantitative techniques is part of a larger effort to force decisions to be made on the basis of "cost-benefit," "risk-benefit," or "cost-effectiveness" analysis. As is well known, the application of these forms of analysis to environmental policy problems runs squarely into problems of values and equity. These forms of analysis require making tenuous and controversial assumptions about (1) how to compare unmarketed commodities such as human life and health, the survival of other species, and the integrity of natural systems with economic values; and (2) fair distributions of benefits and burdens between contemporaries or between different generations. But even if these problems were resolved—by no means a likely prospect—the application of these forms of analysis to toxic substances control would still be rendered largely meaningless by fundamental weaknesses in the reliability of the quantitative techniques.
To understand what makes most quantification in this area spurious, one must appreciate the magnitude of the uncertainties in the toxic substances area. With regard to risks, even where a qualitative relationship is strongly indicated—for example, that a substance causes cancer in workers—good quantitative information about the size of the risk generally is lacking. The risk at various doses, particularly low ones, and the number of persons exposed at various levels usually are highly uncertain. The relative sensitivity of humans and experimental animals—from which most of the qualitative evidence derives—is generally unknown. Likewise, the impact of synergistic combinations of chemicals is usually unknown. In this context, quantitative risk estimates have incredibly broad confidence limits. For instance, the FDA and the National Cancer Institute estimated that between 600 and 700 persons may get bladder cancer each year from saccharin. According to the National Academy of Sciences, however, the number could be as low as 0.0007 or as high as 3,640 persons, a range of 10^7.
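The width of the range quoted above can be verified with simple arithmetic. A minimal sketch (the bounds are the National Academy of Sciences figures cited in the text; the code itself is purely illustrative):

```python
import math

# NAS bounds on annual saccharin-induced bladder cancer cases
# (persons per year), as quoted in the text.
low, high = 0.0007, 3640

ratio = high / low           # width of the range as a ratio
orders = math.log10(ratio)   # the same width in orders of magnitude

print(f"ratio = {ratio:.3g}")                 # about 5.2 million
print(f"orders of magnitude = {orders:.1f}")  # about 6.7, i.e. roughly 10^7
```

The ratio of the upper to the lower bound is about five million, roughly seven orders of magnitude, which is the "range of 10^7" the estimate describes.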
There are often similar uncertainties—usually not appreciated—in data on the economic consequences of stringent regulations. Long-term impacts and complex substitutions are often not well understood. For severe changes, the confidence limits of estimates may be nearly as wide as those on the risk side. Yet economic assessments rarely specify their uncertainty.
A naive reliance on these quantitative estimates can lead one to make close distinctions that are completely unreliable. One might be tempted to treat saccharin as only half as dangerous as another chemical which is predicted to kill 1,200 to 1,400 persons annually. But in light of the massive uncertainties in the estimates for both substances, the distinction is nearly meaningless.
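The emptiness of such a comparison can be made concrete by looking at the uncertainty intervals rather than the point estimates. In the sketch below, the second substance and its figures are hypothetical, patterned after the saccharin numbers discussed above:

```python
# Point estimates and uncertainty intervals (deaths per year).
# "saccharin" uses the figures quoted in the text; "other" is a
# hypothetical substance with a point estimate twice as high.
saccharin = {"point": 650,  "interval": (0.0007, 3640)}
other     = {"point": 1300, "interval": (0.0014, 7280)}

def overlap(a, b):
    """Return the overlapping portion of two intervals, or None."""
    lo = max(a[0], b[0])
    hi = min(a[1], b[1])
    return (lo, hi) if lo < hi else None

shared = overlap(saccharin["interval"], other["interval"])
print(shared)  # the intervals overlap almost completely, so the
               # twofold difference in point estimates settles nothing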
Notwithstanding the uncertainties, the proponents of the quantitative techniques frequently advocate assigning unjustifiably precise numbers to health risks and economic consequences, making unsupportably close distinctions between regulatory alternatives of different stringency, and drawing equally fine rankings of which substances to regulate first. This approach has found acceptance in the recent decision of the Fifth Circuit Court of Appeals in American Petroleum Institute v. OSHA, a case overturning that agency's restrictions on occupational exposure to benzene. The court held that OSHA must supply a quantitative estimate of the risk reduction its standard would achieve and that the agency then must show the existence of a "reasonable relationship" between this benefit and the costs of controlling benzene emissions.
The advocates of this approach presume that it would make possible more intelligent resource allocation decisions than the rougher, largely subjective balancing approaches now in use. But if the estimates of the health risks and economic consequences each could be in error by a factor of as much as 10^7, then it is not easy to make meaningful comparisons of costs and benefits. In this light, the attempts at precise quantification are uninformative and possibly misleading.
Congress has consistently shown an awareness of the spurious quality of such quantification exercises. Aware that the human and environmental costs of waiting for certain knowledge may be very high, Congress has directed the agencies to act on the basis of suggestive but uncertain evidence—so-called precautionary regulation. While many of the laws require agencies to balance health and wealth interests, none requires the extreme kind of quantitative analysis discussed here. In the legislative histories of the 1976 Toxic Substances Control Act and the 1977 Clean Air Amendments (the most recent statements of policy in this area) Congress explicitly rejected such a requirement. Contrary to the Fifth Circuit, other courts have seen the need to avoid turning control decisions on such imperfect techniques. Currently, OSHA is seeking Supreme Court review of the benzene case.
The quantification techniques have a legitimate role at a coarser level of analysis. When the predicted risks or benefits differ by several orders of magnitude, distinctions become more reliable. Distinctions of such magnitudes might be a valid basis for sorting substances into several large categories of different priority for regulation, or for different stringencies of regulation. In light of the uncertainties, such rougher distinctions may be the best resource allocation decisions that can be made.
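The kind of coarse sorting described above can be sketched in a few lines. The substance names, risk figures, and category thresholds here are all invented for illustration; the point is only that categories separated by orders of magnitude are robust to the huge uncertainties in any single estimate:

```python
# Illustrative predicted annual risks (cases per year) for
# hypothetical substances; names and figures are invented.
predicted_risk = {"A": 2000.0, "B": 35.0, "C": 0.4, "D": 1500.0}

def priority_bin(risk):
    """Assign a coarse priority category by order of magnitude,
    rather than ranking substances on unreliable point estimates."""
    if risk >= 1000:
        return "high"
    elif risk >= 1:
        return "medium"
    else:
        return "low"

bins = {name: priority_bin(r) for name, r in predicted_risk.items()}
print(bins)  # A and D share a bin despite different point estimates
```

Substances whose estimates differ by less than an order of magnitude land in the same bin, so the scheme never asks the estimates to support a finer distinction than they can bear.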
These three themes will remain important in 1979 and future years. How well the agencies respond to these problems will determine to a great degree how successfully they will control the health and environmental hazards of the chemical age.
(For a more positive view of benefit-cost analysis, see "Environmental quality: Worth the cost?" page 8 of this issue.)