To Control Risks, Use Precaution With Care, Says Risk Expert Charnley

By Kenneth Green - February 19, 2001 12:00 AM

Though it rarely makes the front page of most newspapers, an international debate is raging over the approach societies take to controlling environmental risks. The most obvious environmental risks include potential exposure to chemicals that have been released into the air, water, soil, or food supply.

But environmental risk extends beyond direct exposures. "Environmental risk" is also used to refer to threats faced by ecosystems or animal species stemming from human activity, as well as the risk to humanity of losing the supporting services of a damaged ecosystem. Thus air pollution and global warming are often discussed in risk-management terms, but so are the preservation of species, the use of genetically modified plants in food, the protection of groundwater, and more.

At one pole of the debate over how society manages risk are those who support a scientific approach to risk management that allows priorities to be set for risk-management resources. Under such an approach, regulations are generally favored only if rigorous scientific testing has demonstrated that an alleged risk is real and that the proposed regulatory remedy would do more good than harm. This approach is generally called a "risk-based" approach, since its first step is a scientific assessment of the risk.
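To make the arithmetic behind a "risk-based" approach concrete, a quantitative risk assessment typically estimates how much of a substance people actually take in, multiplies that dose by a measure of the substance's potency, and compares the result against a screening benchmark. The short Python sketch below illustrates that calculation for a hypothetical drinking-water contaminant; the structure follows standard chronic-exposure risk arithmetic, but every number in it, including the one-in-a-million screening level, is an illustrative assumption rather than a figure drawn from this article.

    # Illustrative sketch of a quantitative cancer-risk screening calculation.
    # The structure follows standard chronic-exposure risk arithmetic; all
    # numeric inputs below are hypothetical examples, not data from this article.

    def chronic_daily_intake(concentration_mg_per_l, intake_l_per_day,
                             exposure_days_per_year, exposure_years,
                             body_weight_kg, averaging_time_days):
        """Average daily dose (mg per kg of body weight per day) over a lifetime."""
        return (concentration_mg_per_l * intake_l_per_day *
                exposure_days_per_year * exposure_years) / (
                body_weight_kg * averaging_time_days)

    def excess_lifetime_cancer_risk(cdi_mg_per_kg_day, slope_factor_per_mg_kg_day):
        """Risk estimate = dose multiplied by potency (the 'slope factor')."""
        return cdi_mg_per_kg_day * slope_factor_per_mg_kg_day

    # Hypothetical drinking-water scenario.
    cdi = chronic_daily_intake(
        concentration_mg_per_l=0.005,   # assumed contaminant level in water
        intake_l_per_day=2.0,           # assumed adult water consumption
        exposure_days_per_year=350,
        exposure_years=30,
        body_weight_kg=70.0,
        averaging_time_days=70 * 365,   # lifetime averaging for carcinogens
    )
    risk = excess_lifetime_cancer_risk(cdi, slope_factor_per_mg_kg_day=0.02)  # assumed potency

    # Regulators often screen estimates like this against a range such as
    # one-in-a-million (1e-6) to one-in-ten-thousand (1e-4) excess lifetime risk.
    print(f"Estimated excess lifetime cancer risk: {risk:.1e}")
    print("Above 1-in-a-million screening level" if risk > 1e-6 else "Below screening level")

The point of the sketch is the structure rather than the particular numbers: an explicit estimate of exposure and potency, not merely the existence of a hazard, is what distinguishes the risk-based approach described above.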

At the other pole of the debate are environmental advocacy groups such as Greenpeace, which promote regulation before an alleged risk has been scientifically demonstrated and before the impacts of the regulatory remedy have been scientifically analyzed. This approach is often said to embody "the precautionary principle," since regulations are crafted on the basis of precaution rather than in response to a demonstrated risk or hazard.

Yet another dimension of the debate over managing risks revolves around children and how they are best protected. Some have argued that children, because they are rapidly growing and developing, are more susceptible to harm from exposure to environmental chemicals. Former EPA Administrator Carol Browner was a proponent of this view, and the Office of Children's Health Protection was created on the presumption that children require additional levels of protection. Others, including Dr. Gail Charnley, former executive director of the Presidential/Congressional Commission on Risk Assessment and Risk Management, whose interview follows, have observed that the situation is more complex and that children may in fact be more susceptible, less susceptible, or just as susceptible to harm from environmental chemicals as adults are. This view suggests that current regulatory approaches already protect children and that calls for more stringent regulation should be considered on a case-by-case basis.

A toxicologist and risk management expert, Dr. Charnley holds an adjunct faculty position at the Harvard Center for Risk Analysis and is the immediate past president of the international Society for Risk Analysis. She generously agreed to share her insights about risk management with Tech Central Station readers. Here is what she told me:

Green: Based on your work with the Commission on Risk Assessment and Risk Management, what would you say are the biggest problems with the way environmental risk is assessed and managed in the United States today?

Charnley: The biggest problem with the way environmental risks are managed in the United States today is the chemical-by-chemical, environmental medium-by-environmental medium, risk-by-risk approach dictated by current statutes and regulations. This fragmented approach evolved as a result of divided responsibilities and jurisdictions within Congress, which in turn created separate programmatic fiefdoms within EPA. As a result, most of our current regulatory approaches are "bottom-up," that is, they start with a potential cause and then try to eliminate it without determining the extent to which it actually may contribute to a problem. A bottom-up approach makes it difficult to set priorities among risks or to evaluate whether a risk management action has had an impact on a public health problem.

Instead, we need to refocus our priorities by taking a broader view that looks at risks more comprehensively. The chemical-by-chemical approach has given us an important basis for environmental protection and should not be abandoned. However, to reach the next level of environmental protection, we need to stop arguing about what the numerical regulatory standards should be for each chemical and instead ask: What exposures pose the most immediate threats to our health and our environment, and how can we control them?

To that end, the Risk Commission and others have recommended a public health approach to environmental risk management. A public health approach to risk management can help to assess aggregate risks and to target risk management resources by focusing attention on the health effects experienced by a population and the relative contributions of different pollutant sources or other problems to those effects. A public health approach is a "top-down" approach that starts by focusing on a problem and then seeks to identify what's causing the problem as a guide to determining how best to solve it. A public health approach emphasizes prevention instead of cleaning up after the fact and focuses on the effectiveness of actions instead of relying on regulatory command and control. The future of environmental health risk management will depend on our ability to move beyond our chemical-by-chemical focus and learn how to look at risks collectively instead of one at a time.

Green: Traditionally, activities that might pose a risk to others have been subject to regulation only after a clear demonstration that such a risk exists, a demonstration generated through the process of quantitative risk analysis. Recently, however, some advocacy groups argue that activities should be regulated if claims of increased risk exist, even without clear evidence demonstrating the reality and magnitude of that risk. What is your opinion of this "precautionary principle" as a risk-management tool? Do you think it would improve risk management in the United States?

Charnley: Actually, in a regulatory sense, the precautionary principle guided environmental health risk management decision-making in the United States for many years. For example, in the 1950s, the zero-risk Delaney clause required the Food and Drug Administration to ban outright food and color additives that had been shown to produce tumors in humans or laboratory animals. In the 1970s, a legal basis for the precautionary principle was established by the Ethyl decision, which upheld the regulation of lead additives in gasoline. With that decision, the U.S. Court of Appeals for the D.C. Circuit acknowledged that EPA (the Environmental Protection Agency) could make decisions based on policy even if they weren't completely supportable scientifically. In 1980, however, the Benzene decision overturned the precautionary policy basis of the Ethyl decision and substituted a risk-based principle. The Benzene decision struck down a workplace standard for benzene exposure that was based on a policy of trying to reduce concentrations of benzene as far as technologically possible without considering whether existing concentrations posed a significant risk to health. The Supreme Court decided that benzene could be regulated only if it posed a "significant risk of harm." Although the Court did not define "significant risk of harm" and stressed that the magnitude of the risk need not be determined precisely, the decision strongly implied that some form of quantitative risk assessment is necessary as a basis for deciding whether a risk is large enough to deserve regulation.

So the United States has had a long history of applying the precautionary principle in regulation, but it has moved away from doing so in response to the Supreme Court and to congressional statutes that call for limits on exposure that will "protect the public health with an adequate margin of safety" or lead to "a reasonable certainty of no harm." Those statutes called on the regulatory agencies to develop means to assess risks so as to define exposure levels that would achieve the stated qualitative goals of health protection.

The precautionary principle recognizes the fundamental role of uncertainty in decisions about environmental health protection and attempts to shift the burden of ignorance towards precaution rather than inaction. In my view, the precautionary principle is not a substitute for risk-based decision-making, however. Risk assessment, after all, provides just part of the information used to protect public health and the environment. The extent to which the precautionary principle is applied in decision-making depends partly on the confidence that can be placed in a risk assessment, but also on the nature and severity of the risk of concern, the likelihood that new data would change a risk management decision, the effectiveness and feasibility of the risk management action under consideration, and a wide variety of other considerations, such as politics, public health, economics and the law. The danger I see is that the precautionary principle will be used as a license to ignore these other elements of risk management decision-making.

To a great extent, the risk-versus-precaution battle is really just the newest skirmish in the age-old battle between science and ideology. Applied moderately, the precautionary principle is just common sense. But because scientists can never prove absolutely that something bad might not happen, the precautionary principle is sometimes carried to extremes and becomes an ideology. When that happens, science is ignored and emotional and financial resources are diverted towards worrying about every potential risk, no matter how far-fetched. When used judiciously and constructively, the precautionary approach can be a useful component of decision-making and priority-setting. When used in the absence of considerations of risk, it promotes fear and politicizes science.

Green: Risk issues are frequently debated in terms of the impacts that certain substances, activities, or exposures might have upon children. You've studied this question extensively: Are children particularly susceptible to environmental exposures to pollution, chemicals, electromagnetic fields, and so on? How should this be accounted for in regulation?

Charnley: Insults that occur during development in utero or during childhood can have tragic consequences in terms of birth defects and greater likelihood of disease throughout both childhood and adulthood, placing great demands on social and emotional resources. And although the proportion of birth defects and other problems attributable to environmental exposures to chemicals is likely to be small, it could constitute a public health problem by virtue of the numbers of people affected.

The available evidence on age-related susceptibility to the effects of chemical contaminants indicates that children may be more than, less than, or just as sensitive as adults, depending on the chemical and the exposure situation. Studies show that evaluating the relative sensitivity of children and adults to chemical toxicity must be done on a case-by-case basis. It is not true that children are always more susceptible to chemical toxicity than adults. There are numerous examples of exposures to which children are more susceptible than adults and numerous examples of exposures to which children are less susceptible. Generalizations are difficult; the only unifying principle that has emerged thus far is: "It depends."

The potential benefits and costs of more stringent regulation to protect children should be weighed carefully. A precautionary approach that increases stringency on the basis that it is better to be safe than sorry implies that current regulatory strategies fail to protect children's health. Yet there is little evidence that environmental exposures play a significant role in childhood disease, nor is there evidence that, where such exposures do play a role, more stringent regulation would be preventive. More targeted strategies that address known threats to children's health are likely to have more apparent benefits.

Green: Some risk analysts argue that the cost of regulation can itself become a risk, when it results in economic underperformance, job losses, or reductions in disposable income. Do you agree with this assessment? How should such "side-effects" of regulations meant to reduce risk be accounted for in the regulatory process?

Charnley: To my knowledge, there is little evidence that greater regulation leads to job loss or reductions in disposable income. Often, greater regulation changes the nature of available jobs, increasing technology jobs, for example, in response to new needs such as methods of manufacturing that use less energy or produce less waste. A bigger risk of inefficient regulation is ineffective public health protection, with large amounts of money devoted to regulating insignificant risks in inefficient ways, leaving known public health risks unaddressed or poorly addressed. Real health and environmental problems -- like understanding heart disease in women, reducing asthma in children, improving minority access to medical care, and preventing habitat destruction -- are often underfunded, overlooked, or on the back burner. Misallocation of resources that could be used to protect public health and the environment is a priority-setting problem that itself threatens public health and the environment. As I discussed in my answer to an earlier question, we should be asking ourselves: What problems are making the biggest contributions to death and disease, and how can we prevent them? In other words, instead of relying on media-influenced public opinion to decide how to spend our resources, let's start with what we know and what the science tells us.

At the same time, risk decisions are, ultimately, public policy choices. It's the job of the risk assessor to bring as much relevant science as possible to the participants in a risk management decision, whose job it is to make the value-laden choices. Risk assessment is used along with political, cultural, economic, and other considerations to make decisions about the best ways to manage risks. Communicating effectively about risks is only part of the puzzle. More important is communicating effectively about the risk-risk tradeoffs and benefits that can result from regulatory actions, areas in which risk assessment can play an important role.

Green: What has been the biggest "success" of quantitative risk analysis in terms of improving people's quality of life in the last 20 years? The biggest failure?

Charnley: Risk analysis is the practice of using observations about what we know to make predictions about what we don't know. Risk analysis is our way of acknowledging that the future need not be completely random -- "in the hands of fate" -- and that we can try to control it by using models of the past to guide our decisions. Risk analysis is the vehicle for including science in decision-making about managing risks to our health, safety, and the environment. However, science is always uncertain, so risk analysis relies on science-based judgment to develop models in order to make predictions about what we don't know. Unfortunately, human decisions inevitably have unforeseen consequences, reflecting the basic underlying reality of what Herbert Simon called "bounded rationality": the idea that our individual human brains are much less complex than the external reality they are attempting to model. The fact is, we are only human and cannot really predict the future or anticipate all possible long-term outcomes of a risk management decision, even when risk analysis is deployed to its fullest. This phenomenon is sometimes described as the "law of unintended consequences," which is the idea that any decision we make to manage a complex system will lead to unforeseen consequences at other points in the system.

One well-known example is that of coal-fired power plants in the Midwest. In the 1960s and early 1970s, engineers designed large power plants to incorporate the "best available control technology" of the time: tall stacks extending 1,000 feet or more into the sky. This approach was modeled on what had been learned from London, England, the international leader in controlling air pollution in the 1940s and 1950s. Unfortunately, while this approach minimized one risk, it created another: the acid rain of the 1980s and '90s and the long-distance transport of pollutants to the Northeast.

DDT is an example where risk analysis failed to predict the widespread ecological impacts that resulted from its application to control pests. On the other hand, now that they are better understood, the risks from DDT are still considered acceptable in some parts of the world where malaria is the alternative, unacceptable risk.

Quantitative risk analysis has given us a framework to organize the scientific information that can be brought to bear on a risk decision, a way to clarify and communicate what is or is not known scientifically about a risk, and a way to compare risks in ways that can provide useful insight to decision-making. At the same time, quantitative risk analysis has kept us focused on assessing single chemicals in single environmental media and has constrained our ability to think about risks more comprehensively. Quantitative risk analysis will always contribute useful information to risk management decisions, but should not dictate risk management actions. Risk management needs to become more outcomes-based, with health and environmental risk management problems guiding the questions that risk analysis asks, not vice-versa.

Green: If you could mandate three changes in the way environmental risks are managed in the United States, what would those changes be?

Charnley: One, recast risk analysis as a decision-driven activity directed toward informing choices and solving problems. Risk analysis should not be performed as a stand-alone activity; it must be performed as part of a risk management decision-making process. Risk analysis should enhance practical understanding and illuminate practical choices. In other words, start with a risk management problem and use that problem to guide risk analysis.

Two, target finite risk management resources more effectively by demonstrating that they're having an impact -- that they are, in fact, improving public health and environmental quality. To do so, develop the environmental health data and infrastructure needed to provide a scientific basis for identifying, responding to, and preventing the component of chronic disease incidence that is attributable to environmental exposures. Unless and until adequate data from basic research, environmental monitoring, and public health surveillance are available, conclusions about chemically induced disease and the effectiveness of chemical regulation will remain speculative.

Three, strengthen the centrality of science as a guide to risk management while preserving democratic decision-making. Good science is a necessary -- in fact, an indispensable -- basis for good risk management, but it is not a sufficient basis. Science gives us a factual basis for making decisions and for judging the extent to which precaution may be required. Relying on broadly based, deliberative risk management decision-making processes can help guide scientific analysis and clarify the role that science should play in a risk decision.