Using causal inference in environmental impact evaluation

Liz Law

More complex than BACI design, but you may not need as much information as you think

At the recent ICCB in Baltimore, I was lucky enough to score a place in the “Environmental impact evaluation and causal inference” workshop run by Paul Ferraro and Merlin Hanauer. I would highly recommend that everyone working in conservation-related fields explore this area. The two main conclusions I took from the course were that: a) once again, my undergraduate education was flawed: before-after control-impact (BACI) is NOT the epitome of experimental design, particularly for conservation impact evaluation, and b) to provide policy-relevant information you may not need as much data as you think.

We’ve all heard the calls for evidence-based environmental policy, and recognize the relative paucity of studies that evaluate the effectiveness of conservation interventions. A common belief is that the data for such evaluations are simply not available, as much due to time and financial constraints as to any lack of motivation or will. Yet this belief may rest on false premises, a result of BACI design being celebrated so religiously in our undergraduate training. The field of causal inference and impact evaluation has long since moved on.


Identifying causal effects

“Correlation does not imply causation and lack of correlation does not imply lack of causation.”

To identify causal effects we are really trying to eliminate rival explanations that may mimic or mask a relationship between a cause and an effect. We need to estimate the counterfactual: what would have happened in the absence of an action or intervention. In real-world examples this is often complicated by bias in which units are selected for treatment, or by the fact that only one unit was treated (e.g. see Coffman and Noy 2012).

For example, protected areas are usually placed in areas of high natural beauty (be this biological or geological), but also of low conflict with other land uses. Often the land selected for protected areas is steep, infertile, high, dry, and/or remote. This selection bias means that we cannot estimate the impact of protected areas on, say, logging by observing and comparing rates of deforestation within parks and in the surrounding matrix (even if we do this before and after park establishment, as under a BACI design). This approach is likely to overestimate the effect of protected areas: in the absence of park establishment, the areas that would have been selected as parks would have had a lower rate of deforestation than the areas not selected. That is, the counterfactual (the fate of areas selected for a park, had no parks been established) is unobservable.
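The size of this bias can be made concrete with a small simulation (entirely hypothetical numbers of my own: remoteness cuts baseline deforestation pressure from 0.30 to 0.10, and protection itself reduces it by a further 0.05):

```python
import random

random.seed(0)

# Hypothetical simulation of park selection bias: "parks" are preferentially
# sited on remote land, and remote land would see less deforestation even
# without protection. The true protection effect here is a 0.05 reduction
# in deforestation probability.
n = 20000
sites = []
for _ in range(n):
    remote = random.random() < 0.5
    protected = remote and random.random() < 0.8   # parks mostly go on remote land
    p_clear = (0.10 if remote else 0.30) - (0.05 if protected else 0.0)
    sites.append((protected, random.random() < p_clear))

def rate(rows):
    """Fraction of sites deforested."""
    return sum(cleared for _, cleared in rows) / len(rows)

# Naive park-vs-matrix comparison, ignoring how parks were sited.
naive = rate([s for s in sites if not s[0]]) - rate([s for s in sites if s[0]])
print(f"true effect: 0.050, naive park-vs-matrix estimate: {naive:.3f}")
# The naive comparison attributes the remoteness difference to protection,
# so it substantially overstates the true 0.05 effect.
```

The naive estimate comes out several times larger than the true effect, purely because of where the parks were placed.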

Clearly recognizing this, however, points the way to a solution. It allows us to identify potential biases and include them in the study as sources of heterogeneity (essentially multiple control groups). Careful reasoning, clearly stated assumptions, and detailed “matching” of sub-samples allow us to partition the effect into the part due to the establishment of protected areas and the part due to selection bias. Importantly, this approach makes explicit that the results still hinge on the assumptions made in the first instance.
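As a rough illustration of the matching idea (the toy data and variable names are my own, not from the workshop), a minimal nearest-neighbour estimator pairs each protected unit with the unprotected unit most similar on observed covariates such as slope and remoteness, then averages the outcome differences:

```python
# Minimal sketch of 1:1 nearest-neighbour covariate matching (hypothetical data).
# Each unit: covariates x, a treatment flag, and an outcome (deforestation rate).

def nearest_neighbour_att(units):
    """Average treatment effect on the treated, via 1:1 matching."""
    treated = [u for u in units if u["treated"]]
    controls = [u for u in units if not u["treated"]]
    diffs = []
    for t in treated:
        # Squared Euclidean distance on the observed covariates.
        match = min(controls, key=lambda c: sum(
            (a - b) ** 2 for a, b in zip(t["x"], c["x"])))
        diffs.append(t["outcome"] - match["outcome"])
    return sum(diffs) / len(diffs)

# Toy data: x = (slope, remoteness); protected sites sit on steep, remote land,
# so only the similar-terrain control is a fair comparison.
units = [
    {"x": (0.9, 0.8), "treated": True,  "outcome": 0.05},
    {"x": (0.8, 0.9), "treated": True,  "outcome": 0.04},
    {"x": (0.9, 0.9), "treated": False, "outcome": 0.10},  # similar terrain
    {"x": (0.1, 0.2), "treated": False, "outcome": 0.30},  # flat, accessible
]
print(nearest_neighbour_att(units))
```

Both protected units get matched to the similar-terrain control rather than the flat, accessible one, so the flat site's high deforestation rate never contaminates the estimate. Real applications use richer matching schemes (propensity scores, calipers, bias adjustment), but the logic is the same.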

Partial identification offers a way to use potentially sketchy data. Generally, it is only through increasingly strong assumptions that we can obtain narrow estimates. While scientists celebrate narrow margins of error, these generally rest on assumptions that may be neither credible nor tenable in broader situations. Policy, on the other hand, may prefer to be broadly right rather than precisely wrong. Partial identification can be used, for example, to estimate bounds on the range of a potential effect (e.g. Ferraro et al. 2012).
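A minimal sketch of this idea, assuming a binary outcome and Manski-style “no assumptions” bounds (my own illustration, not the specific method of Ferraro et al. 2012): the counterfactual outcomes we never observe can only lie between 0 and 1, which still yields informative bounds on the average effect.

```python
# Manski-style "no assumptions" bounds on the average treatment effect (ATE)
# for a binary outcome. We never observe y(1) for untreated units or y(0)
# for treated units; bounding those unobserved means by [0, 1] bounds the ATE.

def manski_bounds(data):
    """data: list of (d, y) pairs with d, y in {0, 1}. Returns (lower, upper)."""
    n = len(data)
    p1 = sum(1 for d, _ in data if d == 1) / n          # share treated
    p0 = 1 - p1
    ey1_treated = (sum(y for d, y in data if d == 1)
                   / max(1, sum(1 for d, _ in data if d == 1)))
    ey0_control = (sum(y for d, y in data if d == 0)
                   / max(1, sum(1 for d, _ in data if d == 0)))
    # Fill in the worst/best case (0 or 1) for each unobserved counterfactual.
    lower = (ey1_treated * p1 + 0 * p0) - (ey0_control * p0 + 1 * p1)
    upper = (ey1_treated * p1 + 1 * p0) - (ey0_control * p0 + 0 * p1)
    return lower, upper

example = [(1, 1), (1, 0), (0, 0), (0, 0)]
lo, hi = manski_bounds(example)
print(f"ATE bounds: [{lo:.2f}, {hi:.2f}]")
# With no assumptions the bounds always have width 1 for a binary outcome;
# each credible assumption added (monotonicity, matching, etc.) narrows them.
```

The point is exactly the trade-off in the paragraph above: wide but assumption-free bounds may still be enough to rule a policy in or out, which can be more useful than a narrow interval resting on shaky assumptions.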

Applications of causal inference are inherently logical, but they require a good dose of rational thinking and clarity of reasoning to execute well. I highly recommend that everyone in conservation research gain at least a little understanding of this field, and I encourage undergraduate courses, particularly those dedicated to conservation and environmental management, to consider integrating some of these concepts and moving away from the mantra of BACI design.


References and short reading list:

Coffman, M and I Noy. 2012. Hurricane Iniki: measuring the long-term economic impact of a natural disaster using synthetic control. Environment and Development Economics 17: 185-205.

Ferraro, PJ. 2009. Counterfactual thinking and impact evaluation in environmental policy. In M. Birnbaum & P. Mickwitz (Eds.), Environmental program and policy evaluation. New Directions for Evaluation 122: 75-84.

Ferraro, PJ and MM Hanauer. 2011. Protecting ecosystems and alleviating poverty with parks and reserves: ‘win–win’ or tradeoffs? Environmental and Resource Economics 48(2): 269.

Ferraro, PJ, Hanauer, MM and KR Sims. 2011. Conditions associated with protected area success in conservation and poverty reduction. Proceedings of the National Academy of Sciences 108(34): 13913-13918.

Ferraro, PJ, McIntosh, C, and M Ospina. 2007. The effectiveness of listing under the U.S. Endangered Species Act: an econometric analysis using matching methods. Journal of Environmental Economics and Management 54(3): 245-261.

Ferraro, PJ, Pattanayak, S, Sills, RE, and S Cordero. 2012. Do payments for environmental services reduce deforestation? A farm-level evaluation from Costa Rica. Land Economics 88: 382-399.

Liscow, Z. 2013. Do property rights promote investment but cause deforestation? Quasi-experimental evidence from Nicaragua. Journal of Environmental Economics and Management 65(2): 241-261.

Morgan, SL and C Winship. 2007. Counterfactuals and Causal Inference: Methods and Principles for Social Research. Cambridge University Press.


Randall Munroe
