Episode 5 in this series on principles to follow when ranking environmental projects. This one is about how to estimate the value or importance of environmental assets, and how to include that in the ranking process.
We’ve seen that measuring the benefits of an environmental project requires attention to two aspects: the change in the physical condition of the environment, and the resulting change in the values generated by the environment (PD238). Suppose we have information about the change in physical conditions. How should we convert that to a measure of value or importance that we can use to rank projects? We need to do this in a way that is consistent between the different projects that we’ll want to compare.
Let’s consider three options, which are quite different in nature, but which are all actually used in real-world environmental programs.
(a) Scientific principles
Scientists often use rules of thumb (sometimes captured in an ‘Environmental Benefits Index’) to evaluate the relative importance of different potential environmental investments. An Australian example is the ‘habitat hectares’ concept, which is used by the state government in Victoria to evaluate proposed projects. A US example is the Environmental Benefits Index developed by the Natural Resources Research Institute (NRRI) of the University of Minnesota Duluth (http://beaver.nrri.umn.edu/EcolRank/). This consists of measures of soil quality risk, water quality risk and habitat quality, each scored out of 100, and then added up to give a total score out of 300.
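To make the arithmetic of such an index concrete, here is a minimal sketch in Python. The function name and the example scores are hypothetical illustrations, not the NRRI's actual scoring rules:

```python
# Hypothetical sketch of an Environmental Benefits Index in the style of the
# NRRI example: three components (soil quality risk, water quality risk,
# habitat quality), each scored out of 100, summed to a total out of 300.

def ebi_score(soil_risk, water_risk, habitat_quality):
    """Return the total index score (0-300) from three components (each 0-100)."""
    for name, score in [("soil", soil_risk), ("water", water_risk),
                        ("habitat", habitat_quality)]:
        if not 0 <= score <= 100:
            raise ValueError(f"{name} score must be between 0 and 100")
    return soil_risk + water_risk + habitat_quality

# Two hypothetical sites, ranked by their total scores
site_a = ebi_score(60, 75, 40)   # 175
site_b = ebi_score(50, 55, 90)   # 195
```

A higher total ranks a site higher, but note that a score like this cannot by itself tell us whether benefits exceed costs.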
Key strengths of this approach include:
- The index is based on relatively sound knowledge of the natural systems.
- Once the system has been developed, the approach is relatively efficient to apply to many potential projects.
But it also has some weaknesses:
- The resulting Index scores reflect the values of experts, and there is plenty of evidence that experts and the general community sometimes think differently about what is important.
- Environmental Benefits Indexes are set up to evaluate particular types of environmental benefits and cannot evaluate projects that generate different types of benefits. For example, the NRRI’s Index is no use for evaluating projects that protect threatened species or reduce air pollution. They can only rank projects of a reasonably similar type.
- Often Environmental Benefits Indexes are not designed in a way that allows the required with-versus-without-project comparison. The NRRI index is an example: even if we know what difference the project will make to environmental condition, the index would not help us value that difference. This could potentially be addressed by improving the design of the Index, although that would require considerable effort and resources.
- Any system based on scoring, rather than dollars, cannot tell us whether the benefits of a project would exceed its costs. It can tell us how projects should be ranked, but not where the cut-off line should be for projects that are or are not worth funding. In most cases where projects are being ranked, this is not a serious problem because the overall budget is already determined. From a practical perspective, the relevant cut-off line is where the money runs out.
(b) Deliberative processes
‘A “deliberative process” is a process allowing a group of actors to receive and exchange information, to critically examine an issue, and to come to an agreement which will inform decision making’ (Gauvin 2009). It involves discussion, debate, and consideration of all information that is considered relevant. Multi-Criteria Analysis often employs this approach, although other approaches can use it as well.
Key strengths of this approach include:
- There is scope to involve both experts and community members to ensure that both perspectives are considered.
- The approach may be seen by stakeholders as being more transparent than the other approaches.
- There is an opportunity for participating non-experts to receive detailed information and to participate in discussion and debate about the issues. This means that the outputs are likely to be better informed and better considered than is possible in survey-based approaches.
- The approach is very flexible. All types of benefits and costs can be considered.
- It is possible to generate a large number of valuations relatively efficiently – certainly more cheaply than conducting non-market valuation surveys for each project.
But it also has some weaknesses:
- Participants may have vested interests or particular perspectives and may not reflect broader community interests or concerns.
- While the flexibility of the approach is an advantage up to a point, the lack of theoretical rigour can be a problem, resulting in project rankings that don’t actually reflect the participants’ own values. In other words, too much flexibility can be a problem, particularly if the process goes beyond just looking at values. For example, when it comes to ranking projects, participants should not be free to handle costs in any way other than by dividing benefits by costs (see PD236). Some things that people often choose to do in this space are just wrong (which is why I’m writing this series).
- If the output is a score, rather than a dollar value, the approach cannot tell us whether the benefits of a project would exceed its costs.
(c) Dollar values
Key strengths of this approach include:
- Of the three approaches, this one is likely to best reflect broad community attitudes. It is more independent and less at risk of reflecting the preferences of vested interest groups.
- It allows comparisons across completely different types of benefits.
- It is more rigorous – less ad hoc than scoring-based approaches.
- It allows us to determine whether the benefits of a project outweigh its costs.
But it also has some weaknesses:
- Respondents to non-market valuation studies may know very little about the things they are being asked to value.
- Conducting separate valuation studies for each project would be prohibitively expensive. Transferring benefit estimates from other similar projects can help to overcome this problem.
- The survey-based methods have been criticised by some economists for relying on hypothetical questions and for giving results that don’t seem plausible in some cases. While this debate is interesting, in practice the quality of information from these surveys is probably higher than some other information we need to include in the process. For example, information about the cause-and-effect relationship between management and environmental conditions is often very weak indeed.
Which is best?
Some people are quite definite in their preferences for one or another of these approaches, or particularly dislike one of them. In my view, it’s not a clear-cut decision. They each have pros and cons, and one’s choice of which to use may vary depending on the circumstances. The weaknesses that concern me most are: the inability of many Environmental Benefits Indexes to compare outcomes with and without the project; the excessive flexibility of some deliberative approaches, giving participants the flexibility to do dumb things; and the expense of doing comprehensive valuation surveys.
My advice is to weigh up the pros and cons and use whichever approach makes most sense for a particular program. My caution would be that this advice applies specifically to the part of the process that estimates values. For the other parts of the process, and for decisions about how to combine the various bits of information to inform decisions, see the other posts in this series.
A practical compromise
In developing INFFER (Pannell et al. 2012) we attempted to create an approach to valuation that draws on the combined strengths of the three approaches outlined above, while limiting their weaknesses. The approach we developed:
- Can use scientific information if it is available
- Recognises that the relevant benefit is a difference (with minus without the project)
- Can be elicited in a deliberative process involving both experts and community members
- Can be cheap and quick enough to be practical in cases where there are limited resources for estimating values, or where many valuations are needed in a short time
- Provides dollar values
- Can use results from non-market valuation surveys if available
Here is how it works. Define P’ as the physical condition of the environmental asset in good condition. For example, it could be an environmental condition of 100 in Figures 4 and 5 (PD238).
Now V(P’) is the value of the environmental asset at condition P’. It includes all the different types of values (market and non-market) that are relevant to this environmental asset. In Figures 4 and 5, if P’ = 100, V(P’) would be $1 million.
Finally, define W as the difference in values between P1 (physical condition with the project) and P0 (physical condition without the project), expressed as a proportion of V(P’):
W = [V(P1) – V(P0)] / V(P’)
Then we measure the project benefit as V(P’) × W. So V(P’) × W is equivalent to the correct measure of benefits, V(P1) – V(P0) (as outlined in PD238).
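As a worked illustration of this equivalence, here is a minimal Python sketch. The with-project and without-project values are hypothetical; V(P’) = $1 million follows the example from PD238:

```python
# Hypothetical worked example of the benefit measure V(P') x W.
V_P_prime = 1_000_000   # value of the asset in good condition, V(P')
V_P1 = 700_000          # value with the project, V(P1) (hypothetical)
V_P0 = 400_000          # value without the project, V(P0) (hypothetical)

# W: the with-minus-without difference, as a proportion of V(P')
W = (V_P1 - V_P0) / V_P_prime     # 0.3

benefit = V_P_prime * W           # 300,000
assert benefit == V_P1 - V_P0     # equivalent to the correct measure of benefits
```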
The advantage of re-organising the benefit measure into V(P’) and W is that, in my experience, it helps people think clearly and ask the right questions in a situation where they are not going to conduct a non-market valuation survey. V(P’) sets an upper bound for the benefits of the project – obviously, the value of the project can’t be more than the value of the environmental asset in good condition.
In INFFER, we ask users to score V(P’) relative to a set of examples – a table of well-known environmental assets with suggested V(P’) scores. Each point of the score is defined as being worth $20 million. This is often done in a group discussion environment, involving a variety of stakeholders.
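As a small sketch of the score-to-dollars conversion, assuming the $20 million per point convention described above (the example score and function name are hypothetical):

```python
# Convert a V(P') score to dollars at $20 million per point.
DOLLARS_PER_POINT = 20_000_000

def v_p_prime_dollars(score):
    """Dollar value of the asset in good condition, from its relative score."""
    return score * DOLLARS_PER_POINT

value = v_p_prime_dollars(2.5)   # a score of 2.5 implies V(P') = $50 million
```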
A risk with this (and other deliberative processes) is that people may provide values that are too high (e.g. see PD213). A process of reviewing assumptions and comparing them across projects is needed to reduce this risk.
Defining W as a proportion of V(P’) helps to highlight that the benefits of the project must be proportional to the effectiveness of the project, which is often missed when people develop their metric for ranking projects.
For example, suppose there are two alternative projects for Asset A. Project (i) would increase the asset value by a proportion of 0.3 (W = 0.3) and Project (ii) by a proportion of 0.6. If everything else is equal, Project (ii) would generate benefits that are twice as large as those from Project (i). The metric has to reflect that, and multiplying by W achieves it.
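The proportionality can be checked with a quick sketch; the $10 million asset value is an assumed figure for illustration:

```python
# Two hypothetical projects for the same asset, differing only in W.
V_P_prime = 10_000_000   # assumed value of Asset A in good condition, V(P')

benefit_i  = V_P_prime * 0.3   # Project (i):  W = 0.3
benefit_ii = V_P_prime * 0.6   # Project (ii): W = 0.6

# With everything else equal, Project (ii)'s benefit is twice Project (i)'s.
assert benefit_ii == 2 * benefit_i
```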
Finally, a mistake I’ve seen is to exclude any measure of values from the ranking process. One senior bureaucrat told me that she was opposed to including values because of the risk of generating controversy. At other times, people seem to simply overlook them. The consequence is that the organisation will tend to bias its funding towards less valuable projects, because it risks ranking projects that address less-valuable assets above projects that address more-valuable assets.
Gauvin, F.-P. (2009). What is a Deliberative Process? National Collaborating Centre for Healthy Public Policy, Quebec, http://www.ncchpp.ca/docs/DeliberativeDoc1_EN_pdf.pdf
Pannell, D.J., Roberts, A.M., Park, G., Alexander, J., Curatolo, A. and Marsh, S. (2012). Integrated assessment of public investment in land-use change to protect environmental assets in Australia, Land Use Policy 29(2): 377-387.
Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research 40(2): 126-133.