252 – Ranking environmental projects 18: Simplifications
Episode 18 in this series on principles to follow when ranking environmental projects. It is about simplifications: the necessity of them in the ranking formula, the need for even greater simplification in some cases, and a couple of simplifications I’ve been making implicitly but haven’t previously mentioned.
Throughout this series, I’ve struck a balance between theoretical accuracy and simplifications to make the process more practical and less costly. Clearly, this balance involves judgement. Others might judge that more or fewer simplifications are appropriate, or prefer different simplifications than the ones I’ve recommended. One thing they would have to agree on, though, is that simplifications are essential to make the system workable. The ones I’ve recommended are carefully chosen, on the basis that they are unlikely to have serious impacts on the total level of environmental benefits generated by a portfolio of projects. In some cases, the careful choosing I’ve done is based not just on subjective judgement, but on numerical analysis.
Even with the simplifications I’ve suggested, the process is still rather information hungry. If dealing with a large number of potential projects, collecting all the information for all of the projects may be more costly than is warranted, especially if the level of funding available is small relative to the total cost of all potential projects. For example, I’ve worked with environmental bodies which had upwards of 500 potential projects to rank, but were likely to get funding for less than 5 per cent of them.
In this type of situation, it is justifiable to use an even more highly simplified approach initially to filter down the full list of 500 projects to a manageable number for more detailed (but still simplified) assessment. An approach I’ve found effective is to select a few of the most important variables (e.g. the importance or significance of the environmental assets affected; the likely technical effectiveness of management actions; the likely degree of cooperation with the project by those people or businesses whose behaviour would need to change). Each project is scored for each of these key variables on a simple three- or four-level scale (low, medium, high, or very high). Then one looks for projects with three scores of high or better. If that doesn’t provide a sufficient number of potential projects, loosen the criterion a bit: look for projects with two scores of high or better and one medium. Loosen or tighten as needed to get a workable number of projects to assess further. Projects that meet the criterion you end up settling on go through more detailed assessment using the BCR equation and the rest of the projects are put aside.
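To make the filtering rule concrete, here is a minimal sketch of how it could be coded. The project names, the choice of three scoring variables, and all the scores are purely hypothetical — the point is only to show the mechanics of tightening or loosening the criterion.

```python
# Map the qualitative scale onto numbers so scores can be compared
LEVELS = {"low": 0, "medium": 1, "high": 2, "very high": 3}

# Hypothetical projects, each scored on three key variables
# (e.g. asset significance, technical effectiveness, likely cooperation)
projects = {
    "Wetland fencing":     ("high", "very high", "high"),
    "Riparian replanting": ("medium", "high", "high"),
    "Weed control":        ("low", "medium", "high"),
}

def passes(scores, n_high, n_medium=0):
    """True if at least n_high scores are high or better, and the
    remaining requirement is met by mediums (or better)."""
    highs = sum(1 for s in scores if LEVELS[s] >= LEVELS["high"])
    mediums = sum(1 for s in scores if LEVELS[s] == LEVELS["medium"])
    return highs >= n_high and highs + mediums >= n_high + n_medium

# Strict criterion: three scores of high or better
shortlist = [p for p, s in projects.items() if passes(s, 3)]

# Loosened criterion: two highs (or better) plus one medium
loosened = [p for p, s in projects.items() if passes(s, 2, 1)]
```

With these invented scores, only the first project survives the strict criterion, and loosening it lets the second one through as well — exactly the kind of adjustment described above.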
Clearly, with such a simplified process, there is a chance that good projects will be rejected or poor projects will be let through. As long as enough good projects get through the initial filter, missing some good ones is not likely to be a big problem. And as long as the projects that pass through the filter are subjected to the more detailed assessment, letting poor projects through is not a problem at all (apart from wasting some time) because they will be rejected following the detailed analysis.
Now let’s come back to the simplifications included in the detailed BCR calculation. Most of them have been spelled out in previous posts. Key simplifications that I judge to be reasonable in most cases include:
- Assuming that environmental benefits are linearly related to the proportion of people who adopt the desired new practices or behaviours;
- Representing project risks as binary variables: success or complete failure;
- Having only one time lag for all benefits from the project;
- Approximating the private benefits and voluntary private costs as zero; and
- Treating the project costs, maintenance costs and compliance costs as if there was only one combined constraint on their availability.
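As a minimal sketch of how the first three of these simplifications enter the benefit side of the calculation — the function, its variable names and the figures below are illustrative only, not the actual BCR formula from earlier posts:

```python
def expected_pv_benefit(asset_value, adoption, p_success, lag_years, discount_rate):
    """Present value of expected benefits under three simplifications:
    - benefits are linear in the proportion adopting the new practices,
    - risk is binary (probability p_success of success, else zero benefit),
    - a single time lag applies to all benefits."""
    return asset_value * adoption * p_success / (1 + discount_rate) ** lag_years

# e.g. a $1m improvement in asset value, 60% adoption, 80% chance of
# project success, all benefits lagged 10 years, discounted at 5%
b = expected_pv_benefit(1_000_000, 0.6, 0.8, 10, 0.05)
```

Each simplification collapses what could be a complicated function (an adoption curve, a distribution of partial outcomes, a stream of benefits over time) into a single multiplicative term.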
There are also a few other simplifications that I haven’t mentioned so far, but which are implicit in the equations I’ve presented in earlier posts. I’ve had the first two of these pointed out by economists with eyes for theoretical detail, and the third by a colleague with a particular interest in this issue.
Firstly, I’ve been assuming that the value of an environmental asset does not depend on the condition of other related assets. In reality, the benefits of project A could depend on whether project B is funded, and if they do, there is no definitive ranking of individual projects. In practice, the error resulting from my simplifying assumption is likely to be small enough to ignore. Pretty much everybody who ranks environmental projects makes this assumption and ignores any error. But if the issue is judged to be important enough to be worth accounting for, you could define a new project that combines the activities of projects A and B, and compare it with project A and project B individually.
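The workaround of defining a combined project is straightforward to apply. In this sketch (all benefit and cost figures invented), the combined project's benefit exceeds the sum of its parts because the two projects interact, and ranking it alongside the individual projects reveals that:

```python
# Hypothetical benefits and costs. Because A and B interact, the
# combined project's benefit (18) exceeds the sum of the parts (8 + 5).
projects = {
    "A":       {"benefit": 8.0,  "cost": 2.0},
    "B":       {"benefit": 5.0,  "cost": 2.0},
    "A and B": {"benefit": 18.0, "cost": 4.0},
}

bcr = {name: p["benefit"] / p["cost"] for name, p in projects.items()}
ranked = sorted(bcr, key=bcr.get, reverse=True)  # combined project ranks first
```

If the combined project ranks above both individual projects, the interdependence is worth capturing; if not, the individual definitions were adequate.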
Secondly, if one assumes that projects are defined at a particular scale and cannot be scaled up or down, then ranking using the BCR may not be exactly optimal, because it doesn’t account for the risk of leaving some of the funds unspent. [This is known as the “knapsack problem”.] That’s true, but unless funding is sufficient for only a small number of projects, the loss from ranking by BCR is likely to be very small. For example, Hajkowicz et al. (2007) estimated losses of between 0.3% and 3% in a particular program. And if you abandon the normally unrealistic assumption that the scale of each project is fixed, the losses almost entirely disappear. Once you factor in the transaction costs of building, solving and explaining a mathematical programming model to solve the knapsack problem properly, ranking by BCR is always the better option.
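The size of the loss is easy to see in a toy example. Below, four hypothetical projects (all figures invented) are funded greedily in descending BCR order, and the result is compared with the true knapsack optimum found by exhaustive search. In this particular example the greedy loss works out at about 3%, at the top of the range Hajkowicz et al. (2007) reported:

```python
from itertools import combinations

# Hypothetical projects: (name, benefit, cost), with a fixed budget
projects = [("A", 50, 10), ("B", 44, 10), ("C", 43, 10), ("D", 39, 9)]
budget = 30

# Greedy: fund projects in descending BCR order while the budget lasts
greedy_benefit, spent = 0, 0
for name, b, c in sorted(projects, key=lambda p: p[1] / p[2], reverse=True):
    if spent + c <= budget:
        greedy_benefit += b
        spent += c

# Exhaustive search: the true optimum of this small knapsack problem
best = 0
for r in range(1, len(projects) + 1):
    for combo in combinations(projects, r):
        if sum(c for _, _, c in combo) <= budget:
            best = max(best, sum(b for _, b, _ in combo))

loss = (best - greedy_benefit) / best  # about 3% in this example
```

With realistic numbers of projects, or with projects that can be partially funded or rescaled, the gap shrinks further — which is the basis for the argument above.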
Thirdly, the equations I’ve presented only measure benefits arising directly from the project. Graham Marshall pointed out that participation in a current project might also generate benefits for future projects by building mutual trust and networks amongst the participants (i.e., “social capital”). He even experimented with simple ways to estimate this benefit so that it could be added to the equation. Unfortunately, the feedback from participants in Graham’s experiments was that accounting for this benefit added significantly to the complexity of the process. Furthermore, my judgement is that, while these are real benefits, they are probably not usually large enough or different enough between projects to make a notable difference to project rankings. For that combination of reasons, I haven’t included them.
Hajkowicz, S., Higgins, A., Williams, K., Faith, D.P. and Burton, M. (2007). Optimisation and the selection of conservation contracts, Australian Journal of Agricultural and Resource Economics 51(1), 39-56.
Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming).