Monthly Archives: August 2013

252 – Ranking environmental projects 18: Simplifications

Episode 18 in this series on principles to follow when ranking environmental projects. It is about simplifications: the necessity of them in the ranking formula, the need for even greater simplification in some cases, and a couple of simplifications I’ve been making implicitly but haven’t previously mentioned. 

Throughout this series, I’ve struck a balance between theoretical accuracy and simplifications to make the process more practical and less costly. Clearly, this balance involves judgement. Others might judge that more or fewer simplifications are appropriate, or prefer different simplifications than the ones I’ve recommended. One thing they would have to agree on, though, is that simplifications are essential to make the system workable. The ones I’ve recommended are carefully chosen, on the basis that they are unlikely to have serious impacts on the total level of environmental benefits generated by a portfolio of projects. In some cases, the careful choosing I’ve done is based not just on subjective judgement, but on numerical analysis.

Even with the simplifications I’ve suggested, the process is still rather information-hungry. If dealing with a large number of potential projects, collecting all the information for all of the projects may be more costly than is warranted, especially if the level of funding available is small relative to the total cost of all potential projects. For example, I’ve worked with environmental bodies which had upwards of 500 potential projects to rank, but were likely to get funding for less than 5 per cent of them.

In this type of situation, it is justifiable to use an even more highly simplified approach initially to filter down the full list of 500 projects to a manageable number for more detailed (but still simplified) assessment. An approach I’ve found effective is to select a few of the most important variables (e.g. the importance or significance of the environmental assets affected; the likely technical effectiveness of management actions; the likely degree of cooperation with the project by those people or businesses whose behaviour would need to change). Each project is scored for each of these key variables on a simple three- or four-level scale (low, medium, high, or very high). Then one looks for projects with three scores of high or better. If that doesn’t provide a sufficient number of potential projects, loosen the criterion a bit: look for projects with two scores of high or better and one medium. Loosen or tighten as needed to get a workable number of projects to assess further. Projects that meet the criterion you end up settling on go through more detailed assessment using the BCR equation and the rest of the projects are put aside.
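The filtering rule described above can be sketched in a few lines of code. All of the project names and scores below are invented for illustration:

```python
# Coarse initial filter: score each project on a few key variables, then keep
# only projects meeting a "high or better" criterion, loosening if too few pass.
SCALE = {"low": 0, "medium": 1, "high": 2, "very high": 3}

projects = {
    "Wetland fencing":   {"significance": "high", "effectiveness": "very high", "cooperation": "medium"},
    "Riparian planting": {"significance": "high", "effectiveness": "high", "cooperation": "high"},
    "Weed control":      {"significance": "medium", "effectiveness": "low", "cooperation": "high"},
}

def passes(scores, min_high=3, min_medium=0):
    """True if at least `min_high` scores are high or better, and a further
    `min_medium` scores are medium or better."""
    vals = sorted((SCALE[s] for s in scores.values()), reverse=True)
    if len(vals) < min_high + min_medium:
        return False
    top = vals[:min_high]
    rest = vals[min_high:min_high + min_medium]
    return all(v >= SCALE["high"] for v in top) and all(v >= SCALE["medium"] for v in rest)

# Strict criterion: three scores of high or better.
shortlist = [name for name, s in projects.items() if passes(s)]
# Too few? Loosen: two scores of high or better, plus one medium.
if len(shortlist) < 2:
    shortlist = [name for name, s in projects.items() if passes(s, min_high=2, min_medium=1)]
```

Here the strict criterion admits only one project, so the loosened criterion is applied; only the shortlisted projects would then go through the detailed BCR assessment.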

Clearly, with such a simplified process, there is a chance that good projects will be rejected or poor projects will be let through. As long as enough good projects get through the initial filter, missing some good ones is not likely to be a big problem. And as long as the projects that pass through the filter are subjected to the more detailed assessment, letting poor projects through is not a problem at all (apart from wasting some time) because they will be rejected following the detailed analysis.

Now let’s come back to the simplifications included in the detailed BCR calculation. Most of them have been spelled out in previous posts. Key simplifications that I judge to be reasonable in most cases include:

  • Assuming that environmental benefits are linearly related to the proportion of people who adopt the desired new practices or behaviours;
  • Representing project risks as binary variables: success or complete failure;
  • Having only one time lag for all benefits from the project;
  • Approximating the private benefits and voluntary private costs as zero; and
  • Treating the project costs, maintenance costs and compliance costs as if there were only one combined constraint on their availability.

There are also a few other simplifications that I haven’t mentioned so far, but which are implicit in the equations I’ve presented in earlier posts. I’ve had the first two of these pointed out by economists with eyes for theoretical detail, and the third by a colleague with a particular interest in this issue.

Firstly, I’ve been assuming that the value of an environmental asset does not depend on the conditions of other related assets. In reality, the benefits of project A could depend on whether project B is funded and, if so, there is no definitive ranking of individual projects. In practice, the error resulting from my simplifying assumption is likely to be small enough to ignore. Pretty much everybody who ranks environmental projects makes this assumption, and ignores any error. But if the issue is judged to be important enough to be worth accounting for, you could define a project that combines the activities of projects A and B into one project and compare it with project A and project B individually.

Secondly, if one assumes that projects are defined at a particular scale and cannot be scaled up or down, then ranking using the BCR may not be accurate because it doesn’t account for the risk of leaving some of the funds unspent. [This is known as the “knapsack problem”.] That’s true, but unless funding is sufficient for only a small number of projects, the loss from ranking using the BCR is likely to be very small. For example, Hajkowicz et al. (2007) estimated losses of between 0.3% and 3% in a particular program. And if you abandon the normally unrealistic assumption that the scale of each project is fixed, then the losses disappear almost entirely. When you factor in the transaction costs of building, solving and explaining a mathematical programming model to solve a knapsack problem properly, you would always rank by BCR.
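To see the issue concretely, here is a small sketch (with made-up costs and benefits) comparing greedy ranking by BCR against an exact knapsack solution for a fixed budget. With only three hypothetical projects the gap is visible; with many projects it is typically far smaller, as in the 0.3–3% losses estimated by Hajkowicz et al. (2007):

```python
from itertools import combinations

# (name, cost, benefit) for three hypothetical projects, and a fixed budget.
projects = [("A", 70, 280), ("B", 50, 150), ("C", 50, 148)]
budget = 100

def greedy_bcr(projects, budget):
    """Fund projects in descending BCR (benefit/cost) order until the budget runs out."""
    spent, benefit, chosen = 0, 0, []
    for name, cost, ben in sorted(projects, key=lambda p: p[2] / p[1], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
            benefit += ben
    return benefit, chosen

def knapsack_exact(projects, budget):
    """Brute-force optimum: workable for a handful of projects, hopeless for 500."""
    best_benefit, best_set = 0, []
    for r in range(len(projects) + 1):
        for combo in combinations(projects, r):
            if sum(c for _, c, _ in combo) <= budget:
                ben = sum(b for _, _, b in combo)
                if ben > best_benefit:
                    best_benefit, best_set = ben, [n for n, _, _ in combo]
    return best_benefit, best_set

greedy_benefit, _ = greedy_bcr(projects, budget)       # funds A only: benefit 280
optimal_benefit, _ = knapsack_exact(projects, budget)  # funds B + C: benefit 298
loss = 1 - greedy_benefit / optimal_benefit            # about 6% in this contrived case
```

The loss arises only because funding project A leaves part of the budget unusable; allowing project scale to vary would let the greedy approach close the gap.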

Thirdly, the equations I’ve presented only measure benefits arising directly from the project. Graham Marshall pointed out that participation in a current project might also generate benefits for future projects by building mutual trust and networks amongst the participants (i.e., “social capital”). He even experimented with simple ways to estimate this benefit so that it could be added to the equation. Unfortunately, the feedback from participants in Graham’s experiments was that accounting for this benefit added significantly to the complexity of the process. Furthermore, my judgement is that, while these are real benefits, they are probably not usually large enough or different enough between projects to make a notable difference to project rankings. For that combination of reasons, I haven’t included them.

Further reading

Hajkowicz, S., Higgins, A., Williams, K., Faith, D.P. and Burton, M. (2007). Optimisation and the selection of conservation contracts, Australian Journal of Agricultural and Resource Economics 51(1), 39-56. Journal web page ♦ IDEAS page

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming). Journal web page ♦ Pre-publication version at IDEAS

251 – Ranking environmental projects 17: Uncertainty

Episode 17 in this series on principles to follow when ranking environmental projects. It is about uncertainty, how to account for it, and what to do about it. 

Uncertainty and knowledge gaps are unavoidable realities when evaluating and ranking environmental projects. The available information is almost always inadequate for confident decision making. Key information gaps often include: the cause-and-effect relationship between management actions and environmental outcomes; the likely behavioural responses of people to the project; and the environmental values resulting from the project – what is important or valuable about the environmental outcomes and how important or valuable are they?

It has been argued to me that uncertainty about the data and objectives is generally so high that it is not worth worrying too much about the procedure used to prioritise projects. Any procedure will do. If that were really true, no analysis could help with decision making – we might as well just draw projects out of a hat.


In fact, while it’s true that uncertainty is usually high, it’s not true that the ranking procedure doesn’t matter, particularly when you consider the outcomes across a portfolio of projects. Even given uncertain data, the overall environmental benefits of a program can be improved substantially by a better decision process. Indeed, environmental benefits appear to be more sensitive to the decision process than to the uncertainty. For example, I have found that there is almost no benefit in reducing data uncertainty if the improved data are used in a poor decision process (Pannell 2009). On the other hand, even if the data are uncertain, there are worthwhile benefits to be had from improving the decision process.

This is certainly not to say that uncertainty should be ignored. Once the decision process is fixed up, uncertainty can make an important difference to the delivery of environmental benefits.

There are economic techniques to give negative weight to uncertainty when ranking projects. I’ve used them and I think they are great for research purposes. However, I don’t recommend them for practical project-ranking systems. They aren’t simple to do properly, so they add cost and potentially confusion.

Instead of representing uncertainty explicitly in the ranking equation, I suggest a simpler and more intuitive approach: rating the level of uncertainty for each project; and considering those ratings subjectively when ranking projects (along with information about the Benefit: Cost Ratio, and other relevant considerations).

Apart from its effect on project rankings, another aspect of uncertainty is the question of what, if anything, the organisation should do to reduce it. In my view, it is good for project managers to be explicit about the uncertainty they face, and what they plan to do about it (even if the plan is to do nothing). Simple and practical steps could be to: record significant knowledge gaps; identify the knowledge gaps that matter most through sensitivity analysis (Pannell, 1997); and have an explicit strategy for responding to key knowledge gaps as part of the project, potentially including new research or analysis.
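A minimal sketch of the sensitivity-analysis step: vary each uncertain input across a plausible range and check whether the project’s ranking against a rival project could flip. The toy BCR function, the variables, their ranges and the rival’s BCR are all invented for illustration:

```python
def bcr(value, effectiveness, adoption, cost):
    """Toy BCR: asset value x technical effectiveness x adoption, over cost."""
    return value * effectiveness * adoption / cost

# Best-guess inputs, and plausible ranges for the uncertain ones (all invented).
base = {"value": 100, "effectiveness": 0.6, "adoption": 0.5, "cost": 12}
plausible = {"value": (60, 140), "effectiveness": (0.4, 0.8), "adoption": (0.3, 0.7)}

rival_bcr = 1.6  # BCR of the best competing project

# One-at-a-time sensitivity: which knowledge gaps could actually change the ranking?
for var, (lo, hi) in plausible.items():
    lo_bcr = bcr(**dict(base, **{var: lo}))
    hi_bcr = bcr(**dict(base, **{var: hi}))
    flips = min(lo_bcr, hi_bcr) < rival_bcr < max(lo_bcr, hi_bcr)
    print(f"{var}: BCR {lo_bcr:.2f} to {hi_bcr:.2f} -> "
          f"{'worth investigating' if flips else 'ranking robust'}")
```

Only the knowledge gaps whose ranges straddle the rival’s BCR (asset value and adoption, in this toy example) would be worth closing with new research or analysis.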

In practice, there is a tendency for environmental decision makers to ignore uncertainty when ranking projects, and to proceed on the basis of best-guess information, even if the best is really poor. In support of that approach, it is often argued that we should not allow lack of knowledge to hold up environmental action, because delays may result in damage that is costly or impossible to reverse. That’s reasonable up to a point, but in my view we are often too cavalier about proceeding with projects when we really have little knowledge of whether they are worthwhile. It may be at the expense of other projects in which we have much more confidence, even though they currently appear to have lower BCRs. It’s not just a question of proceeding with a project or not proceeding – it’s a question of which project to proceed with, considering the uncertainty, environmental benefits and costs for each project. When you realise this, the argument based on not letting uncertainty stand in the way of action is rather diminished.

In some cases, a sensible strategy is to start with a detailed feasibility study or a pilot study, with the intention of learning information that will help with subsequent decision making about whether a full-scale project is worthwhile, and how a full-scale project can best be designed and implemented. A related idea is “active adaptive management”, which involves learning from experience in a directed and systematic way. Implementation efforts get under way, but they are done in a way which is focused on learning.

Particularly for larger projects, my strong view is that one of these approaches should be used. I believe that they have great potential to increase the environmental benefits that are generated. They imply that the initial ranking process should not produce decisions that are set in stone. Decisions may need to be altered once more information is collected. We should be prepared to abandon projects if it turns out that they are not as good as we initially thought, rather than throwing good money after bad.

As far as I’m aware, the sorts of strategies I’m suggesting here are almost never used in real-world environmental programs. Managers are rarely explicit about the uncertainties they face, there usually isn’t a plan for addressing uncertainty, projects are funded despite profound ignorance about crucial aspects of them, proper feasibility assessments are rarely done, active adaptive management is almost never used, and ineffective projects that have been started are almost never curtailed so that resources can be redirected to better ones. In these respects, the environment sector is dramatically different from the business world, where people seem to be much more concerned about whether their investments will actually achieve the desired outcomes. Perhaps the difference is partly because businesses are spending their own money and stand to be the direct beneficiaries if the investment is successful. Perhaps it’s partly about the nature of public policy and politics. Whatever the reason, I think there is an enormous missed opportunity here to improve environmental outcomes, even without any increase in funding.

Further reading

Pannell, D.J. (1997). Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16(2), 139-152. On-line version ♦ IDEAS page

Pannell, D.J. (2009). The cost of errors in prioritising projects, INFFER Working Paper 0903, University of Western Australia. Full paper (350K)

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming). Journal web page ♦ Pre-publication version at IDEAS

250 – Ranking environmental projects 16: Other cost issues

Episode 16 in this series on principles to follow when ranking environmental projects. It covers a couple of issues related to costs that didn’t fit in the previous posts. 

Sometimes people criticise the use of the Benefit: Cost Ratio (BCR) to rank projects, on the basis that it can be manipulated to some extent by moving costs between the denominator and the numerator (e.g. Office of Best Practice Regulation, 2009; Jenkins et al., 2011). For example, suppose you have already calculated an initial BCR for a project, but now you find that there is an additional cost that should be included. You could do one of two things with that cost: you could subtract it from the numerator, resulting in smaller benefits in the BCR, or you could add it to the denominator, resulting in larger costs in the BCR. If benefits exceed costs even after accounting for the new cost, then subtracting the new cost from the numerator would result in a larger BCR than adding it to the denominator.

This criticism of using the BCR for ranking projects reveals a lack of understanding of the logic of the formula. To rank projects correctly, the costs that go in the denominator are the costs that would be drawn from a limited pool of funds. Any costs that are not drawn from a limited pool should, in principle, be subtracted from the numerator, rather than being added to the denominator. It is not correct to move costs arbitrarily between the two. There is a clear logic about which costs go where. It’s surprising how often this misconception is repeated, even by economists.
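A quick numerical illustration of both the apparent manipulation and the resolving rule (all figures hypothetical):

```python
# A newly identified cost of 10, against benefits of 100 and existing
# pool-funded costs of 20.
benefits, pool_cost, new_cost = 100.0, 20.0, 10.0

bcr_if_subtracted = (benefits - new_cost) / pool_cost  # 90 / 20 = 4.5
bcr_if_added = benefits / (pool_cost + new_cost)       # 100 / 30 = 3.33...

# The ratio differs, but the placement is not a free choice: a cost drawn from
# the limited funding pool goes in the denominator; a cost borne outside that
# pool is subtracted from the numerator.
```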

The second issue relates to the sharing of costs between different benefits. In PD243 I talked about how to assess projects that generate multiple benefits. A related issue I struck once was in a case where the organisation wanted to rank potential investments in a number of threatened species individually, even though they knew that the actions needed to protect one species would help to protect others as well. I can see why they would want to do this – it would be tidy to be able to create a ranked list of all the species.

The approach I suggested to them was to define S as the share of total costs that is attributable to the current species, and to base it on the share of benefits. You would add up all the benefits for different species resulting from the actions taken to protect this species, and then ask what share of those benefits belongs to this species. That share, which is S, gets multiplied by total costs in the BCR for this species.
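In code, the suggested cost-sharing rule looks like this (species and figures are hypothetical):

```python
# Hypothetical case: actions costing 50 are aimed at a target species but
# also benefit two other species.
total_cost = 50.0
benefits = {"target species": 60.0, "species 2": 25.0, "species 3": 15.0}

# S = share of total benefits belonging to the species being ranked.
S = benefits["target species"] / sum(benefits.values())   # 0.6
bcr_target = benefits["target species"] / (S * total_cost)
```

Each species would be assessed this way under its own project’s actions, giving the ranked species list the organisation wanted, at the cost of the approximation noted below.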

Generally, I wouldn’t recommend this approach unless it’s important to create a ranked list of each individual environmental asset. For that purpose, it is probably the best that can be done, but it’s still a somewhat crude approximation. It’s better to rank projects rather than assets (see PD235) and if a project generates multiple benefits, so be it – use one of the approaches in PD243.

Further reading

Jenkins, G.P., Kuo, C.-Y. and Harberger, A.C. (2011). Discounting and alternative investment criteria, Chapter 4 in Cost-Benefit Analysis For Investment Decisions. IDEAS page for this paper.

Office of Best Practice Regulation (2009). Best Practice Regulation Guidance Note, Decision rules in regulatory cost-benefit analysis, Australian Government, Department of Finance and Deregulation.

249 – Ranking environmental projects 15: Maintenance costs

Episode 15 in this series on principles to follow when ranking environmental projects. It is about how to account for the maintenance costs that are required if the benefits generated by the initial project are to be maintained in the long run. 

Often, environmental projects need ongoing funding in the long term to preserve or maintain the benefits generated by an initial project. For example, funds may be needed to maintain, repair, or replace equipment or structures; to pay the wages of people responsible for ongoing education, training or enforcement; or for continuing payments to people to ensure ongoing adoption of improved environmental practices. These costs might arise for a few years beyond the end of the initial project, or they might last more-or-less forever.

The required level of maintenance funding can be substantial, potentially exceeding the cost of the initial project, so it’s an important factor that needs to be accounted for when ranking projects. However, it usually isn’t! I’ve never seen any system for ranking environmental projects that does account for it, other than ones I’ve helped develop.

How should it be included? First, define M as the level of maintenance funding that would be required to fully maintain the project’s benefits in the long term. Because maintenance costs tend to be required for a long time, they need to be discounted before they are added up. If M3, for example, is the maintenance cost in year 3, then the total discounted maintenance cost is given by

M = M1/(1 + r) + M2/(1 + r)^2 + M3/(1 + r)^3 + … + MT/(1 + r)^T

where r is the real discount rate and T is the length of the time frame used for the calculations.

If maintenance costs have to be continued into the indefinite future, the question arises, how long should the time frame be for calculating M? There is no clear-cut answer to this. The length of time used for the calculations needs to be fairly long, but if it’s extremely long, discounting means that maintenance costs in the distant future are quite insignificant in the present. Also, uncertainty about what might happen in the distant future is very high, so one might judge that it’s not worth factoring in benefits or costs that may never arise in reality. My suggestion is to use a time frame of around 25 years, although I couldn’t argue strongly against a somewhat shorter or significantly longer time frame.
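As a sketch, the discounted sum can be computed directly. The figures here are hypothetical: a constant annual maintenance cost of $10,000, a 5% real discount rate, and the 25-year time frame suggested above:

```python
# Total discounted maintenance cost: the sum of M_t / (1 + r)^t for t = 1..T.
def discounted_maintenance(annual_cost, r=0.05, years=25):
    """Present value of a constant maintenance cost stream."""
    return sum(annual_cost / (1 + r) ** t for t in range(1, years + 1))

M = discounted_maintenance(10_000)  # about 141,000 for $10,000/year
```

Extending the time frame from 25 to, say, 100 years adds relatively little, because discounting makes distant-future costs insignificant in the present.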

So with discounted maintenance costs included, the formula for the BCR becomes

BCR = (B × R) / (C + M + E)

where B is the discounted environmental benefit of the project as built up in earlier posts (incorporating [V(P1) – V(P0)], W and the time lag), R is the combined risk factor, C is the cost of the initial project, E is the compliance cost, and M is the total discounted maintenance cost from above.
There is one final refinement to make to the BCR formula. I included several risks in the benefits part of the equation (PD241), summarised into one risk, R, in the above equation. Of the four risks I included, one of them may also have an impact on the cost part of the equation. This is Rf, the probability that required maintenance funding will not be forthcoming. “Required” in this context means that most project benefits will be lost in the absence of maintenance funding.

Failing to receive maintenance funding has impacts on two of the cost variables, M and probably E. It’s obvious that if no maintenance funding is received, M would be zero. Therefore, we should weight M by the probability that maintenance funding will be received, (1 – Rf).

Compliance costs might occur during the initial project phase, or during the maintenance phase, or both. The component that would occur in the maintenance phase should also be weighted by the probability that maintenance funding will be received, because if it isn’t received, the project will presumably collapse, and there will be no enforcement of compliance. In the version of the equation below, for simplicity I’ve assumed that all compliance costs occur in the maintenance phase. The equation also includes, for the first time since PD241, all of the risks shown separately.

BCR = (B × R′ × (1 – Rf)) / (C + (1 – Rf) × (M + E))

where B is the discounted environmental benefit before risks are applied, R′ is the product of the (1 – risk) factors for the three risks from PD241 other than Rf, C is the cost of the initial project, E is the compliance cost, and M is the total discounted maintenance cost.
[Rf also includes the probability that a partner organisation will not deliver essential resources that it agreed to provide, resulting in project failure. I’m assuming that a result of that would be that costs E and M would not be incurred.]
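The way Rf enters both sides of the ratio can be sketched as follows. All figures are hypothetical, and for clarity the other risks from PD241 are omitted:

```python
# Symbols as in the text: C = initial project cost, M = discounted maintenance
# cost, E = compliance cost, Rf = probability that required maintenance
# funding will NOT be forthcoming.
C, M, E = 200.0, 140.0, 30.0
Rf = 0.3
benefit_if_maintained = 900.0  # discounted benefit, assuming maintenance funding arrives

# M and E are only incurred if maintenance funding arrives, so both are
# weighted by (1 - Rf); Rf also appears among the risks on the benefit side.
expected_costs = C + (1 - Rf) * (M + E)
expected_benefit = (1 - Rf) * benefit_if_maintained
bcr = expected_benefit / expected_costs
```

Note that (1 – Rf) scales down both the benefits and the maintenance-phase costs, so ignoring it would overstate costs more than benefits for projects with shaky maintenance funding.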

Up to now I’ve used M to represent the full required level of maintenance cost. What if some maintenance funding is likely, but it’s expected to be insufficient to fully maintain project benefits in the long term? You might want to adjust three variables. Firstly, you would reduce M to the expected level of funding. Secondly, you might want to reduce Rf to reflect the fact that obtaining the lower level of maintenance funding is easier. And thirdly, the benefits should be scaled down to some degree (by reducing the estimate of W, or reducing [V(P1) – V(P0)]). How much they should be scaled down depends on how sensitive the benefits are to a reduction in maintenance funding.

For a good project, providing sufficient maintenance funding is likely to increase benefits by more than it increases costs. If it doesn’t, then it indicates that the proposed investment in maintenance is excessive. 

The equation above is the final new version I’ll show in this series (although there are several posts still to come). This version provides a comprehensive, logical, theoretically respectable, and practical equation for ranking projects. It embodies quite a few simplifications, but none that are likely to have more than a minor adverse impact on the ranking results. It avoids a number of other simplifications (and errors) that are likely to have very serious impacts on the rankings.

I’ll come back and summarise the formula and its components and rationale in the last post of the series. Before then, there are a couple more issues related to costs to cover, and then some high-level issues to discuss, including uncertainty, the use of simplifications, and key mistakes to avoid.

Further reading

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming). Journal web page ♦ Pre-publication version at IDEAS