Yearly Archives: 2013

254 – Ranking environmental projects 20: Summary

Episode 20, the last in this series on principles to follow when ranking environmental projects. It provides a brief summary of the whole series. 

You can obtain a PDF file with all 20 episodes of this series integrated into one document here.

Around the world, thousands of different quantitative systems have been used to rank environmental projects for funding. It seems that every environmental body creates anew or re-uses at least several such systems each year. Judging from the examples I have examined, most of the systems in use are very poor. The performance of many of them is not much better than choosing projects at random. If only people would be more logical and thorough in their approach to ranking environmental projects! The potential to reduce wastage and improve environmental outcomes is enormous. That’s why I wrote this series.

There are many ways that you can go wrong when putting together a formula to rank projects, and unfortunately the quality of the results is quite sensitive to some of the common errors. Common important mistakes include: weighting and adding variables that should be multiplied; messing up the comparison of outcomes with versus without the project; omitting key benefits variables; ignoring costs; and measuring activity instead of environmental outcomes.

Fortunately, though, it’s not hard to do a pretty good job of project ranking. A bit of theory, some simple logic and a dose of common sense and judgment lead to a set of specific guidelines that are presented in this series. The essential points are as follows.

  1. The core criterion for ranking projects is value for money: a measure of project benefits divided by project-related costs. This is the criterion into which all the variables feed. It’s how you pull everything together to maximise environmental outcomes.
  2. You should rank specific projects, rather than environmental assets. You cannot specify numbers for some of the key variables in the ranking formula without having in mind the particular interventions that will be used.
  3. There are always many different ways of managing an environmental asset, and they can vary greatly in value for money. Therefore, it can be worth evaluating more than one project per asset, especially for large, important environmental assets.
  4. Benefits of a project should be estimated as a difference: with versus without the project, not before versus after the project.
  5. Weak thinking about the “without” scenario for environmental projects is a common failing, sometimes leading to exaggerated estimates of the benefits.
  6. There are two parts to a project’s potential benefits: a change in the physical condition of the environment, and a resulting change in the values generated by the environment (in other words, the value of the change in environmental services).
  7. Those potential benefits usually need to be scaled down to reflect: (a) less than 100% cooperation or compliance by private citizens or other organisations; (b) a variety of project risks; and (c) the time lag between implementing the project and benefits being generated, combined with the cumulative cost of interest on up-front costs (i.e. “discounting” to bring future benefits back to the present).
  8. If in doubt, multiply. That’s a way of saying that benefits tend to be proportional to the variables we’ve talked about (or to one minus risk), and the way to reflect this in the formula is to multiply by the variables, rather than weighting and adding them. Don’t take this too literally, however. You can mess up by multiplying inappropriately too. 
  9. Weighting and adding is relevant only to the values part of the benefits equation (when there are multiple benefits from a project), not to any other part.
  10. Don’t include private benefits as a benefit or voluntary private costs as a cost, but do include involuntary private costs as a cost.
  11. Other costs to include are project cash costs, project in-kind costs, and maintenance costs (after the project is finished). Costs get added up, rather than multiplied.
  12. Uncertainty about project benefits is usually high and should not be ignored. The degree of uncertainty about each project should be considered, at least qualitatively, when projects are being ranked. Also, decisions about projects should not be set in stone, but modified over time as experience and better information is accumulated. Strategies to reduce uncertainty over time should be built into projects (e.g. feasibility assessments, active adaptive management).
  13. Where the cost of all projects that are in contention greatly exceeds the total budget, it is wise and cost-effective to run a simple initial filter over projects to select a smaller number for more detailed assessment. It’s OK to eliminate some projects from contention based on a simple analysis provided that projects are not accepted for funding without being subjected to a more detailed analysis.

There are a number of simplifications in the above advice. Simplifications are essential to make the system workable, but care is needed when selecting which simplifications to use.

In summary, the content and structure of the ranking formula really matters. A lot. A logical and practical formula to use is:

BCR = [V(P’) × W × A × (1 − Rt) × (1 − Rs) × (1 − Rf) × (1 − Rm) / (1 + r)^L] / [C + K + E + (1 − Rf) × M]

where:

BCR is the Benefit: Cost Ratio,

V(P’) is the value of the environmental asset at benchmark condition P’,

W is the difference in values between P1 (physical condition with the project) and P0 (physical condition without the project) as a proportion of V(P’),

A is the level of adoption/compliance as a proportion of the level needed to achieve the project’s goal,

Rt, Rs, Rf and Rm are the probabilities of the project failing due to technical, socio-political, financial and management risks, respectively,

L is the lag time in years until most benefits of the project are generated,

r is the annual discount rate,

C is the total project cash costs,

K is the total project in-kind costs,

E is total discounted compliance costs, and

M is total discounted maintenance costs.

V can be measured in dollars, or in some other unit that makes sense for the types of projects being ranked. The advantages of using dollars are that it allows you to (a) compare value for money for projects that address completely different types of environmental issues (e.g. river water quality versus threatened species) and (b) assess whether a project’s overall expected benefits exceed its total costs.

For some projects, it works better to calculate potential benefits in a different way: [V(P1) – V(P0)] rather than V(P’) × W. They are equivalent but involve different thought processes.

A simplification that might appeal is to combine all four risks into one overall risk, R. If you do that, also drop ‘× (1 – Rf)’ from the denominator (because you no longer have a separate number for Rf). This simplification makes the formula look a bit less daunting, but it probably doesn’t really save you any work, because you should still consider all four types of risk when coming up with values for R.

This formula works where there is a single type of benefit from a project, or where the V scores for multiple benefits have already been converted into a common currency, such as dollars, and added up. If a project has multiple benefits and you want to account for them individually, replace V(P’) by the weighted sum of the values for each benefit type. For example, if there are three types of benefits, use [z1 × V1(P’) + z2 × V2(P’) + z3 × V3(P’)], where the z’s are the weights. I’m assuming here that the other benefit variables (W, A, R and L) are the same for each benefit type. If that’s not approximately true, you need to adjust the formula further.
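As a sketch (in Python), the calculation and the weighted-sum extension can be written as follows. The function and variable names are mine and the example figures are invented; the placement of (1 − Rf) against maintenance costs follows the earlier note about dropping that term when the four risks are combined.

```python
# Sketch of the BCR calculation from the variables defined above.
# All example figures are invented for illustration.

def weighted_value(values, weights):
    """Combine multiple benefit types into a single V(P') score."""
    return sum(z * v for z, v in zip(weights, values))

def bcr(V, W, A, Rt, Rs, Rf, Rm, L, r, C, K, E, M):
    """Benefit: Cost Ratio from the variables defined above.

    Benefits: asset value V(P') scaled by the proportional difference W,
    the adoption level A and the four risks, then discounted over the
    lag L at annual rate r.
    Costs: cash (C) + in-kind (K) + discounted compliance (E) costs,
    plus discounted maintenance (M) weighted by (1 - Rf), since
    maintenance is only incurred if the project does not fail financially.
    """
    benefits = (V * W * A * (1 - Rt) * (1 - Rs) * (1 - Rf) * (1 - Rm)
                / (1 + r) ** L)
    costs = C + K + E + (1 - Rf) * M
    return benefits / costs

# Invented project with three benefit types, all valued in dollars.
V = weighted_value([6e6, 3e6, 1e6], weights=[1.0, 1.0, 1.0])
ratio = bcr(V, W=0.4, A=1.0, Rt=0.1, Rs=0.05, Rf=0.05, Rm=0.1,
            L=5, r=0.05, C=1e6, K=2e5, E=1e5, M=2e5)
```

When V is measured in dollars, a ratio above 1 indicates that expected benefits exceed total costs; this invented project comes out at roughly 1.5.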

One reaction I get is that it all looks too complicated and surely isn’t worth the bother. My response is to ask, if you could double your budget for projects by putting a bit more effort into your project ranking process, would you do so? Of course you would. Doubling the environmental benefits generated from your environmental investments is rather like doubling your budget. If your current ranking system is of the usual questionable quality, doubling the benefits (or more) is readily achievable using the approaches advocated here.

That’s all! Thanks for reading and best of luck with your project ranking endeavours.

Further reading

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming). Journal web page ♦ Pre-publication version at IDEAS

Here is another version of this summary, as published in the Decision Point magazine, in case it is helpful to have a pdf.

253 – Ranking environmental projects 19: Mistakes to avoid

Episode 19 in this series on principles to follow when ranking environmental projects. It describes a number of mistakes that I’ve seen in real-world project ranking systems. Some have been mentioned in previous posts, but most are new.  

Prior posts in this series have mostly focused on things that should be done when ranking environmental projects. Now and then I’ve commented on things that should not be done, but this time that is the main focus. The mistakes I describe here are all things that I’ve seen done in real systems for ranking projects.

Weighting and adding. If you’ve read the whole series, you are probably sick of me saying not to weight and add variables, except in particular circumstances (PD243). I’m saying it one more time because it is such a common mistake, and one with such terrible consequences. I’ve had someone argue that despite all the logic, weighting and adding should be done for all variables because it gives decision makers scope to influence the results to reflect their preferences and values, thereby giving them ownership of the results. Absolute nonsense. That’s like giving people the flexibility to make up their own version of probability theory. There is no benefit in them owning the results if the results are really bad. There are much better ways to give influence to decision makers, such as by allowing them to adjust the value scores (V) to reflect their judgements about what is important. Doing it by weighting and adding together the wrong variables introduces huge errors into the results and greatly reduces the environmental values generated by a program.
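The damage is easy to demonstrate with a toy comparison (my own numbers, not from any real system): a project whose actions would achieve nothing can still outrank a solid project under a weighted-additive score, while multiplication correctly sends it to the bottom.

```python
# Toy contrast between multiplying and weighting-and-adding.
# All numbers are invented for illustration.

def multiplicative(value, effectiveness, adoption):
    return value * effectiveness * adoption

def weighted_additive(value, effectiveness, adoption, w=(0.4, 0.3, 0.3)):
    return w[0] * value + w[1] * effectiveness + w[2] * adoption

X = dict(value=0.6, effectiveness=0.7, adoption=0.8)  # solid all round
Y = dict(value=1.0, effectiveness=1.0, adoption=0.0)  # impressive but futile

print("multiplicative ranks X first:", multiplicative(**X) > multiplicative(**Y))
print("weighted-additive ranks Y first:", weighted_additive(**Y) > weighted_additive(**X))
```

Project Y achieves literally nothing (zero adoption), yet the additive score rewards its high values and effectiveness enough to put it on top. No choice of positive weights fixes this; the structure of the formula is the problem.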

Including “value for money” as a criterion separate from the variables that determine value for money. This seems to be quite common too. A number of times I’ve seen systems that ask questions about relevant variables (like environmental threats, adoption, values, risk, costs) but then have a separate question about value for money, rather than calculating value for money based on the other information that has already been collected. This is unfortunate. A subjective, off-the-top-of-the-head judgement about value for money is bound to be much less accurate than calculating it from the relevant variables. This behaviour seems to reveal a lack of insight into what value for money really means. If the aim is to maximise the value of environmental outcomes achieved (as it should be), then value for money is the ultimate criterion into which all the other variables feed. It’s not just one of the criteria; it’s the overarching criterion that pulls everything else together to maximise environmental outcomes.

Here’s a recent experience to illustrate what can go wrong. I was asked to advise an organisation about their equation for ranking projects. They had specified the following as separate criteria for selecting projects: value for money, logical consistency of the project, and likelihood of successful delivery of the project. But, of course, the logical consistency of the project, and the likelihood of successful delivery are both things that would influence the expected value for money from the project. They are not distinct from value for money, they are part of it. I would consider them when specifying the level of risk to include in the equation. Specifically, they determine the level of management risk, Rm (PD241).

Unfortunately, somebody in the organisation who had power but no understanding insisted that logical consistency and successful delivery be treated as criteria at the same level as value for money, and worse still that they all be weighted and added! My explanations and protests were dismissed. As a result, they lost control of their ranking formula. Rankings for small projects were determined almost entirely by the scores given for logical consistency and successful delivery, and barely at all by the Benefit: Cost Ratio (BCR), and the rankings for large projects were the opposite – completely unaffected by logical consistency and successful delivery. (If they’d been multiplied instead of added, it wouldn’t have been so bad.) The ultimate result was poor project rankings, leading to poor environmental outcomes.

Messing up the with-versus-without comparison. Back in PD237 I talked about how the benefits of a project should be measured as the difference in outcomes between a world where the project is implemented and a world where it isn’t ([V(P1) – V(P0)] or W). When you say it like that, it sounds like common sense, so it’s surprising how many systems for ranking projects don’t get this right. Some don’t include any sort of measure of the difference that a project would make. They may use measures representing the importance of the environmental assets, the seriousness of the environmental threats, or the likely level of cooperation from the community, but nothing about the difference in environmental values resulting from the project.

Some systems include a difference, but the wrong difference. I’ve seen a system where the project benefit was estimated as the difference between current asset condition and the predicted asset condition if nothing was done (current versus without). And another which used the difference between current asset condition and predicted asset condition with the project (current versus with). Both wrong.

Finally, I’ve seen a system which did include the correct with-versus-without difference, but still managed to mess it up by also including a couple of inappropriate variables: current asset condition, and the current-versus-without difference. In this situation, more information is not better – it will make the rankings worse.
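With invented numbers, the three differences look like this; only the first measures what the project would actually change.

```python
# Invented asset values to contrast the correct and incorrect differences.
V_current = 60  # asset value now
V_without = 40  # predicted value if the project is NOT funded
V_with = 80     # predicted value if the project IS funded

with_vs_without = V_with - V_without        # correct project benefit: 40
current_vs_without = V_current - V_without  # wrong: only the avoided decline (20)
current_vs_with = V_with - V_current        # wrong: only the improvement (20)
```

Each incorrect measure captures only part of the 40-unit difference the project would make, so either can badly distort rankings when projects differ in how much of their benefit comes from avoiding decline versus creating improvement.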

Omitting key benefits variables. Because the benefits part of the equation is multiplicative, if you miss out one or more of its variables, the inaccuracies that are introduced are likely to be large. If you ignore, say, adoption, and projects vary widely in their levels of adoption, of course it’s going to mean that you make poor decisions.

Ignoring some or all of the costs. Almost all systems ignore maintenance costs. Most ignore compliance costs. Some ignore all costs. Some include costs but don’t divide by them. All mistakes.

Failing to discount future benefits and costs. Another very common mistake – a variation on the theme of ignoring costs.

Measuring activity instead of outcomes. If asked, pretty much everybody involved in ranking environmental projects would say that they want the resources they allocate to achieve the best environmental outcomes. So it’s frustrating to see how often projects are evaluated and ranked on the basis of activity rather than outcomes. For example, benefits are sometimes measured on the basis of the number of participants in a project. This ignores critical factors like the asset values, the effectiveness of the on-ground works, and the project risk. Sometimes this approach arises from a judgement that participation has benefits other than the direct achievement of outcomes. No doubt, this is true to some extent. In particular, participation by community members in a current project can build “social capital” that reduces the cost of achieving environmental outcomes in subsequent projects. In PD252 I recorded my judgement that measuring that particular benefit is probably not worth the trouble in most cases (at least for the purpose of ranking projects). The reasons are that it’s a somewhat complex thing to measure, and that those indirect benefits would usually not be large enough or different enough between projects to affect project rankings much. I’m making a judgement here, of course, but I think it is irrefutable that considering only activity/participation and failing to estimate direct benefits due to improved environmental outcomes is likely to compromise project rankings very seriously. But that does sometimes happen.

Negative scores. This is a really strange one that I don’t expect to see again, but I mention it because it was a catalyst for writing this series. I was once involved in a project ranking process where the organisation was scoring things using an ad hoc points system. Most variables were being scored on a five-point scale: 1 for the worst response through to 5 for the best. The designers of the process decided that they’d penalise projects that were rated “high” or “very high” for risk by extending the range of scores downwards: −5 (for very high risk) to +5 (for very low risk). They were using the dreaded weighted additive formula and, naturally enough, the weighting assigned to risk was relatively high, reflecting their view of its importance. This was in addition to risk having the widest range of scores. They didn’t realise that combining these approaches would greatly amplify the influence of risk, with the result that project rankings depended hugely on risk and not much on anything else. At the meeting, someone from the organisation commented that risk was dominating the ranking, but they couldn’t understand why. Others agreed. I explained what was going on and advised them that their system would have been more transparent and easier to control if they had left the range of scores the same for each variable and just varied the relative weights.

That experience highlighted to me how very little some people who design ranking systems understand about what they are doing. This series is an attempt to provide an accessible and understandable resource so that if people want to do a good job of the ranking process, they can. In the next post I’ll provide a summary of the whole series.

Further reading

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming). Journal web page ♦ Pre-publication version at IDEAS

252 – Ranking environmental projects 18: Simplifications

Episode 18 in this series on principles to follow when ranking environmental projects. It is about simplifications: the necessity of them in the ranking formula, the need for even greater simplification in some cases, and a couple of simplifications I’ve been making implicitly but haven’t previously mentioned. 

Throughout this series, I’ve struck a balance between theoretical accuracy and simplifications to make the process more practical and less costly. Clearly, this balance involves judgement. Others might judge that more or fewer simplifications are appropriate, or prefer different simplifications than the ones I’ve recommended. One thing they would have to agree on, though, is that simplifications are essential to make the system workable. The ones I’ve recommended are carefully chosen, on the basis that they are unlikely to have serious impacts on the total level of environmental benefits generated by a portfolio of projects. In some cases, the careful choosing I’ve done is based not just on subjective judgement, but on numerical analysis.

Even with the simplifications I’ve suggested, the process is still rather information hungry. If dealing with a large number of potential projects, collecting all the information for all of the projects may be more costly than is warranted, especially if the level of funding available is small relative to the total cost of all potential projects. For example, I’ve worked with environmental bodies which had upwards of 500 potential projects to rank, but were likely to get funding for less than 5 per cent of them.

In this type of situation, it is justifiable to use an even more highly simplified approach initially to filter down the full list of 500 projects to a manageable number for more detailed (but still simplified) assessment. An approach I’ve found effective is to select a few of the most important variables (e.g. the importance or significance of the environmental assets affected; the likely technical effectiveness of management actions; the likely degree of cooperation with the project by those people or businesses whose behaviour would need to change). Each project is scored for each of these key variables on a simple three- or four-level scale (low, medium, high, or very high). Then one looks for projects with three scores of high or better. If that doesn’t provide a sufficient number of potential projects, loosen the criterion a bit: look for projects with two scores of high or better and one medium. Loosen or tighten as needed to get a workable number of projects to assess further. Projects that meet the criterion you end up settling on go through more detailed assessment using the BCR equation and the rest of the projects are put aside.
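A sketch of that filtering step, with the scale encoding, pass rule and project scores all being my own assumptions for illustration:

```python
# Initial coarse filter: score a few key variables per project, shortlist
# projects whose top scores clear a threshold, loosening if too few pass.
SCALE = {"low": 0, "medium": 1, "high": 2, "very high": 3}

def passes_filter(scores, n_high=3, n_medium=0):
    """True if the project's best n_high scores are 'high' or better and
    the next n_medium scores are 'medium' or better."""
    levels = sorted((SCALE[s] for s in scores), reverse=True)
    if len(levels) < n_high + n_medium:
        return False
    return (all(v >= SCALE["high"] for v in levels[:n_high])
            and all(v >= SCALE["medium"] for v in levels[n_high:n_high + n_medium]))

projects = {  # invented scores for asset importance, effectiveness, cooperation
    "A": ["high", "very high", "high"],
    "B": ["high", "high", "medium"],
    "C": ["low", "medium", "high"],
}

# Strict rule first: three scores of high or better.
shortlist = [p for p, s in projects.items() if passes_filter(s)]
# Too few? Loosen to two highs plus a medium.
if len(shortlist) < 2:
    shortlist = [p for p, s in projects.items()
                 if passes_filter(s, n_high=2, n_medium=1)]
```

Only the shortlist then goes through the full BCR assessment; the rest are put aside.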

Clearly, with such a simplified process, there is a chance that good projects will be rejected or poor projects will be let through. As long as enough good projects get through the initial filter, missing some good ones is not likely to be a big problem. And as long as the projects that pass through the filter are subjected to the more detailed assessment, letting poor projects through is not a problem at all (apart from wasting some time) because they will be rejected following the detailed analysis.

Now let’s come back to the simplifications included in the detailed BCR calculation. Most of them have been spelled out in previous posts. Key simplifications that I judge to be reasonable in most cases include:

  • Assuming that environmental benefits are linearly related to the proportion of people who adopt the desired new practices or behaviours;
  • Representing project risks as binary variables: success or complete failure;
  • Having only one time lag for all benefits from the project;
  • Approximating the private benefits and voluntary private costs as zero; and
  • Treating the project costs, maintenance costs and compliance costs as if there were only one combined constraint on their availability.

There are also a few other simplifications that I haven’t mentioned so far, but which are implicit in the equations I’ve presented in earlier posts. I’ve had the first two of these pointed out by economists with eyes for theoretical detail, and the third by a colleague with a particular interest in this issue.

Firstly, I’ve been assuming that the value of an environmental asset does not depend on the conditions of other related assets. In reality, the benefits of project A could depend on whether project B is funded, in which case there is no definitive ranking of individual projects. In practice, the error resulting from my simplifying assumption is likely to be small enough to ignore. Pretty much everybody who ranks environmental projects makes this assumption, and ignores any error. But if the issue is judged to be important enough to be worth accounting for, you could define a project that combines the activities of projects A and B into one project and compare it with project A and project B individually.

Secondly, if one assumes that projects are defined at a particular scale and cannot be scaled up or down, then ranking using the BCR may not be accurate because it doesn’t account for the risk of leaving some of the funds unspent. [This is known as the “knapsack problem”.] That’s true, but unless funding is sufficient for only a small number of projects, the loss from ranking using the BCR is likely to be very small. For example, Hajkowicz et al. (2007) estimated losses of between 0.3% and 3% in a particular program. And if you abandon the normally unrealistic assumption that the scale of each project is fixed, then the losses disappear almost entirely. When you factor in the transaction costs of building, solving and explaining a mathematical programming model to solve a knapsack problem properly, you would always rank by BCR.
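A deliberately constructed toy (with invented numbers) shows the effect: when project scales are fixed, ranking by BCR can strand part of the budget, while an exact knapsack solution squeezes out a little more benefit.

```python
from itertools import combinations

# (name, benefit, cost); numbers invented to force a gap between methods.
projects = [("A", 60, 30), ("B", 45, 25), ("C", 40, 25), ("D", 18, 10)]
budget = 50

# Rank by BCR and fund down the list while the budget allows.
by_bcr = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)
funded, spent, bcr_benefit = [], 0, 0
for name, b, c in by_bcr:
    if spent + c <= budget:
        funded.append(name)
        spent += c
        bcr_benefit += b

# Exact knapsack by brute force (fine for a handful of projects).
best_benefit = max(sum(p[1] for p in combo)
                   for k in range(len(projects) + 1)
                   for combo in combinations(projects, k)
                   if sum(p[2] for p in combo) <= budget)
```

Here BCR ranking funds A and D for a benefit of 78 with 10 units of budget unspent, while the exact solution (B plus C) yields 85. This tiny four-project example exaggerates the gap; as the Hajkowicz et al. results suggest, it tends to be small in realistic programs with many projects, and it shrinks further once project scale is flexible.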

Thirdly, the equations I’ve presented only measure benefits arising directly from the project. Graham Marshall pointed out that participation in a current project might also generate benefits for future projects by building mutual trust and networks amongst the participants (i.e., “social capital”). He even experimented with simple ways to estimate this benefit so that it could be added to the equation. Unfortunately, the feedback from participants in Graham’s experiments was that accounting for this benefit added significantly to the complexity of the process. Furthermore, my judgement is that, while these are real benefits, they are probably not usually large enough or different enough between projects to make a notable difference to project rankings. For that combination of reasons, I haven’t included them.

Further reading

Hajkowicz, S., Higgins, A., Williams, K., Faith, D.P. and Burton, M. (2007). Optimisation and the selection of conservation contracts, Australian Journal of Agricultural and Resource Economics 51(1), 39-56. Journal web page ♦ IDEAS page

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming). Journal web page ♦ Pre-publication version at IDEAS

251 – Ranking environmental projects 17: Uncertainty

Episode 17 in this series on principles to follow when ranking environmental projects. It is about uncertainty, how to account for it, and what to do about it. 

Uncertainty and knowledge gaps are unavoidable realities when evaluating and ranking environmental projects. The available information is almost always inadequate for confident decision making. Key information gaps often include: the cause-and-effect relationship between management actions and environmental outcomes; the likely behavioural responses of people to the project; and the environmental values resulting from the project – what is important or valuable about the environmental outcomes and how important or valuable are they?

It has been argued to me that uncertainty about the data and objectives is generally so high that it is not worth worrying too much about the procedure used to prioritise projects. Any procedure will do. If that was really true, no analysis could help with decision making – we might as well just draw projects out of a hat.

In fact, while it’s true that uncertainty is usually high, it’s not true that the ranking procedure doesn’t matter, particularly when you consider the outcomes across a portfolio of projects. Even given uncertain data, the overall environmental benefits of a program can be improved substantially by a better decision process. Indeed, environmental benefits appear to be more sensitive to the decision process than to the uncertainty. For example, I have found that there is almost no benefit in reducing data uncertainty if the improved data are used in a poor decision process (Pannell 2009). On the other hand, even if the data are uncertain, there are worthwhile benefits to be had from improving the decision process.

This is certainly not to say that uncertainty should be ignored. Once the decision process is fixed up, uncertainty can make an important difference to the delivery of environmental benefits.

There are economic techniques to give negative weight to uncertainty when ranking projects. I’ve used them and I think they are great for research purposes. However, I don’t recommend them for practical project-ranking systems. They aren’t simple to do properly, so they add cost and potentially confusion.

Instead of representing uncertainty explicitly in the ranking equation, I suggest a simpler and more intuitive approach: rating the level of uncertainty for each project; and considering those ratings subjectively when ranking projects (along with information about the Benefit: Cost Ratio, and other relevant considerations).

Apart from its effect on project rankings, another aspect of uncertainty is the question of what, if anything, the organisation should do to reduce it. In my view, it is good for project managers to be explicit about the uncertainty they face, and what they plan to do about it (even if the plan is to do nothing). Simple and practical steps could be to: record significant knowledge gaps; identify the knowledge gaps that matter most through sensitivity analysis (Pannell, 1997); and have an explicit strategy for responding to key knowledge gaps as part of the project, potentially including new research or analysis.
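Those steps can be sketched as a simple one-at-a-time sensitivity analysis. The benefit function below is a stripped-down stand-in for the ranking formula, and all values and ranges are invented.

```python
# Vary each uncertain input across a plausible range, holding the others
# at their best-guess values, and see which moves the estimate most.
def benefit(V, W, A, R, L, r=0.05):
    """Stripped-down stand-in for the benefit side of the ranking formula."""
    return V * W * A * (1 - R) / (1 + r) ** L

base = {"V": 1e6, "W": 0.4, "A": 0.7, "R": 0.2, "L": 5}       # best guesses
ranges = {"V": (5e5, 2e6), "W": (0.2, 0.6), "A": (0.4, 1.0),  # plausible bounds
          "R": (0.0, 0.5), "L": (2, 10)}

swings = {}
for name, (lo, hi) in ranges.items():
    vals = []
    for x in (lo, hi):
        args = dict(base)
        args[name] = x
        vals.append(benefit(**args))
    swings[name] = max(vals) - min(vals)

# The variables with the biggest swings are the knowledge gaps that matter most.
ranked_gaps = sorted(swings, key=swings.get, reverse=True)
```

In this invented case the asset value V produces the widest swing in the benefit estimate, so it would be the first knowledge gap worth addressing.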

In practice, there is a tendency for environmental decision makers to ignore uncertainty when ranking projects, and to proceed on the basis of best-guess information, even if the best is really poor. In support of that approach, it is often argued that we should not allow lack of knowledge to hold up environmental action, because delays may result in damage that is costly or impossible to reverse. That’s reasonable up to a point, but in my view we are often too cavalier about proceeding with projects when we really have little knowledge of whether they are worthwhile. It may be at the expense of other projects in which we have much more confidence, even though they currently appear to have lower BCRs. It’s not just a question of proceeding with a project or not proceeding – it’s a question of which project to proceed with, considering the uncertainty, environmental benefits and costs for each project. When you realise this, the argument based on not letting uncertainty stand in the way of action is rather diminished.

In some cases, a sensible strategy is to start with a detailed feasibility study or a pilot study, with the intention of learning information that will help with subsequent decision making about whether a full-scale project is worthwhile, and how a full-scale project can best be designed and implemented. A related idea is “active adaptive management”, which involves learning from experience in a directed and systematic way. Implementation efforts get under way, but they are done in a way which is focused on learning.

Particularly for larger projects, my strong view is that one of these approaches should be used. I believe that they have great potential to increase the environmental benefits that are generated. They imply that the initial ranking process should not produce decisions that are set in stone. Decisions may need to be altered once more information is collected. We should be prepared to abandon projects if it turns out that they are not as good as we initially thought, rather than throwing good money after bad.

As far as I’m aware, the sorts of strategies I’m suggesting here are almost never used in real-world environmental programs. Managers are never explicit about the uncertainties they face, there usually isn’t a plan for addressing uncertainty, projects are funded despite profound ignorance about crucial aspects of them, proper feasibility assessments are never done, active adaptive management is almost never used, and ineffective projects that have been started are almost never curtailed so that resources can be redirected to better ones. In these respects, the environment sector is dramatically different from the business world, where people seem to be much more concerned about whether their investments will actually achieve the desired outcomes. Perhaps the difference is partly because businesses are spending their own money and stand to be the direct beneficiaries if the investment is successful. Perhaps it’s partly about the nature of public policy and politics. Whatever the reason is, I think there is an enormous missed opportunity here to improve environmental outcomes, even without any increase in funding.

Further reading

Pannell, D.J. (1997). Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16(2), 139-152. On-line version ♦ IDEAS page

Pannell, D.J. (2009). The cost of errors in prioritising projects, INFFER Working Paper 0903, University of Western Australia. Full paper (350K)

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming). Journal web page ♦ Pre-publication version at IDEAS

250 – Ranking environmental projects 16: Other cost issues

Episode 16 in this series on principles to follow when ranking environmental projects. It covers a couple of issues related to costs that didn’t fit in the previous posts. 

Sometimes people criticise the use of the Benefit: Cost Ratio (BCR) to rank projects, on the basis that it can be manipulated to some extent by moving costs between the denominator and the numerator (e.g. Office of Best Practice Regulation, 2009; Jenkins et al., 2011). For example, suppose you have already calculated an initial BCR for a project, but now you find that there is an additional cost that should be included. You could do one of two things with that cost: you could subtract it from the numerator, resulting in smaller benefits in the BCR, or you could add it to the denominator, resulting in larger costs in the BCR. If benefits exceed costs even after accounting for the new cost, then subtracting the new cost from the numerator would result in a larger BCR than adding it to the denominator. 
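To see the asymmetry concretely, here is a minimal sketch with hypothetical numbers (the benefit, cost, and new-cost figures are invented purely for illustration):

```python
# Hypothetical project: benefits B = 100, costs C = 50,
# plus a newly identified cost of 10.
B, C, new_cost = 100.0, 50.0, 10.0

# Option 1: subtract the new cost from the benefits (numerator)
bcr_numerator = (B - new_cost) / C

# Option 2: add the new cost to the project costs (denominator)
bcr_denominator = B / (C + new_cost)

print(round(bcr_numerator, 2))    # 1.8
print(round(bcr_denominator, 2))  # 1.67
```

Because benefits exceed costs here even after the new cost is included, the two treatments give different BCRs, which is the manipulation that the critics point to.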

This criticism of using the BCR for ranking projects reveals a lack of understanding of the logic of the formula. To rank projects correctly, the costs that go in the denominator are the costs that would be drawn from a limited pool of funds. Any costs that are not drawn from a limited pool should, in principle, be subtracted from the numerator, rather than being added to the denominator. It is not correct to move costs arbitrarily between the two. There is a clear logic about which costs go where. It’s surprising how often this misconception is repeated, even by economists.

The second issue relates to the sharing of costs between different benefits. In PD243 I talked about how to assess projects that generate multiple benefits. A related issue I once encountered involved an organisation that wanted to rank potential investments in a number of threatened species individually, even though it knew that the actions needed to protect one species would help to protect others as well. I can see why they would want to do this – it would be tidy to be able to create a ranked list of all the species.

The approach I suggested to them was to define S as the share of total costs attributable to the current species, based on its share of benefits. You would add up all the benefits for the different species resulting from the actions taken to protect this species, and then ask what share of those benefits belongs to this species. That share, S, is then multiplied by total costs in the BCR for this species.
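The calculation can be sketched as follows, using hypothetical benefit and cost figures (the species names and numbers are invented for illustration):

```python
# Hypothetical example: actions aimed at species A also benefit B and C.
benefits = {"species_A": 60.0, "species_B": 30.0, "species_C": 10.0}
total_cost = 40.0  # total cost of the shared actions

# S: the share of total benefits attributable to species A
S = benefits["species_A"] / sum(benefits.values())

# BCR for species A, charging it only its share of the total costs
bcr_A = benefits["species_A"] / (S * total_cost)

print(S)      # 0.6
print(bcr_A)  # 2.5
```

As the example suggests, the cost share is driven entirely by the benefit shares, which is why this remains a somewhat crude approximation rather than a substitute for ranking whole projects.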

Generally, I wouldn’t recommend this approach unless it’s important to create a ranked list of each individual environmental asset. For that purpose, it is probably the best that can be done, but it’s still a somewhat crude approximation. It’s better to rank projects rather than assets (see PD235) and if a project generates multiple benefits, so be it – use one of the approaches in PD243.

Further reading

Jenkins, G.P., Kuo, C.-Y. and Harberger, A.C. (2011). Discounting and alternative investment criteria, Chapter 4 in Cost-Benefit Analysis For Investment Decisions. IDEAS page for this paper.

Office of Best Practice Regulation (2009). Best Practice Regulation Guidance Note, Decision rules in regulatory cost-benefit analysis, Australian Government, Department of Finance and Deregulation.