Monthly Archives: September 2013

255 – Science communication: The Matrix

Here is a wonderful example of how to communicate a fairly dry scientific concept in a way that is clear, engaging and entertaining.

It’s a video created by Don Driscoll, an ecologist from the Australian National University (and a co-member with me and others of the Environmental Decisions Group). The topic is “the matrix” – the areas that surround patches of remnant native vegetation. You mightn’t think this sounds like a promising way to spend four minutes of your time, but check it out. I think you’ll enjoy it.

Don’s done a wonderful job. He created it on his kitchen table at home over the course of a couple of weeks, to the consternation of his family! Because it’s an animation, it required a lot of painstaking work to put together. Just as impressive, though, is the creativity that went into the story line and the script. If the topic were a bit sexier, I reckon it would go viral. Look out, Psy.

Watch it on YouTube here.

254 – Ranking environmental projects 20: Summary

Episode 20, the last in this series on principles to follow when ranking environmental projects. It provides a brief summary of the whole series. 

You can obtain a PDF file with all 20 episodes of this series integrated into one document here.

Around the world, thousands of different quantitative systems have been used to rank environmental projects for funding. It seems that every environmental body creates new systems or re-uses existing ones, often several in a year. Judging from the examples I have examined, most of the systems in use are very poor. The performance of many of them is not much better than choosing projects at random. If only people would be more logical and thorough in their approach to ranking environmental projects! The potential to reduce wastage and improve environmental outcomes is enormous. That’s why I wrote this series.

There are many ways that you can go wrong when putting together a formula to rank projects, and unfortunately the quality of the results is quite sensitive to some of the common errors. Common important mistakes include: weighting and adding variables that should be multiplied; messing up the comparison of outcomes with versus without the project; omitting key benefits variables; ignoring costs; and measuring activity instead of environmental outcomes.

Fortunately, though, it’s not hard to do a pretty good job of project ranking. A bit of theory, some simple logic and a dose of common sense and judgment lead to a set of specific guidelines that are presented in this series. The essential points are as follows.

  1. The core criterion for ranking projects is value for money: a measure of project benefits divided by project-related costs. This is the criterion into which all the variables feed. It’s how you pull everything together to maximise environmental outcomes.
  2. You should rank specific projects, rather than environmental assets. You cannot specify numbers for some of the key variables in the ranking formula without having in mind the particular interventions that will be used.
  3. There are always many different ways of managing an environmental asset, and they can vary greatly in value for money. Therefore, it can be worth evaluating more than one project per asset, especially for large, important environmental assets.
  4. Benefits of a project should be estimated as a difference: with versus without the project, not before versus after the project.
  5. Weak thinking about the “without” scenario for environmental projects is a common failing, sometimes leading to exaggerated estimates of the benefits.
  6. There are two parts to a project’s potential benefits: a change in the physical condition of the environment, and a resulting change in the values generated by the environment (in other words, the value of the change in environmental services).
  7. Those potential benefits usually need to be scaled down to reflect: (a) less than 100% cooperation or compliance by private citizens or other organisations; (b) a variety of project risks; and (c) the time lag between implementing the project and benefits being generated, combined with the cumulative cost of interest on up-front costs (i.e. “discounting” to bring future benefits back to the present).
  8. If in doubt, multiply. That’s a way of saying that benefits tend to be proportional to the variables we’ve talked about (or to one minus risk), and the way to reflect this in the formula is to multiply by the variables, rather than weighting and adding them. Don’t take this too literally, however. You can mess up by multiplying inappropriately too. 
  9. Weighting and adding is relevant only to the values part of the benefits equation (when there are multiple benefits from a project), not to any other part.
  10. Don’t include private benefits as a benefit or voluntary private costs as a cost, but do include involuntary private costs as a cost.
  11. Other costs to include are project cash costs, project in-kind costs, and maintenance costs (after the project is finished). Costs get added up, rather than multiplied.
  12. Uncertainty about project benefits is usually high and should not be ignored. The degree of uncertainty about each project should be considered, at least qualitatively, when projects are being ranked. Also, decisions about projects should not be set in stone, but modified over time as experience and better information are accumulated. Strategies to reduce uncertainty over time should be built into projects (e.g. feasibility assessments, active adaptive management).
  13. Where the cost of all projects that are in contention greatly exceeds the total budget, it is wise and cost-effective to run a simple initial filter over projects to select a smaller number for more detailed assessment. It’s OK to eliminate some projects from contention based on a simple analysis provided that projects are not accepted for funding without being subjected to a more detailed analysis.

There are a number of simplifications in the above advice. Simplifications are essential to make the system workable, but care is needed when selecting which simplifications to use.

In summary, the content and structure of the ranking formula really matter. A lot. A logical and practical formula to use is:

BCR = [V(P’) × W × A × (1 – Rt) × (1 – Rs) × (1 – Rf) × (1 – Rm) / (1 + r)^L] / [C + K + E + M × (1 – Rf)]

where

BCR is the Benefit: Cost Ratio,

V(P’) is the value of the environmental asset at benchmark condition P’,

W is the difference in values between P1 (physical condition with the project) and P0 (physical condition without the project) as a proportion of V(P’),

A is the level of adoption/compliance as a proportion of the level needed to achieve the project’s goal,

Rt, Rs, Rf and Rm are the probabilities of the project failing due to technical, socio-political, financial and management risks, respectively,

L is the lag time in years until most benefits of the project are generated,

r is the annual discount rate,

C is the total project cash costs,

K is the total project in-kind costs,

E is total discounted compliance costs, and

M is total discounted maintenance costs.
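To make the formula concrete, here is a minimal sketch of it in Python. The function name and the sample figures are invented for illustration; the (1 – Rf) factor on maintenance costs reflects the assumption, discussed below, that maintenance is only paid if the project does not fail financially.

```python
# A minimal sketch of the BCR formula above. The function name and the
# sample project figures are invented for illustration.

def benefit_cost_ratio(V, W, A, Rt, Rs, Rf, Rm, L, r, C, K, E, M):
    """Benefit: Cost Ratio for a single project.

    V:  value of the asset at benchmark condition P' (dollars or points)
    W:  with-versus-without difference in values, as a proportion of V
    A:  adoption/compliance as a proportion of the level needed
    Rt, Rs, Rf, Rm: technical, socio-political, financial and management
        risks (each a probability of project failure)
    L:  lag in years until most benefits are generated
    r:  annual discount rate
    C, K: project cash and in-kind costs
    E, M: total discounted compliance and maintenance costs
    """
    benefits = (V * W * A
                * (1 - Rt) * (1 - Rs) * (1 - Rf) * (1 - Rm)
                / (1 + r) ** L)
    # Maintenance is assumed to be paid only if the project survives
    # its financial risk, hence the (1 - Rf) scaling.
    costs = C + K + E + M * (1 - Rf)
    return benefits / costs

# Hypothetical project: a $10m asset, a 20% improvement, 80% adoption.
bcr = benefit_cost_ratio(V=10e6, W=0.2, A=0.8,
                         Rt=0.1, Rs=0.05, Rf=0.1, Rm=0.1,
                         L=5, r=0.05,
                         C=300_000, K=100_000, E=50_000, M=150_000)
print(f"BCR = {bcr:.2f}")  # well above 1, so benefits exceed costs
```

Projects are then ranked by this number, highest first.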

V can be measured in dollars, or in some other unit that makes sense for the types of projects being ranked. The advantage of using dollars is that it allows you to (a) compare value for money for projects that address completely different types of environmental issues (e.g. river water quality versus threatened species) and (b) assess whether a project’s overall expected benefits exceed its total costs.

For some projects, it works better to calculate potential benefits in a different way: [V(P1) – V(P0)] rather than V(P’) × W. They are equivalent but involve different thought processes.
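To see the equivalence with invented numbers: if V(P’) = 100, V(P1) = 80 and V(P0) = 60, then W = (80 – 60)/100 = 0.2, so V(P’) × W = 20, the same as V(P1) – V(P0) = 80 – 60 = 20.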

A simplification that might appeal is to combine all four risks into one overall risk, R. If you do that, also drop ‘× (1 – Rf)’ from the denominator (because you no longer have a separate number for Rf). This simplification makes the formula look a bit less daunting, but it probably doesn’t really save you any work, because you should still consider all four types of risk when coming up with values for R.

This formula works where there is a single type of benefit from a project, or where the V scores for multiple benefits have already been converted into a common currency, such as dollars, and added up. If a project has multiple benefits and you want to account for them individually, replace V(P’) by the weighted sum of the values for each benefit type. For example, if there are three types of benefits, use [z1 × V1(P’) + z2 × V2(P’) + z3 × V3(P’)], where the z’s are the weights. I’m assuming here that the other benefit variables (W, A, R and L) are the same for each benefit type. If that’s not approximately true, you need to adjust the formula further.
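Continuing the hypothetical Python sketch above, the weighted sum simply drops in wherever V(P’) appeared:

```python
# Hypothetical illustration: combine three benefit types into one V score.
# The weights and values are invented; the result feeds into the BCR
# formula in place of V(P'), assuming W, A, the risks and L are shared.

def weighted_asset_value(values, weights):
    """Weighted sum of per-benefit V(P') scores: z1*V1 + z2*V2 + ..."""
    return sum(z * v for z, v in zip(weights, values))

V_combined = weighted_asset_value(values=[10e6, 4e6, 1e6],
                                  weights=[1.0, 0.5, 0.5])
print(V_combined)  # 12,500,000 in this invented case
```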

One reaction I get is that it all looks too complicated and surely isn’t worth the bother. My response is to ask, if you could double your budget for projects by putting a bit more effort into your project ranking process, would you do so? Of course you would. Doubling the environmental benefits generated from your environmental investments is rather like doubling your budget. If your current ranking system is of the usual questionable quality, doubling the benefits (or more) is readily achievable using the approaches advocated here.

That’s all! Thanks for reading and best of luck with your project ranking endeavours.

Further reading

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming). Journal web page ♦ Pre-publication version at IDEAS

Here is another version of this summary, as published in the Decision Point magazine, in case it is helpful to have a pdf.

253 – Ranking environmental projects 19: Mistakes to avoid

Episode 19 in this series on principles to follow when ranking environmental projects. It describes a number of mistakes that I’ve seen in real-world project ranking systems. Some have been mentioned in previous posts, but most are new.  

Prior posts in this series have mostly focused on things that should be done when ranking environmental projects. Now and then I’ve commented on things that should not be done, but this time that is the main focus. The mistakes I describe here are all things that I’ve seen done in real systems for ranking projects.

Weighting and adding. If you’ve read the whole series, you are probably sick of me saying not to weight and add variables, except in particular circumstances (PD243). I’m saying it one more time because it is such a common mistake, and one with such terrible consequences. I’ve had someone argue that despite all the logic, weighting and adding should be done for all variables because it gives decision makers scope to influence the results to reflect their preferences and values, thereby giving them ownership of the results. Absolute nonsense. That’s like giving people the flexibility to make up their own version of probability theory. There is no benefit in them owning the results if the results are really bad. There are much better ways to give influence to decision makers, such as by allowing them to adjust the value scores (V) to reflect their judgements about what is important. Doing it by weighting and adding together the wrong variables introduces huge errors into the results and greatly reduces the environmental values generated by a program.

Including “value for money” as a criterion separate from the variables that determine value for money. This seems to be quite common too. A number of times I’ve seen systems that ask questions about relevant variables (like environmental threats, adoption, values, risk, costs) but then have a separate question about value for money, rather than calculating value for money based on the other information that has already been collected. This is unfortunate. A subjective, off-the-top-of-the-head judgement about value for money is bound to be much less accurate than calculating it from the relevant variables. This behaviour seems to reveal a lack of insight into what value for money really means. If the aim is to maximise the value of environmental outcomes achieved (as it should be), then value for money is the ultimate criterion into which all the other variables feed. It’s not just one of the criteria; it’s the overarching criterion that pulls everything else together to maximise environmental outcomes.

Here’s a recent experience to illustrate what can go wrong. I was asked to advise an organisation about their equation for ranking projects. They had specified the following as separate criteria for selecting projects: value for money, logical consistency of the project, and likelihood of successful delivery of the project. But, of course, the logical consistency of the project and the likelihood of successful delivery are both things that would influence the expected value for money from the project. They are not distinct from value for money; they are part of it. I would consider them when specifying the level of risk to include in the equation. Specifically, they determine the level of management risk, Rm (PD241).

Unfortunately, somebody in the organisation who had power but no understanding insisted that logical consistency and successful delivery be treated as criteria at the same level as value for money, and worse still that they all be weighted and added! My explanations and protests were dismissed. As a result, they lost control of their ranking formula. Rankings for small projects were determined almost entirely by the scores given for logical consistency and successful delivery, and barely at all by the Benefit: Cost Ratio (BCR), and the rankings for large projects were the opposite – completely unaffected by logical consistency and successful delivery. (If they’d been multiplied instead of added, it wouldn’t have been so bad.) The ultimate result was poor project rankings, leading to poor environmental outcomes.
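An invented example shows the additive pathology (these are not the organisation’s actual numbers): when an unbounded BCR is weighted and added to a bounded 1-5 score, whichever term has the larger spread dominates, whereas multiplying keeps both factors influential in proportion.

```python
# Invented numbers illustrating why weighting-and-adding a BCR to a
# bounded 1-5 delivery score distorts rankings. Not the real data.

projects = {
    "small A": {"bcr": 1.2, "delivery": 5},   # modest BCR, excellent delivery
    "small B": {"bcr": 1.5, "delivery": 1},   # better BCR, terrible delivery
    "large C": {"bcr": 8.0, "delivery": 1},   # huge BCR, terrible delivery
    "large D": {"bcr": 2.0, "delivery": 5},   # good BCR, excellent delivery
}

for name, p in projects.items():
    added = 0.5 * p["bcr"] + 0.5 * p["delivery"]   # weighted additive
    multiplied = p["bcr"] * p["delivery"] / 5      # multiplicative, score rescaled to 0-1
    print(f"{name}: added = {added:.2f}, multiplied = {multiplied:.2f}")

# Added: the small projects are ranked by delivery score (A beats B despite
# a lower BCR), while C's big BCR swamps its terrible delivery and it tops
# the list. Multiplied: C is properly penalised and D comes out on top.
```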

Messing up the with-versus-without comparison. Back in PD237 I talked about how the benefits of a project should be measured as the difference in outcomes between a world where the project is implemented and a world where it isn’t ([V(P1) – V(P0)] or W). When you say it like that, it sounds like common sense, so it’s surprising how many systems for ranking projects don’t get this right. Some don’t include any sort of measure of the difference that a project would make. They may use measures representing the importance of the environmental assets, the seriousness of the environmental threats, or the likely level of cooperation from the community, but nothing about the difference in environmental values resulting from the project.

Some systems include a difference, but the wrong difference. I’ve seen a system where the project benefit was estimated as the difference between current asset condition and the predicted asset condition if nothing was done (current versus without). And another which used the difference between current asset condition and predicted asset condition with the project (current versus with). Both wrong.
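To illustrate with invented numbers: suppose the asset scores 70 today, would decline to 50 without the project, and would be at 65 with it. The true benefit is 65 – 50 = 15. Current-versus-without gives 70 – 50 = 20, overstating the benefit because the project doesn’t actually hold the asset at 70, while current-versus-with gives 65 – 70 = –5, absurdly scoring the project as harmful.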

Finally, I’ve seen a system which did include the correct with-versus-without difference, but still managed to mess it up by also including a couple of inappropriate variables: current asset condition, and the current-versus-without difference. In this situation, more information is not better – it will make the rankings worse.

Omitting key benefits variables. Because the benefits part of the equation is multiplicative, if you miss out one or more of its variables, the inaccuracies that are introduced are likely to be large. If you ignore, say, adoption, and projects vary widely in their levels of adoption, of course it’s going to mean that you make poor decisions.

Ignoring some or all of the costs. Almost all systems ignore maintenance costs. Most ignore compliance costs. Some ignore all costs. Some include costs but don’t divide by them. All mistakes.

Failing to discount future benefits and costs. Another very common mistake – a variation on the theme of ignoring costs.

Measuring activity instead of outcomes. If asked, pretty much everybody involved in ranking environmental projects would say that they want the resources they allocate to achieve the best environmental outcomes. So it’s frustrating to see how often projects are evaluated and ranked on the basis of activity rather than outcomes. For example, benefits are sometimes measured on the basis of the number of participants in a project. This ignores critical factors like the asset values, the effectiveness of the on-ground works, and the project risk. Sometimes this approach arises from a judgement that participation has benefits other than the direct achievement of outcomes. No doubt, this is true to some extent. In particular, participation by community members in a current project can build “social capital” that reduces the cost of achieving environmental outcomes in subsequent projects. In PD252 I recorded my judgement that measuring that particular benefit is probably not worth the trouble in most cases (at least for the purpose of ranking projects). The reasons are that it’s a somewhat complex thing to measure, and that those indirect benefits would usually not be large enough or different enough between projects to affect project rankings much. I’m making a judgement here, of course, but I think it is irrefutable that considering only activity/participation and failing to estimate direct benefits due to improved environmental outcomes is likely to compromise project rankings very seriously. But that does sometimes happen.

Negative scores. This is a really strange one that I don’t expect to see again, but I mention it because it was a catalyst for writing this series. I was once involved in a project ranking process where the organisation was scoring things using an ad hoc points system. Most variables were being scored on a five-point scale: 1 for the worst response through to 5 for the best. The designers of the process decided that they’d penalise projects that were rated “high” or “very high” for risk by extending the range of scores downwards: −5 (for very high risk) to +5 (for very low risk). They were using the dreaded weighted additive formula and, naturally enough, the weighting assigned to risk was relatively high, reflecting their view of its importance. This was in addition to risk having the widest range of scores. They didn’t realise that combining these approaches would greatly amplify the influence of risk, with the result that project rankings depended hugely on risk and not much on anything else. At the meeting, someone from the organisation commented that risk was dominating the ranking, but they couldn’t understand why. Others agreed. I explained what was going on and advised them that their system would have been more transparent and easier to control if they had left the range of scores the same for each variable and just varied the relative weights.
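A small invented example shows how the widened score range and the high weight compound each other:

```python
# Invented scores and weights illustrating the negative-score problem:
# risk gets both a widened range (-5 to +5) and a high weight, so it
# swamps every other variable in the weighted additive total.

weights = {"asset_value": 0.2, "threat": 0.2, "adoption": 0.2, "risk": 0.4}

projects = {
    "P1": {"asset_value": 5, "threat": 5, "adoption": 5, "risk": -5},  # best on everything else, very high risk
    "P2": {"asset_value": 1, "threat": 1, "adoption": 1, "risk": 5},   # worst on everything else, very low risk
}

for name, scores in projects.items():
    total = sum(weights[k] * scores[k] for k in weights)
    print(f"{name}: {total:.1f}")

# P1: 0.2*(5+5+5) + 0.4*(-5) = 1.0
# P2: 0.2*(1+1+1) + 0.4*(5)  = 2.6
# The project that is best on every other criterion loses purely on risk.
```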

That experience highlighted to me how very little some people who design ranking systems understand about what they are doing. This series is an attempt to provide an accessible and understandable resource so that if people want to do a good job of the ranking process, they can. In the next post I’ll provide a summary of the whole series.

Further reading

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming). Journal web page ♦ Pre-publication version at IDEAS