
253 – Ranking environmental projects 19: Mistakes to avoid

This is episode 19 in the series on principles to follow when ranking environmental projects. It describes a number of mistakes that I’ve seen in real-world project-ranking systems. Some have been mentioned in previous posts, but most are new.

Prior posts in this series have mostly focused on things that should be done when ranking environmental projects. Now and then I’ve commented on things that should not be done, but this time that is the main focus: every mistake described here is one I’ve seen in a real system for ranking projects.

Weighting and adding. If you’ve read the whole series, you are probably sick of me saying not to weight and add variables, except in particular circumstances (PD243). I’m saying it one more time because it is such a common mistake, and one with such terrible consequences. I’ve had someone argue that despite all the logic, weighting and adding should be done for all variables because it gives decision makers scope to influence the results to reflect their preferences and values, thereby giving them ownership of the results. Absolute nonsense. That’s like giving people the flexibility to make up their own version of probability theory. There is no benefit in them owning the results if the results are really bad. There are much better ways to give influence to decision makers, such as by allowing them to adjust the value scores (V) to reflect their judgements about what is important. Doing it by weighting and adding together the wrong variables introduces huge errors into the results and greatly reduces the environmental values generated by a program.
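To make the problem concrete, here is a minimal sketch in Python, using made-up projects, scores and weights that are purely illustrative. A project with a highly valued asset but almost no prospect of adoption comes out on top when the variables are weighted and added, even though it would deliver almost nothing; multiplying the same variables puts it where it belongs, at the bottom.

```python
# Hypothetical scores for three projects on two benefit-related variables:
#   V = value of the environmental asset (0-100)
#   A = expected adoption of the required practices (proportion, 0-1)
projects = {
    "Project X": {"V": 95, "A": 0.05},  # valuable asset, almost no adoption
    "Project Y": {"V": 55, "A": 0.40},
    "Project Z": {"V": 35, "A": 0.55},
}

# The mistake: weight and add (the weights here are arbitrary, as they always are).
w_V, w_A = 0.5, 0.5
additive = {name: w_V * p["V"] / 100 + w_A * p["A"] for name, p in projects.items()}

# The logic this series recommends: multiply, because near-zero adoption
# means near-zero benefit, no matter how valuable the asset is.
multiplicative = {name: p["V"] * p["A"] for name, p in projects.items()}

print("Weighted-additive ranking:", sorted(additive, key=additive.get, reverse=True))
print("Multiplicative ranking:   ", sorted(multiplicative, key=multiplicative.get, reverse=True))
# Weighted-additive ranking: ['Project X', 'Project Y', 'Project Z']
# Multiplicative ranking:    ['Project Y', 'Project Z', 'Project X']
```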

Including “value for money” as a criterion separate from the variables that determine value for money. This seems to be quite common too. A number of times I’ve seen systems that ask questions about relevant variables (like environmental threats, adoption, values, risk, costs) but then have a separate question about value for money, rather than calculating value for money based on the other information that has already been collected. This is unfortunate. A subjective, off-the-top-of-the-head judgement about value for money is bound to be much less accurate than calculating it from the relevant variables. This behaviour seems to reveal a lack of insight into what value for money really means. If the aim is to maximise the value of environmental outcomes achieved (as it should be), then value for money is the ultimate criterion into which all the other variables feed. It’s not just one of the criteria; it’s the overarching criterion that pulls everything else together to maximise environmental outcomes.
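The remedy is to calculate it. Here is a hedged sketch of what that means: the notation loosely follows earlier posts in the series (W for the with-versus-without difference in value, A for adoption, R for risk, C for costs), and the formula is deliberately simplified relative to the full version developed there.

```python
def value_for_money(W, A, R, C):
    """Illustrative benefit: cost ratio assembled from variables the ranking
    process has already collected (simplified relative to the full formula).

    W: with-versus-without difference in value, V(P1) - V(P0)
    A: expected adoption of the project's works or practices (0-1)
    R: overall probability that the project fails to deliver (0-1)
    C: present value of all project costs
    """
    expected_benefit = W * A * (1 - R)
    return expected_benefit / C

# A stand-alone, subjective "value for money" criterion ignores all of this.
print(value_for_money(W=30, A=0.7, R=0.3, C=2.5))  # about 5.9 with these illustrative inputs
```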

Here’s a recent experience to illustrate what can go wrong. I was asked to advise an organisation about their equation for ranking projects. They had specified the following as separate criteria for selecting projects: value for money, the logical consistency of the project, and the likelihood of successful delivery of the project. But, of course, logical consistency and the likelihood of successful delivery both influence the expected value for money from the project. They are not distinct from value for money; they are part of it. I would consider them when specifying the level of risk to include in the equation. Specifically, they determine the level of management risk, Rm (PD241).

Unfortunately, somebody in the organisation who had power but no understanding insisted that logical consistency and successful delivery be treated as criteria at the same level as value for money and, worse still, that they all be weighted and added! My explanations and protests were dismissed. As a result, they lost control of their ranking formula. Rankings for small projects were determined almost entirely by the scores given for logical consistency and successful delivery, and barely at all by the Benefit: Cost Ratio (BCR); rankings for large projects were the opposite – completely unaffected by logical consistency and successful delivery. (If the criteria had been multiplied instead of added, it wouldn’t have been so bad.) The ultimate result was poor project rankings, leading to poor environmental outcomes.

Messing up the with-versus-without comparison. Back in PD237 I talked about how the benefits of a project should be measured as the difference in outcomes between a world where the project is implemented and a world where it isn’t ([V(P1) – V(P0)] or W). When you say it like that, it sounds like common sense, so it’s surprising how many systems for ranking projects don’t get this right. Some don’t include any sort of measure of the difference that a project would make. They may use measures representing the importance of the environmental assets, the seriousness of the environmental threats, or the likely level of cooperation from the community, but nothing about the difference in environmental values resulting from the project.

Some systems include a difference, but the wrong difference. I’ve seen a system where the project benefit was estimated as the difference between current asset condition and the predicted asset condition if nothing was done (current versus without). And another which used the difference between current asset condition and predicted asset condition with the project (current versus with). Both wrong.
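A small numerical sketch, with made-up condition scores, shows how different the three comparisons can be and why only the with-versus-without difference measures what the project actually achieves.

```python
# Hypothetical condition scores for one environmental asset (0-100 scale).
current = 60          # condition now
without_project = 40  # predicted condition in 20 years if nothing is done
with_project = 55     # predicted condition in 20 years if the project goes ahead

benefit = with_project - without_project        # with versus without: +15 (correct)
current_vs_without = current - without_project  # +20: overstates the project's effect
current_vs_with = with_project - current        # -5: makes the project look harmful

print(benefit, current_vs_without, current_vs_with)
```

Even though the asset still declines under the project, the project is worth 15 points of avoided loss; the two incorrect comparisons would report 20 and minus 5.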

Finally, I’ve seen a system which did include the correct with-versus-without difference, but still managed to mess it up by also including a couple of inappropriate variables: current asset condition, and the current-versus-without difference. In this situation, more information is not better – it will make the rankings worse.

Omitting key benefits variables. Because the benefits part of the equation is multiplicative, leaving out one or more of its variables is likely to introduce large inaccuracies. If you ignore, say, adoption, and projects vary widely in their levels of adoption, you will inevitably make poor decisions.
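As a made-up illustration, consider two projects that are identical except for expected adoption. Leave adoption out of a multiplicative benefit calculation and they appear equally attractive; include it and one delivers six times the benefit of the other.

```python
W = 30  # with-versus-without difference in value (the same for both projects)
adoption = {"Project A": 0.90, "Project B": 0.15}

benefit_with_adoption = {name: W * a for name, a in adoption.items()}  # 27.0 vs 4.5
benefit_omitting_adoption = {name: W for name in adoption}             # 30 vs 30, a tie

print(benefit_with_adoption)
print(benefit_omitting_adoption)
```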

Ignoring some or all of the costs. Almost all systems ignore maintenance costs. Most ignore compliance costs. Some ignore all costs. Some include costs but don’t divide by them. All mistakes.

Failing to discount future benefits and costs. Another very common mistake – a variation on the theme of ignoring costs.
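Getting this right is not hard. Here is a brief sketch: discount the streams of benefits and of all the costs (up-front works, ongoing maintenance, and compliance/administration) to present values, then divide. The discount rate, time horizon and cash flows are invented purely for illustration.

```python
rate = 0.05    # assumed real discount rate
horizon = 20   # evaluation period in years

def pv(stream):
    """Present value of a {year: amount} stream at the chosen discount rate."""
    return sum(amount / (1 + rate) ** t for t, amount in stream.items())

benefits = {t: 10.0 for t in range(5, horizon + 1)}  # benefits start in year 5

costs = {0: 40.0}                                                        # up-front works
costs.update({t: costs.get(t, 0) + 2.0 for t in range(1, horizon + 1)})  # maintenance
costs.update({t: costs.get(t, 0) + 0.5 for t in range(0, horizon + 1)})  # compliance/admin

bcr = pv(benefits) / pv(costs)
print(round(bcr, 2))  # about 1.24 with these invented numbers
```

Drop the maintenance and compliance costs from the calculation and the same project’s BCR roughly doubles (to about 2.2 here), so comparisons between projects with different cost profiles get badly distorted.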

Measuring activity instead of outcomes. If asked, pretty much everybody involved in ranking environmental projects would say that they want the resources they allocate to achieve the best environmental outcomes. So it’s frustrating to see how often projects are evaluated and ranked on the basis of activity rather than outcomes. For example, benefits are sometimes measured on the basis of the number of participants in a project. This ignores critical factors like the asset values, the effectiveness of the on-ground works, and the project risk. Sometimes this approach arises from a judgement that participation has benefits other than the direct achievement of outcomes. No doubt, this is true to some extent. In particular, participation by community members in a current project can build “social capital” that reduces the cost of achieving environmental outcomes in subsequent projects. In PD252 I recorded my judgement that measuring that particular benefit is probably not worth the trouble in most cases (at least for the purpose of ranking projects). The reasons are that it’s a somewhat complex thing to measure, and that those indirect benefits would usually not be large enough or different enough between projects to affect project rankings much. I’m making a judgement here, of course, but I think it is irrefutable that considering only activity/participation and failing to estimate direct benefits due to improved environmental outcomes is likely to compromise project rankings very seriously. But that does sometimes happen.

Negative scores. This is a really strange one that I don’t expect to see again, but I mention it because it was a catalyst for writing this series. I was once involved in a project ranking process where the organisation was scoring things using an ad hoc points system. Most variables were being scored on a five-point scale: 1 for the worst response through to 5 for the best. The designers of the process decided that they’d penalise projects that were rated “high” or “very high” for risk by extending the range of scores downwards: −5 (for very high risk) to +5 (for very low risk). They were using the dreaded weighted additive formula and, naturally enough, the weighting assigned to risk was relatively high, reflecting their view of its importance. This was in addition to risk having the widest range of scores. They didn’t realise that combining these approaches would greatly amplify the influence of risk, with the result that project rankings depended hugely on risk and not much on anything else. At the meeting, someone from the organisation commented that risk was dominating the ranking, but they couldn’t understand why. Others agreed. I explained what was going on and advised them that their system would have been more transparent and easier to control if they had left the range of scores the same for each variable and just varied the relative weights.
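In case the arithmetic behind that is not obvious, here is a sketch using hypothetical variable names and weights (the exact weights don’t matter; what matters is that risk had both a relatively high weight and the widest score range). In a weighted-additive formula, the most any variable can shift a project’s total is its weight multiplied by the width of its score range, so giving risk both the widest range and the biggest weight guaranteed that it would dominate.

```python
# Hypothetical criteria: most are scored 1-5, but risk was scored -5 to +5
# and also given the largest weight (weights here are illustrative).
score_ranges = {"asset value": (1, 5), "adoption": (1, 5),
                "feasibility": (1, 5), "risk": (-5, 5)}
weights = {"asset value": 0.2, "adoption": 0.2, "feasibility": 0.2, "risk": 0.4}

# The most each variable can shift a project's total score:
for name, (lo, hi) in score_ranges.items():
    print(f"{name}: up to {weights[name] * (hi - lo):.1f} points")
# risk: up to 4.0 points; every other variable: up to 0.8 points,
# so risk swamps everything else, which is exactly the symptom noticed at the meeting.
```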

That experience highlighted to me how very little some people who design ranking systems understand about what they are doing. This series is an attempt to provide an accessible and understandable resource so that if people want to do a good job of the ranking process, they can. In the next post I’ll provide a summary of the whole series.

Further reading

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming).