251 – Ranking environmental projects 17: Uncertainty
Episode 17 in this series on principles to follow when ranking environmental projects. It is about uncertainty, how to account for it, and what to do about it.
Uncertainty and knowledge gaps are unavoidable realities when evaluating and ranking environmental projects. The available information is almost always inadequate for confident decision making. Key information gaps often include: the cause-and-effect relationship between management actions and environmental outcomes; the likely behavioural responses of people to the project; and the environmental values resulting from the project – what is important or valuable about the environmental outcomes, and how important or valuable they are.
It has been argued to me that uncertainty about the data and objectives is generally so high that it is not worth worrying too much about the procedure used to prioritise projects. Any procedure will do. If that were really true, no analysis could help with decision making – we might as well just draw projects out of a hat.
In fact, while it’s true that uncertainty is usually high, it’s not true that the ranking procedure doesn’t matter, particularly when you consider the outcomes across a portfolio of projects. Even given uncertain data, the overall environmental benefits of a program can be improved substantially by a better decision process. Indeed, environmental benefits appear to be more sensitive to the decision process than to the uncertainty in the data. For example, I have found that there is almost no benefit in reducing data uncertainty if the improved data are used in a poor decision process (Pannell 2009). On the other hand, even if data are uncertain, there are worthwhile benefits to be had from improving the decision process.
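The intuition can be illustrated with a toy simulation (this is not the model from Pannell 2009, just a hypothetical sketch): generate projects with true benefit: cost ratios, observe them only with substantial noise, and compare a process that ranks on the noisy estimates against one that ignores the data and funds projects at random.

```python
import random

random.seed(1)

def simulate(n_projects=200, budget=20, noise_sd=1.0, trials=500):
    """Average true benefit captured by two decision processes under noisy data."""
    ranked_total = random_total = 0.0
    for _ in range(trials):
        # True BCRs, and the noisy estimates a decision maker would actually see
        true_bcr = [random.lognormvariate(0, 1) for _ in range(n_projects)]
        noisy_bcr = [b * random.lognormvariate(0, noise_sd) for b in true_bcr]
        # Process A: rank by the (noisy) BCR estimates, fund the top `budget`
        order = sorted(range(n_projects), key=lambda i: noisy_bcr[i], reverse=True)
        ranked_total += sum(true_bcr[i] for i in order[:budget])
        # Process B: ignore the data, fund `budget` projects at random
        random_total += sum(random.sample(true_bcr, budget))
    return ranked_total / trials, random_total / trials

ranked, rand = simulate()
print(f"Ranking on noisy data: {ranked:.1f}  Random selection: {rand:.1f}")
```

Even with noise as large as the underlying variation between projects, ranking on the noisy estimates captures far more true benefit than random selection – the decision process, not the data quality, does most of the work.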
This is certainly not to say that uncertainty should be ignored. Once the decision process is fixed up, uncertainty can make an important difference to the delivery of environmental benefits.
There are economic techniques to give negative weight to uncertainty when ranking projects. I’ve used them and I think they are great for research purposes. However, I don’t recommend them for practical project-ranking systems. They aren’t simple to do properly, so they add cost and potentially confusion.
Instead of representing uncertainty explicitly in the ranking equation, I suggest a simpler and more intuitive approach: rating the level of uncertainty for each project; and considering those ratings subjectively when ranking projects (along with information about the Benefit: Cost Ratio (BCR), and other relevant considerations).
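In practice, this approach amounts to little more than an extra column in the ranking table. A minimal sketch, with hypothetical projects and ratings:

```python
# Hypothetical project list: BCR estimates plus a simple uncertainty rating.
projects = [
    {"name": "Wetland restoration", "bcr": 3.2, "uncertainty": "high"},
    {"name": "Fencing remnant vegetation", "bcr": 2.1, "uncertainty": "low"},
    {"name": "Feral predator control", "bcr": 2.8, "uncertainty": "medium"},
]

# Rank by BCR, but display the uncertainty rating alongside, leaving the
# final judgement to decision makers rather than folding it into the score.
for p in sorted(projects, key=lambda p: p["bcr"], reverse=True):
    print(f'{p["name"]:28s} BCR {p["bcr"]:.1f}  uncertainty: {p["uncertainty"]}')
```

The point of the design is that uncertainty is made visible without being buried inside a single composite number, so decision makers can weigh a high-BCR, high-uncertainty project against a lower-BCR project they are confident in.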
Apart from its effect on project rankings, another aspect of uncertainty is the question of what, if anything, the organisation should do to reduce it. In my view, it is good for project managers to be explicit about the uncertainty they face, and what they plan to do about it (even if the plan is to do nothing). Simple and practical steps could be to: record significant knowledge gaps; identify the knowledge gaps that matter most through sensitivity analysis (Pannell, 1997); and have an explicit strategy for responding to key knowledge gaps as part of the project, potentially including new research or analysis.
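The second step – identifying which knowledge gaps matter most – can be done with simple one-at-a-time sensitivity analysis. The sketch below uses a deliberately simplified, hypothetical BCR formula and parameter ranges (not the formula from this series or the cited papers): vary each uncertain parameter across its plausible range while holding the others at best-guess values, and see which one moves the BCR most.

```python
# Illustrative one-at-a-time sensitivity analysis on a simplified BCR.
# The formula, parameters, and ranges are hypothetical, for illustration only.

def bcr(value, prob_success, adoption, cost):
    """Simplified benefit: cost ratio for a hypothetical project."""
    return value * prob_success * adoption / cost

base = {"value": 10.0, "prob_success": 0.6, "adoption": 0.5, "cost": 2.0}
base_bcr = bcr(**base)

# Plausible low/high values for each uncertain parameter.
ranges = {
    "value": (5.0, 15.0),
    "prob_success": (0.3, 0.9),
    "adoption": (0.2, 0.8),
    "cost": (1.5, 3.0),
}

# Vary one parameter at a time; a large spread in BCR flags a knowledge
# gap that matters and may be worth addressing within the project.
spread = {}
for param, (lo, hi) in ranges.items():
    vals = [bcr(**{**base, param: v}) for v in (lo, hi)]
    spread[param] = max(vals) - min(vals)

for param in sorted(spread, key=spread.get, reverse=True):
    print(f"{param:14s} BCR spread: {spread[param]:.2f}")
```

Parameters whose plausible range barely shifts the BCR can safely stay uncertain; the ones at the top of the list are where new research or analysis within the project would pay off.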
In practice, there is a tendency for environmental decision makers to ignore uncertainty when ranking projects, and to proceed on the basis of best-guess information, even if the best is really poor. In support of that approach, it is often argued that we should not allow lack of knowledge to hold up environmental action, because delays may result in damage that is costly or impossible to reverse. That’s reasonable up to a point, but in my view we are often too cavalier about proceeding with projects when we really have little knowledge of whether they are worthwhile. Proceeding may come at the expense of other projects in which we have much more confidence, even though they currently appear to have lower BCRs. It’s not just a question of proceeding with a project or not proceeding – it’s a question of which project to proceed with, considering the uncertainty, environmental benefits and costs for each project. When you realise this, the argument about not letting uncertainty stand in the way of action is rather diminished.
In some cases, a sensible strategy is to start with a detailed feasibility study or a pilot study, with the intention of learning information that will help with subsequent decision making about whether a full-scale project is worthwhile, and how a full-scale project can best be designed and implemented. A related idea is “active adaptive management”, which involves learning from experience in a directed and systematic way. Implementation efforts get under way, but they are done in a way which is focused on learning.
Particularly for larger projects, my strong view is that one of these approaches should be used. I believe that they have great potential to increase the environmental benefits that are generated. They imply that the initial ranking process should not produce decisions that are set in stone. Decisions may need to be altered once more information is collected. We should be prepared to abandon projects if it turns out that they are not as good as we initially thought, rather than throwing good money after bad.
As far as I’m aware, the sorts of strategies I’m suggesting here are almost never used in real-world environmental programs. Managers are rarely explicit about the uncertainties they face, there usually isn’t a plan for addressing uncertainty, projects are funded despite profound ignorance about crucial aspects of them, proper feasibility assessments are rarely done, active adaptive management is almost never used, and ineffective projects that have been started are almost never curtailed so that resources can be redirected to better ones. In these respects, the environment sector is dramatically different from the business world, where people seem to be much more concerned about whether their investments will actually achieve the desired outcomes. Perhaps the difference is partly because businesses are spending their own money and stand to be the direct beneficiaries if the investment is successful. Perhaps it’s partly about the nature of public policy and politics. Whatever the reason is, I think there is an enormous missed opportunity here to improve environmental outcomes, even without any increase in funding.
Pannell, D.J. (2009). The cost of errors in prioritising projects, INFFER Working Paper 0903, University of Western Australia.
Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research (forthcoming).