This is Episode 7 in this series on principles to follow when ranking environmental projects. It is about how to account for the risk that any given project may fail.
Environmental projects don’t always go smoothly. Various things can stop them from delivering their intended benefits. One is the failure to get enough people to change their behaviour in the desired ways, as discussed in PD240. In this post, I’ll discuss a number of additional risks that may affect projects. They need to be accounted for when ranking because projects vary greatly in how risky they are.
The word “risk” is used in so many different ways that it’s important to be clear about what I mean by it. The risks I’m talking about here are things that might stop the project from succeeding, not the risks to the environmental assets (the threats that are making, or might make, the environmental assets degrade). I’ll make some comments about those latter risks at the end of the post.
There are various types of project risks, potentially including:
- Technical risk (Rt): the probability that the project will fail to deliver outcomes for technical reasons. Management actions are implemented but they don’t work because something breaks, or newly planted vegetation dies, or there was a miscalculation when designing the actions, or there is some sort of natural event that makes the actions ineffective.
- Social/political risk (Rs): the probability that social or political factors will prevent project success. For example, a project might rely on another government agency to enforce existing environmental regulations, but that agency is not prepared to enforce them because of the likelihood of a political controversy. Or there might be community protest, or perhaps even legal action, to stop the project.
- Financial risk (Rf): the probability that essential funding from partner organisations, or long-term funding for maintenance of benefits, will not be forthcoming. The latter one is often neglected. Many projects require ongoing funding for physical maintenance, or for continuing education or enforcement, without which the benefits would be lost. Often the decision to provide this ongoing funding is made independently of the decision to fund an initial project, so it is risky from the perspective of the funders of the initial project.
- Management risk (Rm): if different projects will be managed by different organisations, then there are likely to be differences in the risk of failure related to management. These risks might include poor governance arrangements, poor relationships with partners, poor capacity of staff in the organisation, poor specification of milestones and timelines, or poor project leadership.
All four of these risks can be important and are worth accounting for.
Some of these risks relate to all-or-nothing outcomes (e.g. there either is successful legal action against the project or there isn’t), while others relate to continuous variables (e.g. maintenance funding might be deficient but not zero, resulting in some reduced level of ongoing benefits).
Representing risks for continuous variables is possible, but it requires fairly detailed information. Given that we are making educated guesses when we specify these risks, going to that level of detail is probably not warranted. What I suggest is to approximate each of the risks as the probability of a binary (all-or-nothing) variable turning out badly. To illustrate, rather than trying to specify probabilities for each possible level of maintenance funding, we would just specify the probability of maintenance funding being so low that most of the benefits would be lost. We would assume that there are two possible outcomes for each risk: it causes the project to fail, or the project is fully successful (or, at least, as successful as the other factors allow it to be).
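The binary approximation can be made concrete with a small sketch. Here the continuous view of the maintenance-funding risk is represented by a few funding scenarios, each with a probability and a fraction of benefits retained; the binary approximation replaces all of that with a single probability of failure. All numbers are illustrative assumptions, not from the post.

```python
# Full (continuous-ish) view of financial risk: probabilities of funding
# levels and the fraction of project benefits retained at each level.
funding_scenarios = [
    (0.70, 1.0),   # 70% chance of adequate funding -> full benefits
    (0.15, 0.6),   # 15% chance of partial funding -> 60% of benefits kept
    (0.15, 0.1),   # 15% chance of very low funding -> 10% of benefits kept
]
expected_fraction_full = sum(p * kept for p, kept in funding_scenarios)

# Binary approximation: lump the bad outcomes into a single judged
# probability that funding is so low that most benefits are lost.
Rf = 0.25
expected_fraction_binary = 1 - Rf

print(expected_fraction_full)    # 0.805
print(expected_fraction_binary)  # 0.75
```

The two answers differ somewhat, but given how roughly these risks can be specified in practice, the binary version is usually close enough, and it is far easier to elicit.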
Some risks might be correlated. For example, if there is social or political resistance to a project, it might reduce the probability of it getting long-term maintenance funding. In theory we should account for this correlation too, but again my view is that it is not worth going to that level of detail. Reasons include: the quality of information we have when specifying these risks is not high; the formula used for ranking projects would have to get pretty complicated; and it would be confusing to many people.
Given those simplifications, the expected benefits of a project are proportional to the probability of the project NOT failing (1 minus the risk), for each of the separate risks. Again, proportional means multiplying, so:
Expected benefit = [V(P1) – V(P0)] × A × (1 – Rt) × (1 – Rs) × (1 – Rf) × (1 – Rm)

or, in the alternative form of the benefit formula used earlier in the series:

Expected benefit = V(P’) × W × A × (1 – Rt) × (1 – Rs) × (1 – Rf) × (1 – Rm)
With these variables included, the benefits are now probability-weighted, so they are “expected” benefits, in the statistical sense of a weighted average, where the “weights” are the probabilities of success (1 minus the probability of failure).
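To show the formula in action, here is a minimal numerical sketch. The values of the benefit terms and the four risks are hypothetical assumptions chosen for illustration; the units of the value terms don’t matter for the arithmetic.

```python
# Hypothetical inputs for one project (all values are assumptions).
V_P1, V_P0 = 80.0, 50.0              # asset value with / without the project
A = 0.9                              # the benefit-related factor from earlier episodes
Rt, Rs, Rf, Rm = 0.1, 0.05, 0.2, 0.1 # technical, social, financial, management risks

# Expected benefit = raw benefit x probability of surviving each risk.
expected_benefit = (V_P1 - V_P0) * A * (1 - Rt) * (1 - Rs) * (1 - Rf) * (1 - Rm)

print(round(expected_benefit, 2))  # 16.62
```

Note how the four risks jointly scale the raw benefit of 30 × 0.9 = 27 down to about 16.6, which is what gets compared across projects when ranking.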
If you wanted to further simplify the approach, you could potentially combine all four of the risks into a single risk variable (R), representing the joint probability of the project failing for any of the four reasons:
Expected benefit = [V(P1) – V(P0)] × A × (1 – R)
That has the advantage of simplicity. Its disadvantage is that the individual risks tend to get a bit lost, and perhaps under-estimated, in the combined risk variable. In my view, it’s worth taking the time to think separately about each of the risks, and if you do, you may as well have a variable for each.
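If you do combine them, the single variable R should be the joint probability of failing for any of the four reasons, which (assuming the risks are independent, as the post’s simplification implies) is one minus the product of the four survival probabilities, not the sum of the four risks. A sketch with the same illustrative numbers as above:

```python
# Illustrative individual risks (assumed values).
Rt, Rs, Rf, Rm = 0.1, 0.05, 0.2, 0.1

# Combined risk of failing for any of the four (independent) reasons.
R = 1 - (1 - Rt) * (1 - Rs) * (1 - Rf) * (1 - Rm)

print(round(R, 4))     # 0.3844 -- noticeably less than the naive sum, 0.45
```

Simply adding the individual risks would overstate the combined risk, because it double-counts the cases where more than one thing goes wrong.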
Some organisations like to break risks down into likelihoods and consequences (as suggested in the ISO 31000 Risk Management Standard). Likelihoods represent the probability that a bad thing will happen (often scored on a scale like this: almost certain, likely, possible, unlikely or very unlikely), and consequence means how bad the bad thing would be if it did happen (e.g., scored as insignificant, minor, moderate, major or catastrophic). Depending on the combination of these two scores, the overall risk is assessed as minimal, low, medium, high or extreme.
This is a rather drawn-out way of getting a risk score (compared with just stating the probability of project failure), and I don’t think it’s necessary, but it is logical and may help people to think clearly about the issues.
If you take that approach, a key question is, how should the overall risk score (minimal through to extreme) be used in the project ranking process? My recommendation is that you convert it into a probability of project failure. For example, you might specify that minimal risk corresponds to a 0.05 probability of project failure, low is 0.1, medium is 0.2, high is 0.5 and extreme is 0.8.
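That conversion is easy to operationalise as a lookup. The mapping below uses the example probabilities suggested in the post; the function name is my own invention.

```python
# The post's suggested conversion from an ISO-31000-style overall risk
# rating to a probability of project failure.
failure_prob = {
    "minimal": 0.05,
    "low": 0.1,
    "medium": 0.2,
    "high": 0.5,
    "extreme": 0.8,
}

def success_weight(rating):
    """Return (1 - R), the multiplier to apply in the benefit formula."""
    return 1 - failure_prob[rating]

print(success_weight("high"))  # 0.5
```

The exact probabilities assigned to each rating are a judgment call for each organisation; what matters is that the rating ends up as a probability that multiplies the benefits.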
Having done that, you should use the probability in the equations I’ve given above. What you definitely should not do is give the risk number a weighting and add it onto (or subtract it from) the rest of the equation, but I’ve seen that done! Doing that implies that the losses from a poor outcome for a tiny project are just the same as for an enormous project, which is obviously wrong.
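The flaw in the additive approach is easy to see numerically. In this sketch (all numbers are illustrative assumptions), the correct multiplicative treatment scales the expected loss with project size, while subtracting a weighted risk score deducts the same absolute amount from a tiny project and an enormous one.

```python
# Contrast the correct multiplicative treatment of risk with the flawed
# additive one. Numbers are illustrative assumptions.
R = 0.5               # probability of project failure
risk_penalty = 10.0   # arbitrary weighted risk score used in the flawed approach

for raw_benefit in (20.0, 2000.0):
    correct = raw_benefit * (1 - R)       # expected loss scales with project size
    flawed = raw_benefit - risk_penalty   # same absolute deduction for any size
    print(raw_benefit, correct, flawed)
```

For the tiny project the subtraction happens to look sensible, but for the large project it deducts only 10 units when the expected loss from a 50% failure risk is 1000 units.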
Finally, I want to return to a different usage of the word “risk”, to mean a threat to the environmental asset. Environmental organisations sometimes conduct “risk assessments” in which they try to quantify the likely future extent of degradation from particular causes. In this series, we already dealt with that aspect of risk in PD237; it represents the difference between current environmental condition and future condition without the proposed project ((1) – (3) in Figure 1).
A concern with this sort of risk assessment is that it may distract attention away from the correct measure of project benefits. Having done a “risk assessment” and come up with estimates of (1) – (3), people seem to find it difficult not to include them in the ranking formula. However, the correct measure of project benefits is (2) – (3), and if you include (2) – (3), also including (1) – (3) can only make the rankings worse. The point is that this type of “risk assessment” only provides the “without” half of the information you need to estimate potential project benefits. You also need to know what would happen “with” the project.
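To make the distinction concrete, here is a minimal worked example with assumed condition scores. The "risk assessment" quantity (1) – (3) can be large even though it says nothing about what the project itself would achieve, which is (2) – (3).

```python
# Assumed condition scores, following the post's numbering.
current = 100          # (1) current condition
with_project = 80      # (2) future condition with the project
without_project = 50   # (3) future condition without the project

project_benefit = with_project - without_project   # (2) - (3): correct measure
threat_assessment = current - without_project      # (1) - (3): degradation risk

print(project_benefit, threat_assessment)  # 30 50
```

Here the threat assessment (50) overstates what the project can actually deliver (30), because the project only partially counters the degradation. Ranking on (1) – (3), or mixing it into the formula, would favour badly threatened assets regardless of whether the proposed projects could help them.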