Monthly Archives: June 2013

243 – Ranking environmental projects 9: Multiple benefits

Episode 9 in this series on principles to follow when ranking environmental projects. It is about how to account for multiple types of benefits from the same project.

Previous posts in this series have assumed that there is only one type of benefit being generated, or that multiple benefits generated by a project have already been converted into a common currency, such as dollars, and added up. What if we have multiple benefits and we want to account for them individually?

Suppose that a single project will generate three types of benefits, due to improvements to three different, but connected, environmental assets (e.g. a threatened species, a river that is suffering from reduced water quality, and an area of riparian vegetation that is attractive and provides habitat). The three assets have independent values V1, V2 and V3.

All of the variables we’ve talked about in this series (the effectiveness of works, adoption, time lags and the various risks) could potentially have different values for each of the environmental benefits generated by this one project. For example, if the project is implemented, the risk of failing to improve water quality might be higher than the risk of failing to improve the condition of the riparian vegetation.

If the differences were significant enough, we might decide it is worth estimating three different values for each of the variables: W, A, L and the various Rs.

If the Vs were each measured in dollars, the expected benefits for the project would simply be the sum of the formulas for each benefit, as follows. [Three lots of the formula from PD242.]

Expected benefit = V1(P’) × W1 × A1 × (1 – R1) / (1 + r)^L1 +

V2(P’) × W2 × A2 × (1 – R2) / (1 + r)^L2 +

V3(P’) × W3 × A3 × (1 – R3) / (1 + r)^L3

To keep the formula simple, I’ve assumed that the relationship between adoption and benefits is linear, and I’ve combined the various risks into one overall probability of failure. Also, from now on I’m mostly going to use the V(P’) × W version of the formula, rather than the [V(P1) – V(P0)] version, which is equivalent. This has advantages in shaping the thinking, as outlined in PD239, but also advantages for simplifying the formula where there are multiple benefits, as you’ll see later.
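As a rough sketch of how this calculation could be implemented, here is a short Python example. All of the parameter values are hypothetical; they are only there to show the structure of the formula.

```python
# Illustrative only: expected benefit of one project that generates three
# types of benefits, each with its own value, effectiveness (W), adoption (A),
# risk of failure (R) and time lag (L). All numbers are hypothetical.

r = 0.05  # real discount rate

benefits = [
    # (V(P'),    W,   A,   R,   L)
    (1_000_000, 0.6, 0.8, 0.2, 10),   # threatened species
    (  500_000, 0.4, 0.7, 0.4, 15),   # river water quality
    (  200_000, 0.5, 0.9, 0.1,  8),   # riparian vegetation
]

expected_benefit = sum(
    V * W * A * (1 - R) / (1 + r) ** L
    for (V, W, A, R, L) in benefits
)

print(f"Expected benefit: ${expected_benefit:,.0f}")
```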

If the values are not measured in money terms, you’ll need to provide weights (z1, z2 and z3) to indicate the relative importance of the different benefits. The formula becomes:

Expected benefit = z1 × V1(P’) × W1 × A1 × (1 – R1) / (1 + r)^L1 +

z2 × V2(P’) × W2 × A2 × (1 – R2) / (1 + r)^L2 +

z3 × V3(P’) × W3 × A3 × (1 – R3) / (1 + r)^L3

This formula is getting pretty big and ugly. It also implies the need for a lot of information: the full equation for each type of benefit. Based on my experience, I’d say that most managers of real-world programs would not be prepared to go to this much detail. In reality, what commonly happens is that some of the variables are assumed to be the same for the different types of benefits. Often, I think that’s a reasonable approximation of reality, or at least one that’s not so bad that it’s worth fighting against. If it seems reasonable to assume that W, A, R and L are the same for all three benefit types, then we can simplify the equation for expected benefits, as follows.

Expected benefit = [z1 × V1(P’) + z2 × V2(P’) + z3 × V3(P’)] × W × A × (1 – R) / (1 + r)^L

This is just the same as the formula in PD242, but with V(P’) replaced by the large term in square brackets. Rather than being a single value, it’s now the weighted sum of several values.
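A similarly hedged sketch of the simplified version, with made-up weights and non-dollar value scores, and with W, A, R and L shared across the three benefit types:

```python
# Illustrative only: simplified version where W, A, R and L are assumed to be
# the same for all three benefit types, and non-dollar value scores are
# combined using weights z1, z2, z3 (all numbers hypothetical).

r = 0.05
W, A, R, L = 0.6, 0.8, 0.2, 10

weights = [0.5, 0.3, 0.2]      # z1, z2, z3
value_scores = [80, 60, 40]    # V1(P'), V2(P'), V3(P') on a 0-100 scale

weighted_value = sum(z * v for z, v in zip(weights, value_scores))
expected_benefit = weighted_value * W * A * (1 - R) / (1 + r) ** L
print(round(expected_benefit, 2))
```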

In previous posts in this series, I’ve been critical of the common practice of weighting and adding up variables in certain cases. However, this formula shows that it is not always a mistake. If we don’t have dollar values, it’s reasonable to weight and add the separate values to get an indicator of the total value at stake, prior to adjusting it down for W, A, R and L, as shown. The big mistake that is commonly made is to also weight and add W, A, R and L into the equation, rather than including them in the way shown above. Weighting and adding can be appropriate, but needs to be applied in a way that makes logical sense, rather than indiscriminately to all variables.

If we were weighting the [V(P1) – V(P0)] version of the formula, it would look like this:

Expected benefit = {z1 × [V1(P1) – V1(P0)] + z2 × [V2(P1) – V2(P0)] + z3 × [V3(P1) – V3(P0)]} × A × (1 – R) / (1 + r)^L

You can see that the other version is more compact.

Choices about the weights need to consider the way that the different benefits are scored. If the values are in dollars, all the weights become 1.0, so you end up just adding up the values.

If the values are not in money terms, the weights reflect the relative importance of the different benefits (a very subjective judgement), but they also need to account for the ranges over which the values are scored. For example, if value scores range from zero to 1.0 for one benefit but zero to 100 for another, the second one should probably have a much smaller weight to avoid it dominating the rankings. If the two benefits were equally important, the weight for the first one would need to be 100 times larger than the weight for the second one.
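One simple way to handle this (a sketch of one possible convention, not the only defensible one) is to divide an importance weight by the range over which each benefit is scored:

```python
# Illustrative only: two equally important benefits scored on different ranges.
# Benefit 1 is scored 0-1 and benefit 2 is scored 0-100, so benefit 1 needs a
# weight 100 times larger to stop benefit 2 dominating the rankings.

score_range = {"benefit_1": 1.0, "benefit_2": 100.0}
importance  = {"benefit_1": 1.0, "benefit_2": 1.0}   # equally important

# One simple convention: weight = importance / range of the score.
weights = {k: importance[k] / score_range[k] for k in score_range}
print(weights)   # {'benefit_1': 1.0, 'benefit_2': 0.01} -> a 100:1 ratio
```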

Further reading

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research 40(2), 126-133. Journal web page ♦ Pre-publication version at IDEAS

242 – Ranking environmental projects 8: Time

Episode 8 in this series on principles to follow when ranking environmental projects. It is about how to account for time when estimating project benefits.  

Different projects involve different time lags until benefits are generated. There are at least four potential causes of lags:

  1. Some projects take a significant amount of time to implement (implementation lags). For example, if a project relies on new research being conducted, it could take several years before results are available. Typically, implementation lags for different types of project range from a year to a decade.
  2. It may take a while for the physical actions implemented in a project to take effect and to start generating benefits (effect lags). For example, if the project involves planting trees, they take a while to grow. Realistic effect lags in environmental projects range from zero to decades.
  3. The project may be addressing a threat which has not occurred yet but is expected to occur in future (threat lag). For example, between 2001 and 2008 the Australian Government invested in many projects that were intended to prevent the occurrence of dryland salinity in rural areas. In some of those areas, salinity was not predicted to occur until several decades in the future. In other cases, the threat lag was zero – the problem was already occurring.
  4. If a project requires other people to change their behaviour or management, it may take a while for most people to change (adoption lag). Realistic adoption lags for substantial changes range from around five years (in exceptional cases) to several decades.

Given this variety of lag types, and the range of lag lengths within each lag type, projects vary widely in the overall time lag until benefits are generated. This makes it an important factor to consider when ranking projects but it’s one that is commonly ignored.

Looking ahead from the time when an environmental project is being considered (time zero), a typical pattern of benefits over time (combining all the types of lags) is shown in Figure 7. This project generates half of its benefits by year 18 and 90% by year 25.

Figure 7. A typical pattern of project benefits over time, combining all the types of lags.

The question is, how should differences in the overall time lags for different projects influence their rankings? Here’s a simple example to illustrate how to think about it. (Inflation has been factored out of all the numbers in this example.)

Suppose there are two projects to rank. Both of them would involve costs of $1 million in 2014 and both generate benefits worth $2 million in the future. In one project, the $2m benefits would be delivered in 2018 while in the other, the benefits would occur in 2033. The $1m in 2014 has to be borrowed (at an interest rate of 5%) and will be repaid in full, including interest, when the benefits are generated. Thus, for the quick project, in 2018 we face a benefit (the $2m) and a cost equal to the repayment. Similarly, for the slow project, in 2033 there is a benefit of $2m and a different repayment cost. How do the costs and benefits stack up in each case?

The repayment cost would be calculated as follows:

Future repayment = Present cost × (1 + r)^L

where Present cost is the amount borrowed up front, r is the interest rate (assumed to be 5%, after taking out inflation) and L is the time lag until the loan has to be repaid.

Compounding the interest costs over four years, the total repayment in 2018 would be $1.22 m. This is less than the $2 m benefit, so the quick project has a positive net benefit of $0.78 m in 2018. On the other hand, by 2033, the repayment would be $2.53 m, so the cost of the slow project would be greater than the benefit generated. Clearly, the quicker project would be preferred.
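These numbers can be checked with a few lines of Python; the calculation below simply applies the repayment formula to the example's figures.

```python
# Worked example: $1m borrowed in 2014 at a 5% real interest rate,
# repaid in full when the benefits arrive.

principal = 1_000_000
r = 0.05

repayment_2018 = principal * (1 + r) ** (2018 - 2014)   # ~ $1.22m
repayment_2033 = principal * (1 + r) ** (2033 - 2014)   # ~ $2.53m

benefit = 2_000_000
print(f"Quick project net benefit in 2018: ${benefit - repayment_2018:,.0f}")
print(f"Slow project net benefit in 2033:  ${benefit - repayment_2033:,.0f}")
```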

The same logic applies even if the money used to pay for the project doesn’t have to be borrowed. The money used has an ‘opportunity cost’ (you miss out on the benefits of investing it in some other way) and that cost compounds over time in the same way as the interest on a loan.

The way that economists usually apply this thinking is to use the present as the reference date. Instead of compounding present costs into the future, they discount future benefits (and costs) back to the present. It amounts to exactly the same thing when it comes to ranking projects, and it has the advantage that it is easy to compare discounted values to the current value of money. The formula for present values is just the reverse of the repayment formula:

Present value = Future value / (1 + r)^L

Some people object to the idea of discounting future benefits, arguing that it is, in some sense, unfair or unreasonable. What they don’t realise is that it’s not really about the benefits – it’s about the costs. Interest costs (or other opportunity costs) are real costs and shouldn’t be ignored, but that’s what you would be doing if you refused to discount benefits.

It’s true that discounting benefits and costs in the distant future (e.g. 100 years) is more complicated, as issues of high uncertainty and inter-generational equity become important (e.g. see PD34), but for shorter time frames (up to say 30 years) the logic behind discounting is robust (PD224). A 5% real discount rate (with inflation factored out) is a pretty good general-purpose discount rate that’s suitable for many environmental projects.

In principle, if we knew the year-by-year pattern of benefits (like in Figure 7), we would discount the benefit for each year and add them up to get the total present value of benefits. That would be the measure of benefits we used in the top line of the Benefit: Cost Ratio. If you have the required information, this is simple to do. Of course, getting the required information might not be simple.
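Where the year-by-year benefit stream is known, the discounting and summing can be sketched like this; the benefit numbers below are invented purely for illustration.

```python
# Illustrative only: present value of a year-by-year benefit stream shaped
# roughly like Figure 7 (slow start, ramp up, plateau). Numbers are made up.

r = 0.05
# benefits[t] is the benefit received in year t (year 0 = now)
benefits = [0] * 5 + [t * 10_000 for t in range(1, 21)] + [200_000] * 10

present_value = sum(b / (1 + r) ** t for t, b in enumerate(benefits))
print(f"Present value of benefits: ${present_value:,.0f}")
```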

In practice, there is usually a lot of uncertainty about how the benefits will play out over time. Recognising that uncertainty, it is usually not worth being too precise about the shape of the curve. A highly simplified curve, like the one in Figure 8, might be sufficient. All you need to know to specify this curve is the peak level of benefits (corresponding to the plateau in Figure 7), and the year when it will be achieved (corresponding to the year in Figure 7 when most of the benefits would be achieved).

Figure 8. A simplified benefit curve, specified by the peak level of benefits and the year in which that peak is reached.

This benefit curve is based on another convenient assumption: that the benefits, once generated, will last forever. This is not too unreasonable if allowance is made for long-term maintenance funding (see PD241 on project risks and a later post on project costs).

The simplified benefit curve has an advantage when it comes to calculating the present value of benefits. Rather than having to discount the benefits separately for each year (which is required if using Figure 7), we can just discount one number: the change in environmental values resulting from the project.

Benefit = [V(P1) – V(P0)] / (1 + r)^L

This works because the overall value of the environmental asset (V(P)) consists of the discounted sum of its future benefits. So, looking at Figure 8, the change in value at year 18 consists of the discounted sum of benefits in all subsequent years. To convert that into a present value, we just need to discount them for another 18 years, which is what the above equation does if we set L = 18.
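Here is a quick numerical check of that equivalence, assuming (purely for illustration) that the project adds a constant annual benefit forever from year 19 onwards.

```python
# Numerical check with invented numbers: if the project adds $100,000 per year
# from year 19 onwards, the value of that change viewed from year 18 is the
# discounted sum of all subsequent benefits (roughly a/r for a perpetuity).
# Discounting that single number back 18 years gives the same present value
# as discounting each year's benefit separately.

r = 0.05
a = 100_000      # extra annual benefit generated by the project
L = 18
horizon = 2_000  # long enough to approximate "forever"

# Year-by-year discounting of each benefit from year L+1 onwards
pv_year_by_year = sum(a / (1 + r) ** t for t in range(L + 1, horizon))

# Value change at year L (perpetuity value), discounted back L years
value_change_at_L = a / r
pv_shortcut = value_change_at_L / (1 + r) ** L

print(round(pv_year_by_year), round(pv_shortcut))   # both ~ 831,000
```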

As discussed in previous posts, we also need to allow for adoption/compliance and project risks, so the equation for expected benefit is:

Expected benefit = [V(P1) – V(P0)] × A × (1 – R) / (1 + r)^L

or

Expected benefit = V(P’) × W × A × (1 – R) / (1 + r)^L

[I have combined the various project risks into one variable, R, to stop the equation getting too ugly. See PD241.]

The time lag until benefits, L (= 18 in Figure 8), links back to the earlier post about measuring benefits as the difference in environmental value with and without the project (PD237). In that post I noted that,

A practical simplification is to estimate the environmental benefits based on the difference in the asset value with and without the project in a particular future year. For example, we might choose to focus on 25 years in the future, and estimate values at that date with and without the project.

The selection of L tells us which particular future year to use for this with-vs-without comparison. So, for the example in Figure 8, the difference in values with and without the project would be estimated for year 18.

In PD239 I discussed several possible ways to estimate and represent environmental values, some of which didn’t involve expressing values in dollars (or Euros or Pounds or whatever). It might seem that discounting is not relevant to non-dollar values, but it is. Remember that discounting is used to account for the fact that an up-front cost grows over time due to compounded interest costs (or other opportunity costs), so it is applicable to any logical and consistent quantitative method for expressing future benefits. When comparing future benefits that occur at different times, you need to account for interest accumulating on up-front costs, even if the future benefits are not expressed in dollars. So you need to discount.

Further reading

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research 40(2), 126-133. Journal web page ♦ Pre-publication version at IDEAS

241 – Ranking environmental projects 7: Project risks

Episode 7 in this series on principles to follow when ranking environmental projects. It is about how to account for the risks that any given project may fail. 

Environmental projects don’t always go smoothly. There are various things that can stop them from delivering their intended benefits. One is the failure of enough people to change their behaviour in the desired ways, as discussed in PD240. In this post, I’ll discuss a number of additional risks that may affect projects. They need to be accounted for when ranking because projects vary greatly in how risky they are.

The word “risk” is used in so many different ways that it’s important to be clear about what I mean by it. The risks I’m talking about here are things that might stop the project from succeeding, not the risks to the environmental assets (the threats that are making, or might make, the environmental assets degrade). I’ll make some comments about those latter risks at the end of the post.

There are various types of project risks, potentially including:

  • Technical risk (Rt): the probability that the project will fail to deliver outcomes for technical reasons. Management actions are implemented but they don’t work because something breaks, or newly planted vegetation dies, or there was a miscalculation when designing the actions, or there is some sort of natural event that makes the actions ineffective.
  • Social/political risk (Rs): the probability that social or political factors will prevent project success. For example, a project might rely on another government agency to enforce existing environmental regulations, but that agency is not prepared to enforce them because of the likelihood of a political controversy. Or there might be community protest, or perhaps even legal action, to stop the project.
  • Financial risk (Rf): the probability that essential funding from partner organisations, or long-term funding for maintenance of benefits, will not be forthcoming. The latter one is often neglected. Many projects require ongoing funding for physical maintenance, or for continuing education or enforcement, without which the benefits would be lost. Often the decision to provide this ongoing funding is made independently of the decision to fund an initial project, so it is risky from the perspective of the funders of the initial project.
  • Management risk (Rm): if different projects will be managed by different organisations, then there are likely to be differences in the risk of failure related to management. These risks might include poor governance arrangements, poor relationships with partners, poor capacity of staff in the organisation, poor specification of milestones and timelines, or poor project leadership.

All four of these risks can be important and are worth accounting for.

Some of these risks relate to all-or-nothing outcomes (e.g. there either is successful legal action against the project or there isn’t), while others relate to continuous variables (e.g. maintenance funding might be deficient but not zero, resulting in some reduced level of ongoing benefits).

Representing risks for continuous variables is possible, but it requires fairly detailed information. Given that we are making educated guesses when we specify these risks, going to that level of detail is probably not warranted. What I suggest is to approximate each of the risks as the probability of a binary (all-or-nothing) variable turning out badly. To illustrate, rather than trying to specify probabilities for each possible level of maintenance funding, we would just specify the probability of maintenance funding being so low that most of the benefits would be lost. We would assume that there are two possible outcomes for each risk: it causes the project to fail, or the project is fully successful (or, at least, as successful as the other factors allow it to be).

Some risks might be correlated. For example, if there is social or political resistance to a project, it might reduce the probability of it getting long-term maintenance funding. In theory we should account for this correlation too, but again my view is that it is not worth going to that level of detail. Reasons include that the quality of information we have when specifying these risks is not high, that the formula used for ranking projects would have to get pretty complicated, and that it would be confusing to many people.

Given those simplifications, the expected benefits of a project are proportional to the probability of the project NOT failing (1 minus the risk), for each of the separate risks. Again, proportional means multiplying, so:

Expected benefit = [V(P1) – V(P0)] × A × (1 – Rt) × (1 – Rs) × (1 – Rf) × (1 – Rm)

or

Expected benefit = V(P’) × W × A × (1 – Rt) × (1 – Rs) × (1 – Rf) × (1 – Rm)

With these variables included, the benefits are now probability-weighted, so they are “expected” benefits, in the statistical sense of a weighted average, where the “weights” are the probabilities of success (1 minus the probability of failure).

If you wanted to further simplify the approach, you could potentially combine all four of the risks into a single risk variable (R) representing the joint probability of the project failing for any of the four reasons.

Expected benefit = [V(P1) – V(P0)] × A × (1 – R)

That has the advantage of simplicity. Its disadvantage is that the individual risks tend to get a bit lost, and perhaps under-estimated, in the combined risk variable. In my view, it’s worth taking the time to think separately about each of the risks, and if you do, you may as well have a variable for each.
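If you do want a single combined risk, R is the probability that at least one of the four failure events occurs. Treating the risks as independent (consistent with ignoring correlations, as discussed above), a minimal sketch with hypothetical probabilities:

```python
# Illustrative only: combining four separate project risks into one overall
# probability of failure, R. The project succeeds only if none of the four
# failure events occurs (the risks are treated as independent).

Rt, Rs, Rf, Rm = 0.10, 0.05, 0.20, 0.10   # hypothetical risk probabilities

p_success = (1 - Rt) * (1 - Rs) * (1 - Rf) * (1 - Rm)
R = 1 - p_success

print(f"Combined probability of failure R = {R:.3f}")   # ~0.384
```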

Some organisations like to break risks down into likelihoods and consequences (as suggested in the ISO 31000 Risk Management Standard). Likelihoods represent the probability that a bad thing will happen (often scored on a scale like this: almost certain, likely, possible, unlikely or very unlikely), and consequence means how bad the bad thing would be if it did happen (e.g., scored as insignificant, minor, moderate, major or catastrophic). Depending on the combination of these two scores, the overall risk is assessed as minimal, low, medium, high or extreme.

This is a rather drawn-out way of getting a risk score (compared with just stating the probability of project failure), and I don’t think it’s necessary, but it is logical and may help people to think clearly about the issues.

If you take that approach, a key question is, how should the overall risk score (minimal through to extreme) be used in the project ranking process? My recommendation is that you convert it into a probability of project failure. For example, you might specify that minimal risk corresponds to a 0.05 probability of project failure, low is 0.1, medium is 0.2, high is 0.5 and extreme is 0.8.

Having done that, you should use the probability in the equations I’ve given above. What you definitely should not do is give the risk number a weighting and add it onto (or subtract it from) the rest of the equation, but I’ve seen that done! Doing that implies that the losses from a poor outcome for a tiny project are just the same as for an enormous project, which is obviously wrong.
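As a sketch of how that conversion might be wired into the calculation, the rating-to-probability mapping below simply uses the example numbers suggested above, and the resulting probability is applied multiplicatively rather than weighted and added.

```python
# Illustrative only: converting a qualitative overall risk rating into a
# probability of project failure, then applying it multiplicatively.

failure_probability = {
    "minimal": 0.05,
    "low":     0.10,
    "medium":  0.20,
    "high":    0.50,
    "extreme": 0.80,
}

def expected_benefit(potential_benefit, adoption, risk_rating):
    R = failure_probability[risk_rating]
    return potential_benefit * adoption * (1 - R)

print(expected_benefit(1_000_000, 0.8, "high"))   # 400000.0
```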

Finally, I want to return to a different usage of the word “risk”, to mean a threat to the environmental asset. Environmental organisations sometimes conduct “risk assessments” in which they try to quantify the likely future extent of degradation from particular causes. In this series, we already dealt with that aspect of risk in PD237; it represents the difference between current environmental condition and future condition without the proposed project ((1) – (3) in Figure 1).

A concern with this sort of risk assessment is that it may distract attention away from the correct measure of project benefits. Having done a “risk assessment” and come up with estimates of (1) – (3), people seem to find it difficult not to include them in the ranking formula. However, the correct measure of project benefits is (2) – (3), and if you include (2) – (3), also including (1) – (3) can only make the rankings worse. The point is that this type of “risk assessment” only provides the “without” half of the information you need to estimate potential project benefits. You also need to know what would happen “with” the project.

Further reading

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research 40(2), 126-133. Journal web page ♦ Pre-publication version at IDEAS

240 – Ranking environmental projects 6: Adoption and compliance

Episode 6 in this series on principles to follow when ranking environmental projects. A factor that influences the level of benefits from many environmental projects is the extent to which people cooperate with the project by adopting new behaviours or practices. 

In PDs 237, 238 and 239 I talked about estimating the benefits of environmental projects, as part of the process of ranking projects. To keep things simple, I focused on the predicted environmental changes and their values, but there are other benefit-related factors that we need to account for too. The first of these is human behaviour.

Often, the success of a project depends on the behaviour of certain people. For example, the aim of the project might be to reduce eutrophication in an urban river by having people reduce their use of fertilizers in home gardens, or to reduce air pollution by having factories install systems to remove pollutants from chimney emissions.

The issue is that, typically, not everybody cooperates with these sorts of projects. The degree of compliance varies from project to project, and this needs to be accounted for when we rank projects. Otherwise we risk giving funds to projects that have great potential but little benefit in practice.

Later on I’ll discuss the estimation of adoption/compliance for particular projects. First I want to talk about how this information should be included in the project-ranking process.

To start with, define A as the level of adoption/compliance as a proportion of the level needed to achieve the project’s goal. If A = 0.5, that means that compliance was only half the level we would have needed.

Usually, if A is less than 1.0, it doesn’t mean the project generates no benefits. There is some relationship between A and the benefits generated. Figure 6 shows one possible example, where proportional benefits [f(A)] increase slowly at low levels of adoption, then rapidly for a while, before flattening off again at high adoption. Other shapes are possible, but whatever the shape is, we know these important facts about it: it must range from zero (no adoption, so no project benefits) up to 1.0 (full adoption, so full project benefits). This follows from the fact that we define f(A) as the proportion of target project benefits achieved.

Figure 6. An example relationship between adoption, A, and the proportion of potential project benefits generated, f(A).

This makes it obvious how f(A) should be included in the formula we use for ranking projects: it should be multiplied by the potential benefits.

Benefit = [V(P1) – V(P0)] × f(A)

The terms in square brackets represent the difference in values between P1 (physical condition with the project) and P0 (physical condition without the project), assuming that there is full compliance, and this is scaled down by f(A) to reflect the effect of less-than-full compliance. In other words, [V(P1) – V(P0)] represents potential benefits, and we scale that down by f(A) to get actual benefits – actual in the sense of accounting for lower adoption. (Equivalently, using the approach outlined in PD239, Benefit = V(P’) × W × f(A).)

[Note that if a project does not require anybody to change their behaviour, you would set f(A) = 1.]
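Here is a minimal sketch of this scaling in Python. The S-shaped curve used for f(A) is just one plausible shape (a "smoothstep" polynomial, chosen for illustration); the only properties the argument relies on are f(0) = 0 and f(1) = 1.

```python
# Illustrative only: scaling potential benefits by f(A). The S-shaped curve
# below is one possible shape; all that matters is f(0) = 0 and f(1) = 1.

def f(A):
    """Proportion of potential project benefits achieved at adoption level A (0-1)."""
    return 3 * A**2 - 2 * A**3   # smoothstep: 0 at A=0, 1 at A=1

V_with, V_without = 1_000_000, 400_000   # V(P1), V(P0) - hypothetical values
A = 0.5                                  # half the required adoption

benefit = (V_with - V_without) * f(A)
print(benefit)   # 300000.0 - here half adoption yields half the potential benefit
```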

This formula demonstrates an important principle for the ranking formula: if the benefits are proportional to a variable (as they are for f(A)), then that variable must be multiplied by the rest of the equation for benefits. Only that way can the formula correctly represent the reality that, if the variable is zero, the benefits must be zero, and if the variable is at its maximum value, so too are the benefits. As a way of testing whether this is relevant, ask this question: if the variable was zero, would the overall benefits be zero? If the answer is yes, the variable should probably be multiplied.

Unfortunately, a common way that people combine variables like these in the ranking formula is to give them weights (meant to reflect their relative importance) and add them up, something like this:

Benefit = z1 × [V(P1) – V(P0)] + z2 × f(A)

where z1 and z2 are the weights. This is a bad mistake. With this formula, it is impossible to specify any set of weights that will make it represent the reality that the benefits are proportional to f(A). Experiments I’ve done with this formula show that it can result in wildly inaccurate project rankings, leading to a big loss of environmental values. Switching from this bad formula to the correct one can be like doubling the program budget (in terms of the extra benefits generated) (see PD158).
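A small made-up example shows how badly this can go wrong. With the numbers below (entirely hypothetical), the multiplicative formula correctly ranks project Y first, while the weight-and-add formula ranks project X first.

```python
# Illustrative only: comparing the multiplicative formula with the flawed
# weight-and-add formula for two hypothetical projects. The additive version
# reverses the ranking even though project Y delivers more actual benefit.

projects = {
    # name: (potential benefit score 0-100, f(A))
    "X": (100, 0.2),
    "Y": (40, 0.9),
}

z1, z2 = 0.01, 0.5   # arbitrary weights for the additive version

for name, (value, fA) in projects.items():
    multiplicative = value * fA
    additive = z1 * value + z2 * fA
    print(name, round(multiplicative, 1), round(additive, 2))

# Multiplicative: X = 20,  Y = 36   -> Y ranks first (correct)
# Additive:       X = 1.1, Y = 0.85 -> X ranks first (wrong)
```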

On the other hand, a simplification that is probably reasonable is to approximate f(A) by a straight line. In practice, we usually have too little information about the actual shape of f(A) in specific cases to be able to argue that its shape should be non-linear, and even if it is, it’s unlikely to be so non-linear that an assumption of linearity would have very bad consequences. If you are comfortable with this approximation, you can just use A in the formula rather than f(A).

Benefit = [V(P1) – V(P0)] × A

or

Benefit = V(P’) × W × A

That’s what we usually do in INFFER; in the absence of better information, and in the interests of simplicity, we use A rather than f(A) in the formula. But if somebody did have accurate numbers for f(A), we would use them instead.

Finally, some brief comments on predicting the level of compliance/adoption for a project. There has been a great deal of research into the factors that influence the uptake of new practices (e.g., Rogers 2003; Pannell et al., 2006, and see www.RuralPracticeChange.org), so we have a good understanding of this. There are many different influential factors, and the set of important factors varies substantially from case to case.

However, despite the wealth of research, it remains difficult to make quantitative predictions about compliance for a specific project. One generalisation I would make is that people who develop projects are usually too optimistic about the level and speed of adoption that is realistic to expect – sometimes far too optimistic.

Specific predictions require specific knowledge about the population of potential adopters, and the practice we would like them to adopt. As far as I’m aware, there is only one tool that has been developed to help make quantitative predictions about adoption. This is ADOPT, the Adoption and Diffusion Outcome Prediction Tool. ADOPT is designed for predicting adoption of new practices by farmers. It is not suitable for other contexts, although it might provide insights and understanding that help people to make the required judgments.

Further reading

Pannell, D.J., Marshall, G.R., Barr, N., Curtis, A., Vanclay, F. and Wilkinson, R. (2006). Understanding and promoting adoption of conservation practices by rural landholders. Australian Journal of Experimental Agriculture 46(11): 1407-1424. If you or your organisation subscribes to the Australian Journal of Experimental Agriculture you can access the paper at: http://www.publish.csiro.au/nid/72/paper/EA05037.htm (or non-subscribers can buy a copy on-line for A$25). Otherwise, email David.Pannell@uwa.edu.au to ask for a copy.

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, Wildlife Research 40(2), 126-133. Journal web page ♦ Pre-publication version at IDEAS

Rogers, E.M. (2003). Diffusion of innovations, 5th ed., Free Press, New York.