*Episode 11 in this series on principles to follow when ranking environmental projects. It covers a few issues related to the scoring of variables – taking information about a variable and converting it into a number that can be put into the equation for benefits.*

Let’s return to the benefits part of the equation for ranking environmental projects (the simple version with only one benefit).

*Expected benefit* = [*V*(*P*_{1}) – *V*(*P*_{0})] × *A* × (1 – *R*) / (1+*r*)^{L}

or

*Expected benefit* = *V*(*P*’) × *W* × *A* × (1 – *R*) / (1+*r*)^{L}

where *V* is the value of the asset (in whatever system of measurement makes sense), *W* is the effectiveness of works, *A* is adoption, *R* is risk and *L* is the time lag in years. (For the purpose of this discussion, ignore *r*. It doesn’t vary between projects. See PD242.)
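As a concrete illustration, the second form of the equation can be sketched as a small function. The variable names and the example numbers below are illustrative assumptions, not figures from the article:

```python
# Sketch of the benefits equation: V' * W * A * (1 - R) / (1 + r)^L.
# All names and example values are illustrative.
def expected_benefit(V_prime, W, A, R, L, r=0.05):
    """Expected benefit of a project.

    V_prime : value of the asset (in whatever unit makes sense)
    W       : effectiveness of works, between 0 and 1
    A       : adoption, between 0 and 1
    R       : risk, between 0 and 1
    L       : time lag in years
    r       : discount rate (assumed constant across projects)
    """
    return V_prime * W * A * (1 - R) / (1 + r) ** L

# e.g. a $2m asset, 80% effective works, 50% adoption, 10% risk, 5-year lag
benefit = expected_benefit(2_000_000, 0.8, 0.5, 0.1, 5)
```

Note how the multiplicative structure means that if any of *W*, *A* or (1 – *R*) is zero, the whole benefit is zero.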

There are two distinct groups of variables in these equations. There are two variables that can take any value greater than zero: *V* and *L*. And there are three that can take any value between zero and one: *W*, *A* and *R*.

All of them are “continuous” variables – they change smoothly and can take any value within their feasible ranges. If you had the information, you would plug their exact values into the equation.

However, you never have exact information. A common approach in systems that collect information for ranking projects is to present a discrete number of options for the value of the variable, and ask participants to select the value that seems to be nearest to the correct value. For example, here is a question of this type about technical risk.

*What is the probability that the benefits generated by the project would fall well short of expectations due to technical factors? (R_{t})*

- 0-5% Very low risk of project failure due to poor technical feasibility. (*R*_{t} = 0.03)
- 6-10% (*R*_{t} = 0.08)
- 11-15% (*R*_{t} = 0.13)
- 16-20% (*R*_{t} = 0.18)
- 21-100% High risk of long-term project failure due to poor technical feasibility. (*R*_{t} = 0.60)

I don’t have a problem with this approach, as long as the response options are chosen thoughtfully. The quality of information available is usually not so high that this sort of approximation causes any significant reduction in the quality of the resulting rankings.

In the above example, I haven’t spaced out the response options for *R*_{t} equally between zero and one, because I judged that most projects have values between zero and 20%. I have indicated the mid-point of the range for each response option, and that is what I would plug into *R*_{t} in the benefits equation.

Sometimes people convert the responses from this type of question into a number from an *ad hoc* scoring system, rather than using a scale that is more natural for the variable. For example, in the above case they might assign a score of 1 to the first response, 2 to the second response, and so on (instead of probabilities of 0.03, 0.08 and so on). This can potentially be OK, but there are a few traps to avoid.
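The midpoint encoding of the technical-risk question can be expressed as a simple lookup, mapping each response option to the midpoint of its probability range (a minimal sketch; the function name is my own):

```python
# Each response option maps to the midpoint of its probability range,
# as given in the question above.
RISK_MIDPOINTS = {
    "0-5%":    0.03,
    "6-10%":   0.08,
    "11-15%":  0.13,
    "16-20%":  0.18,
    "21-100%": 0.60,
}

def risk_from_response(option):
    """Return the value of R_t to plug into the benefits equation."""
    return RISK_MIDPOINTS[option]

print(risk_from_response("6-10%"))  # prints 0.08
```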

Firstly, in a case like the one above where the response options are not equally spaced, the scores assigned should not be equally spaced either. They should be spaced out consistent with the values in the response options.

Secondly, if one of the response options represents zero, the score assigned to that option should be zero. For example, if a response option for adoption is zero adoption, it should get a score of zero so that when it is multiplied into the equation, the overall score is zero. (Obviously, if there is zero adoption of the actions being promoted by the project, there would be no environmental benefits attributable to the project.) In that case, using scores of 1, 2, 3, 4 or 5 is no good. If you must use scores instead of probabilities, use 0, 1, 2, 3 or 4 (assuming that the first response option is zero adoption).
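The zero-anchoring point can be sketched as follows. The adoption labels and scores here are hypothetical, chosen only to show why the "zero adoption" option must score zero:

```python
# Hypothetical ad hoc scores for an adoption question whose first
# option is "zero adoption". Anchoring that option at 0 (not 1)
# ensures that multiplying it through the equation gives zero benefit.
ADOPTION_SCORES = {"none": 0, "low": 1, "moderate": 2, "high": 3, "full": 4}

def benefit_score(value_score, adoption_label):
    """Multiply an asset-value score by the adoption score."""
    return value_score * ADOPTION_SCORES[adoption_label]

print(benefit_score(5, "none"))  # zero adoption -> prints 0
```

With scores of 1 to 5 instead, the "none" option would incorrectly contribute a positive benefit.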

Thirdly, even if you are using an *ad hoc* scoring system, you still have to multiply the variables, as shown in the above equations. Weighting them and adding them up, as done in many systems, produces much inferior rankings.
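To see why multiplying matters, consider two hypothetical projects that are identical except that one has near-zero adoption (all numbers below are invented for illustration):

```python
# Two hypothetical projects: identical except for adoption (A).
projects = {
    "Project 1": dict(V=100, W=0.9, A=0.9, R=0.1),
    "Project 2": dict(V=100, W=0.9, A=0.05, R=0.1),  # adoption near zero
}

def multiplicative(p):
    """Benefit with variables multiplied, as in the equations above."""
    return p["V"] * p["W"] * p["A"] * (1 - p["R"])

def weighted_sum(p, weights=(0.4, 0.2, 0.2, 0.2)):
    """A weighted-and-added score, as used in many ranking systems.

    Weights are arbitrary; V is rescaled to 0-1 for comparability.
    """
    wv, ww, wa, wr = weights
    return wv * p["V"] / 100 + ww * p["W"] + wa * p["A"] + wr * (1 - p["R"])

for name, p in projects.items():
    print(name, round(multiplicative(p), 2), round(weighted_sum(p), 2))
```

Multiplication correctly shows Project 2's benefits collapsing towards zero (about 4 versus 73), whereas the weighted sum still awards it over 80% of Project 1's score, badly overstating its merit.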

If benefits aren’t being measured in dollars, then a scoring system can work, as long as it satisfies the above requirements. However, my advice is to use the correct ranges when scoring each variable (i.e. between zero and one for *W*, *A* and *R*) rather than some *ad hoc* system. Why not do that? It is no more difficult, it makes it easy to meet all the above requirements, and it makes the meaning of each variable clearer.

If the benefits are being measured in dollars, then you don’t have an option. You have to assign values that correspond to the meanings of the variables rather than using an *ad hoc* scoring system. Otherwise you lose the benefit of being able to assess whether the benefits exceed the costs.

### Further reading

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects, *Wildlife Research* (forthcoming).