# Monthly Archives: June 2008

## 126 – Sensitivity analysis with economic models

I was helping someone recently with advice about a good basic strategy for doing sensitivity analysis with an economic model. Here is my advice.

Sensitivity analysis is the official term for “what if” analysis. It’s one of the things that makes economic models so useful. We know that any given model is not totally accurate, and that the variables that drive it will vary somewhat unpredictably over time, but using sensitivity analysis we can still get useful information and insight from the model. It helps you to get a feel for the stability of results, identify the factors that most affect results, and estimate the probability of a strategy performing up to a required level.

I quite often see studies where sensitivity analysis has been done quite poorly … What if this? What if that? … with insufficiently systematic thought about which variables to look at or how to vary them.

This Pannell Discussion provides a suggested procedure for conducting and reporting a useful sensitivity analysis. It is not perfect or universally applicable, and it may need to be adapted depending on the nature of your analysis. It’s meant to be a fairly simple standard approach that will work quite well in many cases. More detail is provided in Pannell (1997).

1. Decide which key results you are interested in from your model. Go back to the core question driving your analysis when thinking about this. Depending on the purpose of the analysis, it might be the economic returns (e.g. net profit or Net Present Value), the difference in economic returns between two strategies or two scenarios, or the optimal level of a particular practice within the overall strategy. For later parts of the sensitivity analysis, you may need to think about how to capture the key results in a few numbers, possibly even in a single number.

2. Identify the parameters of the model that may influence its results.

3. Specify low, best-bet and high values for each of these parameters. Low and high should correspond to something like a one-year-in-five or one-year-in-ten scenario. Make the range of values a bit wider than you initially think is necessary; we often don't sufficiently expect the unexpected.

4. Solve the model for the low and high values of each parameter, varying one parameter at a time and setting all others at their best-bet values. Choose, say, the six parameters that make the biggest difference to the key model result(s). Key model results might be Net Present Value (NPV) and Benefit:Cost Ratio (BCR), or the impact on NPV of a change in management.
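
This one-at-a-time screening step can be sketched in a few lines of code. The model function, parameter names and all numbers below are hypothetical placeholders, standing in for whatever economic model you are actually using:

```python
# One-at-a-time screening: vary each parameter across its low and high
# values while holding all the others at their best-bet values.

def npv_model(params):
    # Hypothetical stand-in for a real economic model: NPV from a sale
    # price ($/t), a yield (t/ha) and an input cost ($/ha).
    return params["price"] * params["yield"] - params["cost"]

# (low, best-bet, high) values for each parameter (illustrative numbers)
ranges = {
    "price": (150.0, 200.0, 260.0),
    "yield": (1.5, 2.5, 3.5),
    "cost":  (250.0, 300.0, 380.0),
}
best_bet = {name: values[1] for name, values in ranges.items()}

# For each parameter, record the swing in the key result between its
# high and low values; the largest swings identify the handful of
# parameters worth carrying into the probabilistic analysis.
swings = {}
for name, (low, _, high) in ranges.items():
    results = []
    for value in (low, high):
        scenario = dict(best_bet, **{name: value})
        results.append(npv_model(scenario))
    swings[name] = results[1] - results[0]

for name, swing in sorted(swings.items(), key=lambda s: -abs(s[1])):
    print(f"{name}: NPV swing = {swing:+.0f}")
```

In a spreadsheet model the same screening is done by overwriting one input cell at a time; the logic is identical.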

5. For those six parameters, specify probabilities for the low, best-bet and high values. These probabilities must add to 1. Depending on how you specify the low and high parameter values, the probabilities could perhaps be [0.1, 0.8, 0.1], [0.2, 0.6, 0.2], or [0.25, 0.5, 0.25]. The parameter values for the first set of probabilities would be more widely spread than for the last set.

6. Generate model results for every combination of the six parameters – i.e. 3^6 = 729 sets of model results (or if that’s too many to handle, do it for five parameters: 3^5 = 243). Store them in a big table where the columns represent the values of each parameter, the key model results for that scenario (e.g. NPV and BCR), and the joint probability of that result. Usually it is reasonable to assume that the distributions of the parameters are independent, so that the joint probabilities are just the products of the relevant individual probabilities.
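
As a sketch of how the big table might be built (the model, parameter names and probabilities below are hypothetical placeholders):

```python
from itertools import product

def npv_model(price, yld, cost):
    # Hypothetical stand-in for a real economic model
    return price * yld - cost

# (value, probability) pairs for the low, best-bet and high levels of
# each chosen parameter; with three levels each, n parameters give
# 3**n combinations.
levels = {
    "price": [(150.0, 0.2), (200.0, 0.6), (260.0, 0.2)],
    "yield": [(1.5, 0.2), (2.5, 0.6), (3.5, 0.2)],
    "cost":  [(250.0, 0.2), (300.0, 0.6), (380.0, 0.2)],
}

# One row per combination: the parameter values, the key model result,
# and the joint probability. Independence of the parameter
# distributions is assumed, so joint probabilities simply multiply.
table = []
for (price, p1), (yld, p2), (cost, p3) in product(*levels.values()):
    table.append({"price": price, "yield": yld, "cost": cost,
                  "NPV": npv_model(price, yld, cost),
                  "prob": p1 * p2 * p3})

print(len(table))                                  # 27 rows for 3 parameters
print(round(sum(row["prob"] for row in table), 10))  # probabilities sum to 1
```

Checking that the joint probabilities sum to one is a cheap way to catch errors before the later steps build on this table.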

7. Create a series of small tables, each showing a selection of results for, say, three parameters (ignoring probabilities for now). These tables just show, in a simple way, how key results vary. Parameters not depicted in a table would be set at their best-bet values. Two or three of these tables could be selected for inclusion in the report of the analysis. For example, one table might show the key result for all combinations of low, best-bet and high values of Sale price, Yield and Input cost.

8. For each value of each of the six parameters, take the weighted average of the results over all combinations of the other parameters, as follows. Suppose one of the six parameters is “sale price”. There are 243 results that include a low sale price. Take the weighted average of those 243 results using their conditional probabilities (i.e. conditional on a low sale price – to get the conditional probabilities, divide the individual joint probabilities of the 243 results by the probability of a low sale price). Repeat this process for the best-bet and high values of sale price. Plot the three results on a graph, with the X axis representing the low, best-bet and high values of the parameter, labelled “Low”, “Best-bet” and “High” rather than with numerical values. Repeat that process for all six parameters, plotting them all on the same graph to create a so-called spider diagram. This indicates which parameters have the biggest influence on results. Do this in a separate graph for each of the key outputs (e.g. NPV and BCR). This spider diagram is better than the one you would get by graphing the one-at-a-time results of step 4, because it allows for the possibility of strong interactions between parameters. Where such interactions exist, you cannot judge sensitivity by varying one parameter at a time, but looking at all possible combinations gives you a good understanding.
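
The conditional weighted averaging behind the spider diagram can be sketched as follows; the model, parameter names and numbers are again hypothetical placeholders (a three-parameter table is used here to keep the example small):

```python
from itertools import product

# Rebuild a small "big table" with a hypothetical model and numbers.
levels = {
    "price": [(150.0, 0.2), (200.0, 0.6), (260.0, 0.2)],
    "yield": [(1.5, 0.2), (2.5, 0.6), (3.5, 0.2)],
    "cost":  [(250.0, 0.2), (300.0, 0.6), (380.0, 0.2)],
}
table = [{"price": price, "yield": yld, "cost": cost,
          "NPV": price * yld - cost, "prob": p1 * p2 * p3}
         for (price, p1), (yld, p2), (cost, p3) in product(*levels.values())]

def conditional_means(param):
    """Weighted-average NPV at each (low, best-bet, high) level of param."""
    means = []
    for value, _ in levels[param]:
        rows = [r for r in table if r[param] == value]
        p_level = sum(r["prob"] for r in rows)          # P(param == value)
        # Dividing each joint probability by p_level gives conditional
        # probabilities, so this is the conditional expected NPV.
        means.append(sum(r["NPV"] * r["prob"] for r in rows) / p_level)
    return means

# Plotting one line per parameter against shared Low/Best-bet/High
# positions gives the spider diagram; here we just print the points.
for name in levels:
    print(name, [round(m, 1) for m in conditional_means(name)])
```

The steeper the line for a parameter, the more influence it has on the key result.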

9. Create a table that shows the difference between the weighted-average high and low results for each parameter (from step 8) as a sort of sensitivity index. Rank the parameters by their sensitivity index values. Repeat for each key model output (e.g. BCR and NPV).
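
The ranking step is then just differencing and sorting the weighted averages; the (low, best-bet, high) averages below are purely illustrative numbers:

```python
# Sensitivity index: the spread between the weighted-average results at
# the high and low values of each parameter. The (low, best-bet, high)
# averages below are hypothetical illustrative numbers.
spider_means = {
    "price": (69.0, 194.0, 344.0),
    "yield": (-3.0, 199.0, 401.0),
    "cost":  (255.0, 205.0, 125.0),
}

# High-minus-low spread for each parameter, ranked by absolute size
index = {name: means[2] - means[0] for name, means in spider_means.items()}
ranked = sorted(index.items(), key=lambda item: abs(item[1]), reverse=True)
for name, value in ranked:
    print(f"{name}: sensitivity index = {value:+.0f}")
```

Keeping the sign of the index (rather than only its magnitude) also records whether each parameter pushes the result up or down.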

10. Finally, generate a cumulative probability distribution function for each key output. This allows the user to see the probability of the result exceeding a specified threshold level (e.g. the probability that the BCR will be at least 2). To create this graph for, say, BCR: from the big table from step 6, take all the BCR and probability values, sort them by BCR from lowest to highest, and then calculate the cumulative probability. Then convert the formulas to numerical values (e.g. paste as values in a spreadsheet) and store them. Plot BCR on the X axis and cumulative probability on the Y axis.
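
The sort-and-accumulate logic can be sketched as below; the outcome and probability values are hypothetical placeholders for the contents of the big table:

```python
# Cumulative probability distribution of a key output: sort the
# (result, probability) pairs by result, then accumulate probabilities.
outcomes = [(190.0, 0.30), (-80.0, 0.04), (120.0, 0.24),
            (340.0, 0.12), (10.0, 0.12), (260.0, 0.18)]

outcomes.sort(key=lambda pair: pair[0])        # lowest result first
cumulative = []
running = 0.0
for result, prob in outcomes:
    running += prob
    cumulative.append((result, running))       # (x, P(output <= x))

# Probability that the output meets a threshold, e.g. NPV of at least 100
threshold = 100.0
p_meet = sum(prob for result, prob in outcomes if result >= threshold)
print(f"P(NPV >= {threshold:.0f}) = {p_meet:.2f}")
```

Plotting the `cumulative` pairs with the result on the X axis and the running probability on the Y axis gives the distribution function described above.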

### Reporting the analysis

(a) In the methods section, present a table showing the low, best-bet and high parameter values, and a table showing their probabilities (or if they are the same for each parameter, just describe the probabilities in the text of the methods section).

(b) In the results section, start by presenting results for the base-case model, including only best-bet values of parameters. Describe and discuss this in some detail.

(c) Maybe present results for one or two other scenarios in moderate detail. This can be useful if there are what-if scenarios that you want to look at that can’t easily be represented by simple parameter changes.

(d) Present the two or three tables from step 7. Describe and discuss any interactions that are apparent.

(e) Present the spider diagram(s) from step 8. Discuss which variables have the biggest influence and the nature of that influence (positive, negative, linear, non-linear).

(f) Maybe present the table from step 9 (although you may think that the spider diagram is sufficient).

(g) Present the cumulative probability function diagram(s) from step 10. Describe and discuss the probabilities of outputs achieving relevant threshold levels, such as breaking even.

(h) If there is space, include the big table from step 6 in an appendix.

This provides a good standard set of sensitivity analysis results that should give you all you practically need in many cases.

David Pannell, The University of Western Australia

Pannell, D.J. (1997). Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16: 139-152.

## 125 – Adoption of conservation practices – paper tops the charts

When I was young I dreamed of topping the charts, like The Beatles. Now I’ve finally achieved it, though not quite in the way that I intended back then.

Last month one of my papers reached the top of the chart published by the Australian Journal of Experimental Agriculture, showing the most downloaded papers since they began keeping records in 2000. The paper is called “Understanding and promoting adoption of conservation practices by rural landholders”, co-authored with Graham Marshall (University of New England), Neil Barr (Department of Primary Industries Victoria), Allan Curtis (Charles Sturt University), Frank Vanclay (University of Tasmania) and Roger Wilkinson (Department of Primary Industries Victoria).

There have been about 1300 papers published by that journal since 2000, and older papers are included on the web site too, so there is plenty of competition for top spot. We moved steadily up the chart after publication in October 2006 and reached number 2 after about 10 months, but then it took another 9 months to overtake the top paper.

I’m really pleased that this paper has had some attention. For one thing, it is a great example of cross-disciplinary collaboration. It was an enjoyable challenge to produce a paper that successfully combines the disciplines of the authors (economics, rural sociology and social psychology). We each had our own perspectives, of course, but we actually agreed about a lot – more than one might have expected given some of the cross-disciplinary sniping that one can find in the literature.

Another reason I am pleased about the attention is that I think the paper has some really important messages. Here are a few snippets:

“Depending on their personal and family circumstances, the issues about which landholders are most concerned at a particular time may not relate to conservation, or any aspect of land management.”

“People who adopt one innovation early are not necessarily early adopters of all innovations.”

“… the relative advantage that drives adoption may not necessarily relate to the environment. Indeed, environmental benefits can often be most readily achieved by developing conservation practices that provide a commercial advantage to farmers.”

“… adoption of conservation practices by landholders is not solely a biophysical issue, it is also an economic, social and psychological issue, so biophysical researchers can benefit from working closely with economists, sociologists and psychologists. Social scientists should be involved in projects from an early stage, including in problem definition and project design, so that their advice can influence the direction of the research, rather than being limited to analyzing the results (e.g. attempting to explain landholders’ responses or lack of response).”

“If a practice is not adopted in the long term, it is because landholders are not convinced that it advances their goals sufficiently to outweigh its costs. A consequence of this is that we should avoid putting the main burden for promoting adoption onto communication, education and persuasion activities. This strategy is unfortunately common, but is destined to fail if the innovations being promoted are not sufficiently attractive to the target audience. The innovations need to be ‘adoptable’. If they are not, then communication and education activities will simply confirm a landholder’s decision not to adopt, as well as degrade the social standing of the field agents of the organisation. Extension providers should invest time and resources in attempting to ascertain whether an innovation is adoptable before proceeding with extension to promote its uptake.”

I’ve suggested to my co-authors that we should do something to celebrate our number 1 hit. Perhaps we could further emulate The Beatles who had a big all-night party in Paris when “I Want to Hold Your Hand” became their first US number 1.

Like the Beatles, we might also go on tour (a very, very short one), running a national workshop in which we each present our slant on the adoption issue. It’s an idea we’re contemplating, anyway.

It would be appropriate at this stage to acknowledge the role of the CRC for Plant-Based Management of Dryland Salinity in fostering the paper. Back in 2000, most of the eventual authorship team attended a meeting to discuss what adoption-related work we should do for the CRC. We agreed on the idea of doing this paper, and after a fairly long gestation period and a slight growth in the authorship team, we eventually delivered.

David Pannell, The University of Western Australia

Pannell, D.J., Marshall, G.R., Barr, N., Curtis, A., Vanclay, F. and Wilkinson, R. (2006). Understanding and promoting adoption of conservation practices by rural landholders. Australian Journal of Experimental Agriculture 46(11): 1407-1424.

If you or your organisation subscribes to the Australian Journal of Experimental Agriculture you can access the paper at: http://www.publish.csiro.au/nid/72/paper/EA05037.htm (or non-subscribers can buy a copy on-line for A$25). Otherwise, email David.Pannell@uwa.edu.au to ask for a copy.

## 124 – Linking natural resource research to the real world

Attempting to ensure that complex research into natural resource management or the environment is used by decision makers is very challenging. Here I outline the strategy we are using in current Australian research.

Recently I attended an interesting workshop in Adelaide, on the theme of “Integrated Landscape Science”. The focus was on the establishment of a new research centre on that topic, but in the course of the discussions we spent time talking about the process of linking research on natural resource management to the real world, particularly to policy makers and environmental managers. There was a good discussion about the process we have gone through in the SIF3 project (http://dpannell.fnas.uwa.edu.au/sif3.htm). Our facilitator captured the process we had used, which I’ve elaborated on below. There are obviously different approaches or different emphases that one could apply, but this is an approach that has worked well for us.

### Problem definition phase

This requires good interaction between decision makers and researchers. The researchers need to take a problem-solving approach to the issue, rather than concerning themselves with capturing its full complexity.

### Complexity phase

Researchers (bringing in input from various stakeholders) set about understanding and modelling the system, with an emphasis on management decisions. This process needs to bring together physical, biological, economic and social aspects of the problem, perhaps in a highly integrated computer model, but not necessarily. In the case of SIF3, we made use of results from a large variety of models and analyses, but didn’t try to represent the entire system in a single complex integrated model. We got to that degree of integration in the simplicity phase.

### Simplicity phase

From the results of the complexity phase, develop a decision framework based on robust rules of thumb about management. How does the best management response vary in different bio-physical or socio-economic circumstances? Which are the variables that have the greatest influence on management? The focus is on developing messages that are scientifically valid, but as simple as possible. This is a highly creative process, and is probably more difficult than the complexity phase. There isn’t a well-established way to go about this, although sensitivity analysis is certainly an important element. During this phase, the researchers need a willingness to challenge conventional wisdom. Based on our experience with SIF3, and the general experience with computerised tools, it is probably better if the decision tool developed in this phase is not computer-based. SIF3 is simply a documented decision tree. It would be “paper-based” if we printed it out.

Application of the tool proceeds through three phases: pilot, help-desk and consultancy.

### Pilot phase

With SIF3, we underwent a phase of applying the initial version of the tool with two partner organisations. This helped to test the applicability, usability and understandability of the tool. It also identified gaps that needed to be filled and concepts that needed to be explained more clearly. It built trust and confidence with our partners, and helped us develop a process within which the tool could effectively be applied. It allowed us to develop capacity among a group of users, which has paid off in later phases through us employing two of them to work with other users.

It is important that the researchers be heavily involved in the piloting of their decision framework. Leaving its application to users at too early a stage carries a high risk of it being misinterpreted or misapplied, so that it is not able to be evaluated fairly.

### Help-desk phase

Once the researchers are confident about the usefulness and validity of the decision tool, the next phase is somewhat more hands off: supporting users to apply the tool themselves. We are currently offering a help-desk service and detailed advice about the process to five environmental management organisations who are applying SIF3 or INFFER (a more general version that allows threats other than salinity to be assessed). We judged that it was important to include this phase, mid-way between us doing it for them, and them doing it entirely themselves. Through this phase we are learning what level and what type of support is needed over which issues, and we are developing training materials for a broader group of users. Given the complexity and difficulty of management decisions for the environment, it is not realistic to expect people to find it easy to understand and apply the framework. Indeed, there are likely to be aspects of the framework that conflict with their preconceptions and their current way of thinking about the problem.

### Consultancy phase

Our aim is to reach a stage where the framework is very widely used, and support is provided as a commercial service (not by us), or else is provided by government. We expect that the training materials we are currently developing will be an important part of this.

### Other requirements

Most research in this area concentrates on the complexity phase. If the research is directly commissioned by decision makers, this is perhaps sufficient. Otherwise, a considerable additional effort by the researchers is probably needed to ensure that the research can have its potential impact. Few researchers actually attempt anything like the process outlined above. The process requires: extra resources beyond those for the research phase, patience, persistence, excellent communication and good partnerships with users. Given this, perhaps it is not surprising that few researchers take such an approach. On the other hand, it certainly is satisfying to see our work having an impact, and my experience with this work has convinced me that research impacts in this field don’t come easily.

David Pannell, The University of Western Australia