Category Archives: Research

326 – 60-second videos about our research

My School at the University of Western Australia is having a competition amongst staff and students to produce a 60-second video that says something interesting and engaging about our research.

I’ve put in two entries. The first one, about farmer adaptation to climate change, is the fun one.

The second one, about water pollution, is more traditional, but I hope it’s still interesting.

I’m also part of a third, really creative entry that was put together by Maksym Polyakov.

Wish us luck. The winner will be announced in December.

Further reading

Thamo, T., Addai, D., Kragt, M.E., Kingwell, R., Pannell, D.J., and Robertson, M.J. (2019). Climate change reduces the mitigation obtainable from sequestration in an Australian farming system, Australian Journal of Agricultural and Resource Economics (forthcoming). Journal web site

Thamo, T., Addai, D., Pannell, D.J., Robertson, M.J., Thomas, D.T. and Young, J.M. (2017). Climate change impacts and farm-level adaptation: Economic analysis of a mixed cropping–livestock system, Agricultural Systems 150, 99-108. Journal web page ◊ IDEAS page

Pannell, D.J. (2017). Economic perspectives on nitrogen in farming systems: managing trade-offs between production, risk and the environment, Soil Research 55, 473-478. Journal web site

Rogers, A.A., Burton, M.P., Cleland, J.A., Rolfe, J., Meeuwig, J.J. and Pannell, D.J. (2017). Expert judgements and public values: preference heterogeneity for protecting ecology in the Swan River, Western Australia, Working Papers 254025, University of Western Australia, School of Agricultural and Resource Economics. IDEAS page

318 – Measuring impacts from environmental research

There have been some studies of the relationship between research and environmental policy, but studies capturing the impact of research on environmental management, environmental policy and environmental outcomes are relatively rare. Here is one attempt.

Environmental research may generate benefits in a variety of ways including by providing: information or technology that allows improved management of an environmental issue; information that fosters improved decision-making about priorities for environmental management or policy; or information about an environmental issue that is of intrinsic interest to the community. There are several reasons why it can be worth measuring the impacts of environmental research, including making a case for the funding of environmental research, informing decisions about research priorities, and helping researchers to make decisions about their research that increase its ultimate benefits.

Earlier this year we released the results of an assessment of the engagement and impacts of a particular environmental research centre, the ARC Centre of Excellence for Environmental Decisions (CEED). The assessment includes impacts on policy, management and the community, as well as measures of academic performance, including publications, citations and collaborations. Data were collected in several ways: a survey of all project leaders for the Centre’s 87 projects, the preparation of detailed case studies for selected projects, and collection of statistics on publications, citations and collaborations.

The approach taken was informed by a recent paper of ours called “Policy-oriented environmental research: What is it worth?” (Pannell et al. 2018). The full report is available here.

The Centre’s engagement with end users and stakeholders was strong in Australia and around the world. Researchers reported many examples of engagement with research users involved in policy and management. Results were highly heterogeneous and somewhat skewed, with the majority of observed impact occurring in a minority of the projects.

For almost half of the projects, the potential future increase in impact was assessed as being moderate or high. To some extent, this reflects the time lags involved in research attempting to influence policy and management, but the information was also used to identify projects for which additional engagement effort could be beneficial. The correlation between impact and academic performance was positive but low.

To obtain richer detail about impacts, detailed case studies were prepared for nine research projects. The projects were selected to be diverse, rather than representative. These case studies highlight the unique circumstances faced by each project in endeavouring to have an impact. Each project must be framed within a strong understanding of its domain, and its researchers deeply engaged with research users, if impact is to occur. Substantial benefits for policy or management are apparent in a number of the case studies.

A factor contributing greatly to the impact of CEED was the research communication magazine Decision Point. This publication was widely accepted as a valued communication resource for academic findings in the field of environmental decision sciences, and was rated by people in government and academic institutions as relevant and informative.

Some valuable lessons and implications of the impact analysis are identified in the report. Research impact does not depend only on good relationships, engagement and communication, but also importantly on what research is done. Therefore, embedding a research culture that values impact and considers how it may be achieved before the selection of research projects is potentially important. The role of the Centre leadership team in this is critical. Embedding impact into the culture of a centre likely occurs more effectively if expertise in project evaluation is available internally, either through training or appointments.

A challenge in conducting this analysis was obtaining information related to engagement and impact. There may be merit in institutionalising the collection of impact-related data from early in the life of a new research centre.

Interestingly, we found little relationship between (a) impact from translation and engagement and (b) measures of academic merit. It should not be presumed that the most impactful projects will be those of greatest academic performance.

At the time of the assessment, CEED had generated 848 publications, which had been cited 14,996 times according to the Web of Science. CEED publications are disproportionately among the most cited papers in their disciplines: more than a quarter are in the top 10% of the literature, based on their citations, and 39 of them (about one in 22) have citations placing them in the top 1% of their academic fields over the past 10 years.

There are often long lags between the start of research and the delivery of its impact, decades in many cases. Therefore, there is a need to allow the longest possible time lag when assessing research impact. On shorter timescales, it may be possible to detect engagement, but not the full impact that will eventually result.

Further reading

Pannell, D.J., Alston, J.M., Jeffrey, S., Buckley, Y.M., Vesk, P., Rhodes, J.R., McDonald-Madden, E., Nally, S., Goucher, G. and Thamo, T. (2018). Policy-oriented environmental research: What is it worth? Environmental Science and Policy 86, 64-71. Journal web page

Thamo, T., Harold, T., Polyakov, M. and Pannell, D. (2018). Assessment of Engagement and Impact for the ARC Centre of Excellence for Environmental Decisions, CEED, University of Queensland. http://ceed.edu.au/resources/impact-report.html

279 – Garbage in, garbage out?

As the developer of various decision tools, I’ve lost track of the number of times I’ve heard somebody say, in a grave, authoritative tone, “a model is only as good as the information you feed into it”. Or, more pithily, “garbage in, garbage out”. It’s a truism, of course, but the implications for decision makers may not be quite what you think.

The value of the information generated by a decision tool depends, of course, on the quality of input data used to drive the tool. Usually, the outputs from a decision tool are less valuable when there is poor-quality information about the inputs than when there is good information.

But what should we conclude from that? Does it mean, for example, that if you have poor-quality input information you may just as well make decisions in a very simple ad hoc way and not worry about weighing up the decision options in a systematic way? (In other words, is it not worth using a decision tool?) And does it mean that it is more important to put effort into collecting better input data than into improving the decision process?

No, these things do not follow from having poor input data. Here’s why.

Imagine a manager looking at 100 projects and trying to choose which 10 projects to give money to. Let’s compare a situation where input data quality is excellent with one where it is poor.

From simulating hundreds of thousands of decisions like this, I’ve found that systematic decision processes consistent with best-practice principles for decision making (see Pannell 2013) do a reasonable job of selecting the best projects even when random errors are introduced to the input data. On the other hand, simple ad hoc decision processes that ignore those principles often result in very poor decisions, whether the input data is good, bad or indifferent.

Not every decision made using a sound decision process is correct, but overall, on average, they are markedly better than quick-and-dirty decisions. So “garbage in, garbage out” is misleading. If you look across a large number of decisions (which is what you should do), then a better description for a good decision tool could be “garbage in, not-too-bad out”. On the other hand, the most apt description for a poor decision process could be “treasure or garbage in, garbage out”.
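To make that concrete, here is a minimal sketch (in Python) of this kind of simulation. It is not the code behind the results described above; purely for illustration, it assumes the “systematic” process ranks projects by an estimated benefit-cost ratio, the ad hoc process ranks on estimated benefits alone and ignores costs, and input-data errors are multiplicative and lognormal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_projects, n_funded, n_trials = 100, 10, 2000

share_good, share_adhoc = [], []
for _ in range(n_trials):
    # True (unobservable) project benefits and costs
    benefit = rng.lognormal(mean=1.0, sigma=0.8, size=n_projects)
    cost = rng.lognormal(mean=0.5, sigma=0.5, size=n_projects)
    true_bcr = benefit / cost

    # The analyst only sees noisy estimates (multiplicative lognormal error)
    est_benefit = benefit * rng.lognormal(0.0, 0.5, n_projects)
    est_cost = cost * rng.lognormal(0.0, 0.5, n_projects)

    ideal = np.sort(true_bcr)[-n_funded:].sum()            # best achievable selection
    good = np.argsort(est_benefit / est_cost)[-n_funded:]  # systematic: estimated BCR
    adhoc = np.argsort(est_benefit)[-n_funded:]            # ad hoc: ignores costs

    share_good.append(true_bcr[good].sum() / ideal)
    share_adhoc.append(true_bcr[adhoc].sum() / ideal)

print(f"Systematic process, noisy data: {np.mean(share_good):.0%} of achievable benefit")
print(f"Ad hoc process, same data:      {np.mean(share_adhoc):.0%}")
```

In runs of this sketch, the systematic ranking typically captures a large share of the achievable benefit despite the noisy inputs, while the ad hoc ranking falls well short even though it uses exactly the same data.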

An interesting question is, if you are using a good process, why don’t random errors in the input data make a bigger difference to the outcomes of the decisions? Here are some reasons.

Firstly, poorer quality input data only matters if it results in different decisions being made, such as a different set of 10 projects being selected. In practice, over a large number of decisions, the differences caused by input data uncertainty are not as large as you might expect. For example, in the project-selection problem, there are several reasons why data uncertainty may have only a modest impact on which projects are selected:

  • Uncertainty doesn’t mean that the input data for all projects is wildly inaccurate. Some are wildly inaccurate, but some, by chance, are only slightly inaccurate, and some are in between. The good projects with slightly inaccurate data still get selected.
  • Even if the data is moderately or highly inaccurate, it doesn’t necessarily mean that a good project will miss out on funding. Some good projects look worse than they should do as a result of the poor input data, but others are actually favoured by the data inaccuracies, so of course they still get selected. These data errors that reinforce the right decisions are not a problem.
  • Some projects are so outstanding that they still seem worth investing in even when the data used to analyse them is somewhat inaccurate.
  • When ranking projects, there are a number of different variables to consider (e.g. values, behaviour change, risks, etc.). There is likely to be uncertainty about all of these to some extent, but the errors won’t necessarily reinforce each other. In some cases, the estimate of one variable will be too high, while the estimate of another variable will be too low, such that the errors cancel out and the overall assessment of the project is about right.

So input data uncertainty means that some projects that should be selected miss out, but many good projects continue to be selected.

Even where there is a change in project selection, some of the projects that come in are only slightly less beneficial than the ones that go out. Not all, but some.

Putting all that together, inaccuracy in input data only changes the selection of projects for those projects that: happen to have the most highly inaccurate input data; are not favoured by the data inaccuracies; are not amongst the most outstanding projects anyway; and do not have multiple errors that cancel out. Further, the changes in project selection that do occur only matter for the subset of incoming projects that are much worse than the projects they displace, and many of the projects mistakenly selected due to poor input data are not all that much worse. So input data uncertainty is often not as serious a problem for decision making as you might think. As long as the numbers we use are more-or-less reasonable, the results of decision making can be pretty good.

To me, the most surprising outcome from my analysis of these issues was the answer to the second question: is it more important to put effort into collecting better input data than into improving the decision process?

As I noted earlier, the answer seems to be “no”. For the project choice problem I described earlier, the “no” is a very strong one. In fact, I found that if you start with a poor quality decision process, inconsistent with the principles I’ve outlined in Pannell (2013), there is almost no benefit to be gained by improving the quality of input data. I’m sure there are many scientists who would feel extremely uncomfortable with that result, but it does make intuitive sense when you think about it. If a decision process is so poor that its results are only slightly related to the best possible decisions, then of course better information won’t help much.
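The same kind of toy simulation can illustrate this second result. The hypothetical sketch below, using the same invented setup as before, varies the size of the input-data errors and reports how much each process gains from better data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_projects, n_funded = 100, 10

# Vary the size of the input-data errors and see which process benefits.
for sigma in (0.8, 0.4, 0.1):  # large, moderate, small estimation errors
    share_good, share_adhoc = [], []
    for _ in range(1000):
        benefit = rng.lognormal(1.0, 0.8, n_projects)   # true benefits
        cost = rng.lognormal(0.5, 0.5, n_projects)      # true costs
        true_bcr = benefit / cost
        est_benefit = benefit * rng.lognormal(0.0, sigma, n_projects)
        est_cost = cost * rng.lognormal(0.0, sigma, n_projects)
        ideal = np.sort(true_bcr)[-n_funded:].sum()     # best achievable
        good = np.argsort(est_benefit / est_cost)[-n_funded:]  # systematic
        adhoc = np.argsort(est_benefit)[-n_funded:]     # ad hoc: ignores costs
        share_good.append(true_bcr[good].sum() / ideal)
        share_adhoc.append(true_bcr[adhoc].sum() / ideal)
    print(f"error sigma={sigma}: systematic {np.mean(share_good):.0%}, "
          f"ad hoc {np.mean(share_adhoc):.0%}")
```

In this toy setup, shrinking the errors steadily lifts the systematic process towards the best achievable selection, but does much less for the ad hoc process, because its main error (ignoring costs) is structural rather than data-driven.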

Further reading

Pannell, D.J. and Gibson, F.L. (2014) Testing metrics to prioritise environmental projects, Australian Agricultural and Resource Economics Society Conference (58th), February 5-7, 2014, Port Macquarie, Australia. Full paper

Pannell, D.J. (2013). Ranking environmental projects, Working Paper 1312, School of Agricultural and Resource Economics, University of Western Australia. Full paper

273 – Behaviour change comes in pairs

Some key factors that drive adoption of new practices come in pairs: one aspect related to the performance of the new practice, and one aspect related to how much people care about that performance. Many models of adoption miss this, including famous ones.

Whatever work or hobbies we do, there are regularly new practices coming along that we are encouraged to adopt: new technologies (e.g. a new iPhone, an auto-steer crop harvester), or different behaviours (e.g. reducing our usage of energy or water, changing the allocation of land to different crops).

The agricultural examples above reflect that some of my research is on adoption of new practices by farmers, but the issue I’m talking about today is relevant in all spheres where people adopt new practices.

It is well recognised that people vary in the personal goals that drive their choices about whether to adopt new practices that are promoted to them. Amongst commercial farmers, for example, there are differences in the emphases they give to profit, risk and environmental outcomes.

Any attempt to understand or model adoption of new practices needs to recognise the potential importance of these different goals. Many studies do include variables representing these three goals, and sometimes others.

However, it is less often recognised that there are two aspects to each of these goals when looking at a new practice:

  1. The extent to which the new practice would deliver the outcome measured by that goal: more profit, less risk, or better environmental outcomes.
  2. How much the decision maker cares about those particular outcomes.

These two aspects are closely linked. They interact to determine how attractive a new practice is, but they are distinctly different. One is not a proxy for the other.

For example, suppose a farmer is considering two potential new practices for weed control. The farmer judges that new practice A is much riskier (less reliable) than new practice B.

How much will this affect the farmer’s decision making? That depends on the farmer’s attitude to risk. For a farmer who has a strong aversion to risk, practice B will be strongly favoured, at least from the risk perspective. (Other goals will probably come into play as well.) For a farmer who doesn’t care about risk one way or the other, the difference in riskiness between practices A and B is of no consequence. Some farmers (a minority) have been found to be risk-seeking, so they would prefer practice A.

The same sort of pattern occurs with other goals as well. The attractiveness of a new practice depends on how much difference it makes to profit and on how strongly the farmer is motivated by profit. Or how much it affects the environment and how strongly the farmer cares about the environment.
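A tiny numerical sketch can make the pairing concrete. This is not the ADOPT model; it simply scores each practice as a sum over goals of (how much the practice changes that outcome) times (how much the decision maker cares about it), with all names and numbers invented for illustration.

```python
def attractiveness(performance, weights):
    """Weighted sum of goal-level performance times importance weights."""
    return sum(performance[goal] * weights[goal] for goal in performance)

# Goal-level performance of each practice (+ = better, - = worse than current;
# for "risk", negative means riskier)
practice_a = {"profit": 0.8, "risk": -0.6, "environment": 0.1}  # profitable but risky
practice_b = {"profit": 0.5, "risk": 0.2, "environment": 0.1}   # modest but reliable

# How much each (hypothetical) farmer cares about each goal
risk_averse = {"profit": 1.0, "risk": 1.5, "environment": 0.3}
risk_neutral = {"profit": 1.0, "risk": 0.0, "environment": 0.3}

for label, weights in [("risk-averse", risk_averse), ("risk-neutral", risk_neutral)]:
    a = attractiveness(practice_a, weights)
    b = attractiveness(practice_b, weights)
    print(f"{label} farmer: A = {a:.2f}, B = {b:.2f}")
# risk-averse  farmer: A = -0.07, B = 0.83  -> chooses B
# risk-neutral farmer: A =  0.83, B = 0.53  -> chooses A
```

The two farmers see identical information about the practices; only their importance weights differ, yet the risk-averse farmer prefers B and the risk-neutral farmer prefers A. Leaving out either half of the pair would hide this difference.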

Amongst the thousands of research studies of farmer adoption of new practices, most represent only one goal-related variable where two are needed. For example, they include a measure of risk aversion, but ignore differences in the level of riskiness of the new practice amongst different adopters. Or they represent differences in the profitability of the new practice, but not differences in how much the adopters care about profit.

It doesn’t help that the issue is not recognised in common conceptual frameworks used by social scientists studying adoption behaviour, such as the Theory of Reasoned Action (Fishbein and Ajzen 1975) and the Theory of Planned Behaviour (Ajzen 1991).

It should be recognised in a sound economics framework (e.g. Abadi Ghadim and Pannell 1999 do so for risk), but it often isn’t included in the actual numerical model that is estimated.

The only framework I’ve seen that really captures this issue properly is our framework for ADOPT – the Adoption and Diffusion Outcome Prediction Tool. Hopefully this insight can diffuse to other researchers over time.

Further reading

Abadi Ghadim, A.K. and Pannell, D.J. (1999). A conceptual framework of adoption of an agricultural innovation, Agricultural Economics 21, 145-154. Journal web page ◊ IDEAS page

Ajzen, I. (1991). The theory of planned behavior, Organizational Behavior and Human Decision Processes 50, 179-211.

Fishbein, M. and Ajzen, I. (1975). Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research. Reading, MA: Addison-Wesley.

259 – Increasing environmental benefits

It is obvious that the budgets of our public environmental programs are small relative to the cost of fixing all of our environmental problems. If we want to achieve greater environmental benefits from our public investments, what, in broad terms, are the options?

I remember seeing a graph last year – I think it was from the Australian Bureau of Statistics – showing the level of concern felt by the Australian community about environmental issues. It looked to have peaked a few years ago, and was pretty flat, or slightly declining. In that context, the prospects for a big increase in environmental spending over time don’t look good, particularly given the general tightness of government budgets. So I was wondering, if we wanted to double the environmental values protected or enhanced by our public programs, what are the options? I was able to identify several. I’ll list them here, and briefly comment on their potential effectiveness, cost and political feasibility.

  1. Double the budget. Effectiveness: high (in the sense that we could actually double the environmental benefits generated). Cost: high. Politics: very unlikely in the foreseeable future. It wouldn’t be my first priority, anyway. Increasing the budget would be more effective if we first delivered some of the strategies below.
  2. Improve the prioritisation of environmental investments. Improve the usage of evidence, the quality of decision metrics (Pannell 2013), and the quality of evaluation of proposals. Effectiveness: high (because most programs currently have major deficiencies in these areas). Cost: low, especially relative to doubling the budget. Politics: Implies a higher degree of selectivity, which some stakeholders dislike. Probably means funding fewer, larger projects. Achievable for part of the budget but the politics probably require a proportion to be spent along traditional lines (relatively unprioritised).
  3. Encourage more voluntary pro-environmental action through education, persuasion, peer pressure and the like. Effectiveness: commonly low, moderate in some cases. Cost: moderate. Politics: favourable.
  4. Increase the share of environmental funds invested in research and development to create pro-environmental technologies (Pannell 2009). Note that this is about creation of new technologies, rather than information. Examples could include more effective baits for feral cats, new types of trees that are commercially viable in areas threatened by dryland salinity, or new renewable energy technologies. Effectiveness: case-specific – high in some cases, low in others. Cost: moderate. Politics: requires a degree of patience which can be politically problematic. Also may conflict with community desire to spend resources directly on on-ground works (even if the existing technologies are not suitable). There tends to be a preference for research funding to come from the research budget rather than the environment budget, although this likely means that it is not as well targeted to solve the most important environmental problems.
  5. Improve the design of environmental projects and programs. Improve evidence basis for identifying required actions. Improve selection of delivery mechanisms. Improve the logical consistency of projects. Effectiveness: high (because a lot of existing projects are not well founded on evidence, and/or don’t use appropriate delivery mechanisms, and/or are lacking in internal logical consistency). Cost: low. Politics: Implies changes in the way that projects are developed, with longer lead times, which may not be popular. There may be a perception of high transaction costs from this strategy (although they would be low relative to the benefits) (Pannell et al. 2013).
  6. Increase the emphasis on learning and using better information. Strategies include greater use of detailed feasibility studies, improved outcome-oriented monitoring, and active adaptive management. Effectiveness: moderate to high. Would feed into, and further improve, options 2 and 5. Cost: low. Politics: main barrier is political impatience, and a view that decisions based on judgement are sufficient even in the absence of good information. Often that view is supported/excused by an argument that action cannot and should not wait (which is a reasonable argument in certain cases, but usually is not).
  7. Reform inefficient and environmentally damaging policies and programs. Examples include subsidies for fossil fuels, badly designed policies supporting biofuels in Europe and in the USA, and agricultural subsidies. This strategy is quite unlike the other strategies discussed here, but it has enormous potential to generate environmental benefits in countries that have these types of policies. Successful reform would be not just costless, but cost-saving. Effectiveness: very high in particular cases. Cost: negative. Politics: difficult to very difficult. People with a vested interest in existing policies fight hard to retain them. Environmental agencies don’t tend to fight for this, but there could be great benefits if they did.

In my judgement, for Australia, the top priorities should be strategies 2 and 5 followed by 6. Strategy 4 has good potential in certain cases. If these four strategies were delivered, the case for strategy 1 would be greatly increased (once the politics made that feasible). To succeed, strategies 2, 5 and 6 would need an investment in training and expert support within environmental organisations. Over time, in those environmental organisations that don’t already perform well in relation to strategies 2, 5 and 6 (i.e. most of them), there may be a need for cultural change, which requires leadership and patience.

In Europe and the USA, my first choice would be strategy 7, if it was politically feasible. After that, 2, 5, 6 and 4 again.

Further Reading

Garrick, D., McCann, L., Pannell, D.J. (2013). Transaction costs and environmental policy: Taking stock, looking forward, Ecological Economics 88, 182-184. Journal web site

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Improving environmental decisions: a transaction-costs story, Ecological Economics 88, 244-252. Journal web site ◊ IDEAS page

Pannell, D.J. (2009). Technology change as a policy response to promote changes in land management for environmental benefits, Agricultural Economics 40(1), 95-102. Journal web page ◊ Prepublication version

Pannell, D.J. (2013). Ranking environmental projects, Working Paper 1312, School of Agricultural and Resource Economics, University of Western Australia. IDEAS page ◊ Blog series