
338 – Modelling COVID-19

When there is a serious epidemic or a pandemic such as COVID-19, numerous epidemiological modelling groups around the world get busy. How should these various groups be coordinated to generate the most useful information to guide how an outbreak should be managed?

I’m excited to say that I have a new paper out in Science that addresses this question. In doing this project, I got to rub shoulders (in a virtual sense) with an international team of six modellers and epidemiologists from the US, the UK and China.

I was invited to join the team by the lead author, Katriona Shea, an ecologist from Penn State University who specialises in the management of populations of plants and animals and of disease outbreaks. She spent a sabbatical with us in the Centre for Environmental Economics and Policy at UWA in 2018, learning about economics and behaviour.

Katriona found that the sorts of things we do could be useful in her world. Aspects of our proposed modelling process were designed with behaviour change (by modelling teams) in mind.

Here’s an extract from the official news release from Science. The full release is here.

“A new process to harness multiple disease models for outbreak management has been developed by an international team of researchers. The team, which describes the process in a paper appearing May 8 in the journal Science, was awarded a Grant for Rapid Response Research (RAPID) from the National Science Foundation to immediately implement the process to help inform policy decisions for the COVID-19 outbreak.

During a disease outbreak, many research groups independently generate models, for example projecting how the disease will spread, which groups will be impacted most severely, or how implementing a particular management action might affect these dynamics. These models help inform public health policy for managing the outbreak.

“While most models have strong scientific underpinnings, they often differ greatly in their projections and policy recommendations,” said Katriona Shea, professor of biology and Alumni Professor in the Biological Sciences, Penn State. “This means that policymakers are forced to rely on consensus when it appears, or on a single trusted source of advice, without confidence that their decisions will be the best possible.”

We designed our process to achieve a number of aims.

  1. Get the modelling groups working on the issues that will be most helpful for decision making.
  2. Help decision makers tap into the expertise of the full range of modelling groups. Currently, they sometimes pick a winner and go with the predictions of a single model, ignoring the significant variation between models.
  3. Foster learning between the groups, so as to maximise the quality of predictions made. Currently, when multiple models are used, the usual approach is to just take an average of their results. Our process requires the modelling groups to discuss the reasons for their differences, and to adjust their models if appropriate once they understand those reasons.
  4. Reduce bias in the decision process. Likely biases to guard against include dominance effects (agreeing with field “leaders”), starting-point bias or anchoring (focusing on suggestions raised early in the process to the detriment of other ideas), and groupthink (where a psychological desire for cohesiveness causes a group of collaborators to minimize conflict and reach a consensus without sufficient critical evaluation).
  5. Don’t delay the decision-making process.
  6. Make it attractive for the modelling groups to participate in the process.

Our process works as follows.

(a) The decision-making body defines the objective (e.g., minimise caseload), specifies the management options to be assessed, and communicates these to multiple modelling teams (Aims 1 and 2).

(b) The teams model the specified management options, working independently to avoid prematurely locking in on a certain way of thinking (Aims 2 and 4).

(c) The decision-making body coordinates a process where the modelling teams discuss their results, providing feedback and ideas to each other, and learning how they might improve their models (Aim 3).

(d) The teams again work independently (Aim 4) to produce another set of model results with their improved models. The full set of results is collated and considered by decision makers, not just the average (Aim 2).

(e) Information from step (b) can be used for initial decision making, without waiting for steps (c) and (d), so no time is lost (Aim 5). If the new results from step (d) indicate that the best management response differs from the one initially indicated, the response can be adjusted. We’ve seen plenty of adaptations to strategies over time by governments in the current pandemic.

(f) Benefits for the modelling teams themselves (Aim 6) include that they still essentially operate independently and can publish their own work; that the final quality of their model predictions is probably better; and that they can be confident that their results will be explicitly considered by the decision makers.
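To make the flow of steps (a) to (e) concrete, here is a minimal sketch in Python. The team names, management options, and caseload numbers are invented purely for illustration; the paper does not prescribe any particular data format or aggregation method.

```python
# (a) The decision-making body specifies the options to assess.
options = ["option_A", "option_B"]

# (b) Each team independently projects caseload under each option.
# (All numbers are hypothetical.)
round1 = {
    "team1": {"option_A": 900, "option_B": 1200},
    "team2": {"option_A": 1500, "option_B": 1100},
    "team3": {"option_A": 800, "option_B": 1400},
}

def summarise(results, option):
    """Collate the full set of projections for one option (Aim 2),
    rather than reducing them to a single average."""
    vals = sorted(r[option] for r in results.values())
    return {"min": vals[0], "max": vals[-1],
            "mean": sum(vals) / len(vals)}

# (e) An initial decision can be made from round-1 results, so the
# discussion and revision steps cause no delay.
for opt in options:
    print(opt, summarise(round1, opt))

# (c)-(d) After structured discussion, teams independently revise their
# models and resubmit; decision makers re-collate the full set of results
# and adjust the management response if the ranking of options changes.
```

The key design choice, reflecting Aim 2, is that the decision makers see the minimum, maximum and mean across teams, not a single averaged projection.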

In some ways, this might seem like a common-sense approach, but in practice, it is rather different from what is currently done, at least in the contexts that the team of authors is aware of.

It is particularly exciting that Katriona has managed to obtain funding to roll out this approach immediately. She is already working with a collection of modelling groups in the US. The team will share results with the U.S. Centers for Disease Control and Prevention as they are generated.

Further reading

Shea, K., Runge, M.C., Pannell, D., Probert, W., Shou-Li, L., Tildesley, M. and Ferrari, M. (2020). Harnessing the power of multiple models for outbreak management, Science 368(6491), 577-579. Journal web page

326 – 60-second videos about our research

My School at the University of Western Australia is having a competition amongst staff and students to produce a 60-second video that says something interesting and engaging about our research.

I’ve put in two entries. The first one, about farmer adaptation to climate change, is the fun one.

The second one, about water pollution, is more traditional, but I hope it’s still interesting.

I’m also included in a third really creative entry that was put together by Maksym Polyakov.

Wish us luck. The winner will be announced in December.

Further reading

Thamo, T., Addai, D., Kragt, M.E., Kingwell, R., Pannell, D.J., and Robertson, M.J. (2019). Climate change reduces the mitigation obtainable from sequestration in an Australian farming system, Australian Journal of Agricultural and Resource Economics (forthcoming). Journal web page

Thamo, T., Addai, D., Pannell, D.J., Robertson, M.J., Thomas, D.T. & Young, J.M. (2017). Climate change impacts and farm-level adaptation: Economic analysis of a mixed cropping–livestock system, Agricultural Systems 150, 99-108. Journal web page * IDEAS page

Pannell, D.J. (2017). Economic perspectives on nitrogen in farming systems: managing trade-offs between production, risk and the environment, Soil Research 55, 473-478. Journal web page

Rogers, A.A., Burton, M.P., Cleland, J.A., Rolfe, J., Meeuwig, J.J. & Pannell, D.J. (2017). Expert judgements and public values: preference heterogeneity for protecting ecology in the Swan River, Western Australia, Working Papers 254025, University of Western Australia, School of Agricultural and Resource Economics. IDEAS page

318 – Measuring impacts from environmental research

There have been some studies of the relationship between research and environmental policy, but studies capturing the impact of research on environmental management, environmental policy, and environmental outcomes are relatively rare. Here is one attempt.

Environmental research may generate benefits in a variety of ways including by providing: information or technology that allows improved management of an environmental issue; information that fosters improved decision-making about priorities for environmental management or policy; or information about an environmental issue that is of intrinsic interest to the community. There are several reasons why it can be worth measuring the impacts of environmental research, including making a case for the funding of environmental research, informing decisions about research priorities, and helping researchers to make decisions about their research that increase its ultimate benefits.

Earlier this year we released the results of an assessment of the engagement and impacts of a particular environmental research centre, the ARC Centre of Excellence for Environmental Decisions (CEED). The assessment includes impacts on policy, management and the community, as well as measures of academic performance, including publications, citations and collaborations. Data were collected in several ways: a survey of all project leaders for the Centre’s 87 projects, the preparation of detailed case studies for selected projects, and collection of statistics on publications, citations and collaborations.

The approach taken was informed by a recent paper of ours called “Policy-oriented environmental research: What is it worth?” (Pannell et al. 2018). The full report is available here.

The Centre’s engagement with end users and stakeholders was strong in Australia and around the world. Researchers reported many examples of engagement with research users involved in policy and management. Results were highly heterogeneous and somewhat skewed, with the majority of observed impact occurring in a minority of the projects.

For almost half of the projects, the potential future increase in impact was assessed as being moderate or high. To some extent, this reflects the time lags involved in research attempting to influence policy and management, but the information was also used to identify projects for which additional engagement effort could be beneficial. The correlation between impact and academic performance was positive but low.

To obtain richer detail about impacts, detailed case studies were prepared for nine research projects. The projects were selected to be diverse, rather than representative. These case studies highlight the unique circumstances faced by each project in endeavouring to have an impact. Each project must be framed within a strong understanding of its domain, and its researchers must be deeply engaged with research users, if impact is to occur. Substantial benefits for policy or management are apparent in a number of the case studies.

A factor contributing greatly to the impact of CEED was the research communication magazine Decision Point. This publication was widely accepted as a valued communication resource for academic findings in the field of environmental decision sciences, and was rated by people in government and academic institutions as relevant and informative.

Some valuable lessons and implications of the impact analysis are identified in the report. Research impact does not depend only on good relationships, engagement and communication, but also importantly on what research is done. Therefore, embedding a research culture that values impact and considers how it may be achieved before the selection of research projects is potentially important. The role of the Centre leadership team in this is critical. Embedding impact into the culture of a centre likely occurs more effectively if expertise in project evaluation is available internally, either through training or appointments.

A challenge in conducting this analysis was obtaining information related to engagement and impact. There may be merit in institutionalising the collection of impact-related data from early in the life of a new research centre.

Interestingly, we found little relationship between (a) impact from translation and engagement and (b) measures of academic merit. It should not be presumed that the most impactful projects will be those of greatest academic performance.

At the time of the assessment, CEED had generated 848 publications, which had been cited 14,996 times according to the Web of Science. CEED publications are disproportionately among the most cited papers in their disciplines. More than a quarter of CEED publications are in the top 10% of the literature, based on their citations. The citations of 39 CEED publications (one in 22) place them in the top 1% of their academic fields over the past 10 years.

There are often long lags between the start of research and delivering the impact — decades in many cases. Therefore, there is a need to allow the longest possible time lag when assessing research impact. On shorter timescales, it may be possible to detect engagement, but not the full impact that will eventually result.

Further reading

Pannell, D.J., Alston, J.M., Jeffrey, S., Buckley, Y.M., Vesk, P., Rhode, J.R., McDonald-Madden, E., Nally, S., Gouche, G. and Thamo, T. (2018). Policy-oriented environmental research: What is it worth? Environmental Science and Policy 86, 64-71. Journal web page

Thamo, T., Harold, T., Polyakov, M. and Pannell, D. (2018). Assessment of Engagement and Impact for the ARC Centre of Excellence for Environmental Decisions, CEED, University of Queensland. http://ceed.edu.au/resources/impact-report.html

279 – Garbage in, garbage out?

As the developer of various decision tools, I’ve lost track of the number of times I’ve heard somebody say, in a grave, authoritative tone, “a model is only as good as the information you feed into it”. Or, more pithily, “garbage in, garbage out”. It’s a truism, of course, but the implications for decision makers may not be quite what you think.

The value of the information generated by a decision tool depends, of course, on the quality of input data used to drive the tool. Usually, the outputs from a decision tool are less valuable when there is poor-quality information about the inputs than when there is good information.

But what should we conclude from that? Does it mean, for example, that if you have poor quality input information you may just as well make decisions in a very simple ad hoc way and not worry about weighing up the decision options in a systematic way? (In other words, is it not worth using a decision tool?) And does it mean that it is more important to put effort into collecting better input data rather than improving the decision process?

No, these things do not follow from having poor input data. Here’s why.

Imagine a manager looking at 100 projects and trying to choose which 10 projects to give money to. Let’s compare a situation where input data quality is excellent with one where it is poor.

From simulating hundreds of thousands of decisions like this, I’ve found that systematic decision processes that are consistent with best-practice principles for decision making (see Pannell 2013) do a reasonable job of selecting the best projects even when there are random errors introduced to the input data. On the other hand, simple ad hoc decision processes that ignore the principles often result in very poor decisions, whether the input data is good, bad or indifferent.

Not every decision made using a sound decision process is correct, but overall, on average, they are markedly better than quick-and-dirty decisions. So “garbage in, garbage out” is misleading. If you look across a large number of decisions (which is what you should do), then a better description for a good decision tool could be “garbage in, not-too-bad out”. On the other hand, the most apt description for a poor decision process could be “treasure or garbage in, garbage out”.
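To illustrate the point, here is a toy reconstruction of the kind of simulation described above (it is my own sketch, not the actual analysis behind the results). A systematic process ranks projects by an estimated benefit-cost ratio, consistent with the principles in Pannell (2013), while an ad hoc process ranks by estimated benefit alone, ignoring cost. Both are run under mild and severe random noise in the input data. All distributions and noise levels are assumptions chosen for illustration only.

```python
import random

random.seed(1)

def simulate(noise_sd, rank_key, n_projects=100, n_select=10, reps=500):
    """Average fraction of the best attainable value captured when the
    top n_select projects are chosen by rank_key applied to noisy data."""
    captured = 0.0
    for _ in range(reps):
        projects = []
        for _ in range(n_projects):
            benefit = random.lognormvariate(0, 1)   # true benefit
            cost = random.lognormvariate(0, 1)      # true cost
            # Noisy observations of both inputs.
            obs_b = benefit * random.lognormvariate(0, noise_sd)
            obs_c = cost * random.lognormvariate(0, noise_sd)
            projects.append((benefit, cost, obs_b, obs_c))
        chosen = sorted(projects, key=rank_key, reverse=True)[:n_select]
        best = sorted(projects, key=lambda p: p[0] / p[1],
                      reverse=True)[:n_select]
        captured += (sum(p[0] / p[1] for p in chosen) /
                     sum(p[0] / p[1] for p in best))
    return captured / reps

bcr = lambda p: p[2] / p[3]   # systematic: observed benefit-cost ratio
ad_hoc = lambda p: p[2]       # ad hoc: observed benefit only, cost ignored

results = {sd: (simulate(sd, bcr), simulate(sd, ad_hoc))
           for sd in (0.2, 1.0)}   # mild vs severe input-data noise
for sd, (systematic, adhoc) in results.items():
    print(f"noise={sd}: systematic {systematic:.2f}, ad hoc {adhoc:.2f}")
```

In runs of this sketch, the systematic ranking captures most of the attainable value at either noise level, while the ad hoc ranking lags behind regardless of data quality: the choice of decision process matters more than the accuracy of the inputs.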

An interesting question is, if you are using a good process, why don’t random errors in the input data make a bigger difference to the outcomes of the decisions? Here are some reasons.

Firstly, poorer quality input data only matters if it results in different decisions being made, such as a different set of 10 projects being selected. In practice, over a large number of decisions, the differences caused by input data uncertainty are not as large as you might expect. For example, in the project-selection problem, there are several reasons why data uncertainty may have only a modest impact on which projects are selected:

  • Uncertainty doesn’t mean that the input data for all projects is wildly inaccurate. Some are wildly inaccurate, but some, by chance, are only slightly inaccurate, and some are in between. The good projects with slightly inaccurate data still get selected.
  • Even if the data is moderately or highly inaccurate, it doesn’t necessarily mean that a good project will miss out on funding. Some good projects look worse than they should do as a result of the poor input data, but others are actually favoured by the data inaccuracies, so of course they still get selected. These data errors that reinforce the right decisions are not a problem.
  • Some projects are so outstanding that they still seem worth investing in even when the data used to analyse them is somewhat inaccurate.
  • When ranking projects, there are a number of different variables to consider (e.g. values, behaviour change, risks, etc.). There is likely to be uncertainty about all of these to some extent, but the errors won’t necessarily reinforce each other. In some cases, the estimate of one variable will be too high, while the estimate of another variable will be too low, such that the errors cancel out and the overall assessment of the project is about right.

So input data uncertainty means that some projects that should be selected miss out, but many good projects continue to be selected.

Even where there is a change in project selection, some of the projects that come in are only slightly less beneficial than the ones that go out. Not all, but some.

Putting all that together, inaccuracy in input data only changes the selection of projects for those projects that: happen to have the most highly inaccurate input data; are not favoured by the data inaccuracies; are not amongst the most outstanding projects anyway; and do not have multiple errors that cancel out. Further, the changes in project selection that do occur only matter for the subset of incoming projects that are much worse than the projects they displace. Many of the projects that are mistakenly selected due to poor input data are not all that much worse than the projects they displace. So input data uncertainty is often not such a serious problem for decision making as you might think. As long as the numbers we use are more-or-less reasonable, results from decision making can be pretty good.

To me, the most surprising outcome from my analysis of these issues was the answer to the second question: is it more important to put effort into collecting better input data rather than improving the decision process?

As I noted earlier, the answer seems to be “no”. For the project choice problem I described earlier, the “no” is a very strong one. In fact, I found that if you start with a poor quality decision process, inconsistent with the principles I’ve outlined in Pannell (2013), there is almost no benefit to be gained by improving the quality of input data. I’m sure there are many scientists who would feel extremely uncomfortable with that result, but it does make intuitive sense when you think about it. If a decision process is so poor that its results are only slightly related to the best possible decisions, then of course better information won’t help much.

Further reading

Pannell, D.J. and Gibson, F.L. (2014) Testing metrics to prioritise environmental projects, Australian Agricultural and Resource Economics Society Conference (58th), February 5-7, 2014, Port Macquarie, Australia. Full paper

Pannell, D.J. (2013). Ranking environmental projects, Working Paper 1312, School of Agricultural and Resource Economics, University of Western Australia. Full paper

273 – Behaviour change comes in pairs

Some key factors that drive adoption of new practices come in pairs: one aspect related to the performance of the new practice, and one aspect related to how much people care about that performance. Many models of adoption miss this, including famous ones.

Whatever work or hobbies we do, there are regularly new practices coming along that we are encouraged to adopt: new technologies (e.g. a new iPhone, an auto-steer crop harvester), or different behaviours (e.g. reducing our usage of energy or water, changing the allocation of land to different crops).

The agricultural examples above reflect that some of my research is on adoption of new practices by farmers, but the issue I’m talking about today is relevant in all spheres where people adopt new practices.

It is well recognised that people vary in the personal goals that drive their choices about whether to adopt new practices that are promoted to them. Amongst commercial farmers, for example, there are differences in the emphases they give to profit, risk and environmental outcomes.

Any attempt to understand or model adoption of new practices needs to recognise the potential importance of these different goals. Many studies do include variables representing these three goals, and sometimes others.

However, it is less often recognised that there are two aspects to each of these goals when looking at a new practice:

  1. The extent to which the new practice would deliver the outcome measured by that goal: more profit, less risk, or better environmental outcomes.
  2. How much the decision maker cares about those particular outcomes.

These two aspects are closely linked. They interact to determine how attractive a new practice is, but they are distinctly different. One is not a proxy for the other.

For example, suppose a farmer is considering two potential new practices for weed control. The farmer judges that new practice A is much riskier (less reliable) than new practice B.

How much will this affect the farmer’s decision making? That depends on the farmer’s attitude to risk. For a farmer who has a strong aversion to risk, practice B will be strongly favoured, at least from the risk perspective. (Other goals will probably also come into play as well.) For a farmer who doesn’t care about risk one way or the other, the difference in riskiness between practices A and B is of no consequence. Some farmers (a minority) have been found to be risk-seeking, so they would prefer practice A.

The same sort of pattern occurs with other goals as well. The attractiveness of a new practice depends on how much difference it makes to profit and on how strongly the farmer is motivated by profit. Or how much it affects the environment and how strongly the farmer cares about the environment.
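One way to see the interaction between the two aspects is a simple additive multi-attribute sketch, shown below. This is my own illustration, not the ADOPT framework itself, and all of the scores and weights are made up. Practice A is assumed to be more profitable but riskier than practice B; which one looks more attractive then depends entirely on the weight a given farmer places on risk.

```python
# Hypothetical effect of each practice on each goal, where positive
# scores are desirable (so a riskier practice gets a negative risk score).
practices = {
    "A": {"profit": 0.8, "risk": -0.6, "environment": 0.1},
    "B": {"profit": 0.5, "risk": 0.4, "environment": 0.1},
}

# Hypothetical weights: how much each farmer cares about each goal.
farmers = {
    "risk_averse":  {"profit": 1.0, "risk": 0.8, "environment": 0.2},
    "risk_neutral": {"profit": 1.0, "risk": 0.0, "environment": 0.2},
}

def attractiveness(practice, weights):
    """Both aspects enter multiplicatively: performance on a goal
    matters only to the extent the decision maker weights that goal."""
    return sum(weights[goal] * practice[goal] for goal in practice)

for farmer, weights in farmers.items():
    for name, scores in practices.items():
        print(farmer, name, round(attractiveness(scores, weights), 2))
```

With these illustrative numbers, the risk-averse farmer prefers practice B while the risk-neutral farmer prefers practice A, even though both face identical practices. Dropping either the scores or the weights from the model would make this difference in behaviour impossible to explain.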

Amongst the thousands of research studies of farmer adoption of new practices, most represent only one goal-related variable where two are needed. For example, they include a measure of risk aversion, but ignore differences in the level of riskiness of the new practice amongst different adopters. Or they represent differences in the profitability of the new practice, but not differences in how much the adopters care about profit.

It doesn’t help that the issue is not recognised in common conceptual frameworks used by social scientists studying adoption behaviour, such as the Theory of Reasoned Action (Fishbein and Ajzen 1975) and the Theory of Planned Behaviour (Ajzen 1991).

It should be recognised in a sound economics framework (e.g. Abadi Ghadim and Pannell 1999 do so for risk), but it often isn’t included in the actual numerical model that is estimated.

The only framework I’ve seen that really captures this issue properly is our framework for ADOPT – the Adoption and Diffusion Outcome Prediction Tool. Hopefully this insight can diffuse to other researchers over time.

Further reading

Abadi Ghadim, A.K. and Pannell, D.J. (1999). A conceptual framework of adoption of an agricultural innovation, Agricultural Economics 21, 145-154. Journal web page * IDEAS page

Ajzen, I. (1991). The theory of planned behavior, Organizational Behavior and Human Decision Processes 50, 179-211.

Fishbein, M. and Ajzen, I. (1975). Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research. Reading, MA: Addison-Wesley.