Category Archives: Research

259 – Increasing environmental benefits

It is obvious that the budgets of our public environmental programs are small relative to the cost of fixing all of our environmental problems. If we want to achieve greater environmental benefits from our public investments, what, in broad terms, are the options?

I remember seeing a graph last year – I think it was from the Australian Bureau of Statistics – showing the level of concern felt by the Australian community about environmental issues. It looked to have peaked a few years ago, and was pretty flat, or slightly declining. In that context, the prospects for a big increase in environmental spending over time don’t look good, particularly given the general tightness of government budgets. So I wondered: if we wanted to double the environmental values protected or enhanced by our public programs, what would the options be? I came up with seven. I’ll list them here, and briefly comment on the potential effectiveness, cost and political feasibility of each.

  1. Double the budget. Effectiveness: high (in the sense that we could actually double the environmental benefits generated). Cost: high. Politics: very unlikely in the foreseeable future. It wouldn’t be my first priority, anyway. Increasing the budget would be more effective if we first delivered some of the strategies below.
  2. Improve the prioritisation of environmental investments. Improve the use of evidence, the quality of decision metrics (Pannell 2013), and the quality of evaluation of proposals (a simple illustration of metric-based ranking is sketched after this list). Effectiveness: high (because most programs currently have major deficiencies in these areas). Cost: low, especially relative to doubling the budget. Politics: Implies a higher degree of selectivity, which some stakeholders dislike. Probably means funding fewer, larger projects. Achievable for part of the budget, but the politics probably require a proportion to be spent along traditional lines (relatively unprioritised).
  3. Encourage more voluntary pro-environmental action through education, persuasion, peer pressure and the like. Effectiveness: commonly low, moderate in some cases. Cost: moderate. Politics: favourable.
  4. Increase the share of environmental funds invested in research and development to create pro-environmental technologies (Pannell 2009). Note that this is about creation of new technologies, rather than information. Examples could include more effective baits for feral cats, new types of trees that are commercially viable in areas threatened by dryland salinity, or new renewable energy technologies. Effectiveness: case-specific – high in some cases, low in others. Cost: moderate. Politics: requires a degree of patience, which can be politically problematic. It may also conflict with community desire to spend resources directly on on-ground works (even if the existing technologies are not suitable). There tends to be a preference for research funding to come from the research budget rather than the environment budget, although this likely means that it is not as well targeted at solving the most important environmental problems.
  5. Improve the design of environmental projects and programs. Improve evidence basis for identifying required actions. Improve selection of delivery mechanisms. Improve the logical consistency of projects. Effectiveness: high (because a lot of existing projects are not well founded on evidence, and/or don’t use appropriate delivery mechanisms, and/or are lacking in internal logical consistency). Cost: low. Politics: Implies changes in the way that projects are developed, with longer lead times, which may not be popular. There may be a perception of high transaction costs from this strategy (although they would be low relative to the benefits) (Pannell et al. 2013).
  6. Increase the emphasis on learning and using better information. Strategies include greater use of detailed feasibility studies, improved outcome-oriented monitoring, and active adaptive management. Effectiveness: moderate to high. Would feed into, and further improve, options 2 and 5. Cost: low. Politics: main barrier is political impatience, and a view that decisions based on judgement are sufficient even in the absence of good information. Often that view is supported/excused by an argument that action cannot and should not wait (which is a reasonable argument in certain cases, but usually is not).
  7. Reform inefficient and environmentally damaging policies and programs. Examples include subsidies for fossil fuels, badly designed policies supporting biofuels in Europe and in the USA, and agricultural subsidies. This strategy is quite unlike the other strategies discussed here, but it has enormous potential to generate environmental benefits in countries that have these types of policies. Successful reform would be not just costless, but cost-saving. Effectiveness: very high in particular cases. Cost: negative. Politics: difficult to very difficult. People with a vested interest in existing policies fight hard to retain them. Environmental agencies don’t tend to fight for this, but there could be great benefits if they did.
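
To make strategy 2 more concrete, here is a minimal sketch of metric-based ranking, using made-up projects and numbers (the decision metric developed in Pannell (2013) is considerably more complete than this). Each project gets an expected benefit per dollar, and a fixed budget is allocated down the ranked list:

```python
# Minimal sketch of metric-based prioritisation (strategy 2).
# All projects, values and probabilities are hypothetical illustrations.

# (name, environmental value V ($), probability of success P, cost C ($))
projects = [
    ("Wetland restoration",    8_000_000, 0.6, 2_000_000),
    ("Feral predator control", 5_000_000, 0.8, 1_000_000),
    ("Riparian fencing",       3_000_000, 0.5, 1_500_000),
    ("Revegetation corridor",  6_000_000, 0.4, 4_000_000),
]

budget = 4_000_000

# Rank by a simple benefit-cost ratio: expected value (V x P) per dollar of cost.
ranked = sorted(projects, key=lambda p: p[1] * p[2] / p[3], reverse=True)

funded = []
for name, value, prob, cost in ranked:
    if cost <= budget:  # fund projects in priority order while money remains
        funded.append(name)
        budget -= cost

print("Funded, in priority order:", funded)
```

Even this crude version has the political implication noted in the list: ranking means explicit selectivity, and some projects miss out.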

In my judgement, for Australia, the top priorities should be strategies 2 and 5 followed by 6. Strategy 4 has good potential in certain cases. If these four strategies were delivered, the case for strategy 1 would be greatly increased (once the politics made that feasible). To succeed, strategies 2, 5 and 6 would need an investment in training and expert support within environmental organisations. Over time, in those environmental organisations that don’t already perform well in relation to strategies 2, 5 and 6 (i.e. most of them), there may be a need for cultural change, which requires leadership and patience.

In Europe and the USA, my first choice would be strategy 7, if it was politically feasible. After that, 2, 5, 6 and 4 again.

Further Reading

Garrick, D., McCann, L., Pannell, D.J. (2013). Transaction costs and environmental policy: Taking stock, looking forward, Ecological Economics 88, 182-184. Journal web site

Pannell, D.J., Roberts, A.M., Park, G. and Alexander, J. (2013). Improving environmental decisions: a transaction-costs story, Ecological Economics 88, 244-252. Journal web site ♦ IDEAS page

Pannell, D.J. (2009). Technology change as a policy response to promote changes in land management for environmental benefits, Agricultural Economics 40(1), 95-102. Journal web site ♦ Prepublication version

Pannell, D.J. (2013). Ranking environmental projects, Working Paper 1312, School of Agricultural and Resource Economics, University of Western Australia. IDEAS page ♦ Blog series

234 – The benefits of environmental research

There has been a lot of research on the benefits of research, but little of it has addressed environmental research. In some ways, this is understandable, as it’s difficult. But we need to develop better ways to estimate these benefits as researchers are increasingly asked to justify their funding and quantify their impacts.

I organised a small workshop in Brisbane a few weeks ago on estimating the benefits of environmental research. If we could generate this information, it would be useful in several ways. It could be used to make judgements about whether particular research projects are worth doing, to identify priorities from a set of potential projects, and to make the case for continued funding of environmental research. Also, the process of working out the likely benefits could help us understand the ways that research generates benefits, and that might help us to do a better job of generating benefits.

However, as we quickly agreed at the workshop, this is a very difficult thing to do well. For one thing, there are so many different types of environmental research with different possible uses and impacts, and some of them need different thinking and approaches to analysis.

We decided to focus our attention on the type of research that is least well served by existing tools and frameworks: research that is intended to influence environmental policy. It turns out that this is the most neglected aspect for a reason – it’s the most difficult one to deal with.

You can see why it’s difficult from the following list of stages that one must go through, starting from research and ending up with real-world benefits.

  • Funding is allocated to research and research is done
  • Something useful is learned – new information is generated (or isn’t)
  • The new information influences policy/management (or doesn’t)
  • Policy change is implemented by policy makers (or isn’t)
  • If the purpose of the policy is to change the behaviour of people or businesses, these people respond to the changed policy (or don’t)
  • Changes in the environment result (or not), relative to what would have happened without the research, including unexpected or unintended consequences

To estimate benefits, we need to estimate what happened (or predict what will happen) at each of these stages. If one link in the chain breaks, benefits are not generated. We also need to estimate (or predict) what would have happened in the absence of the research – something you can’t actually observe, even if the research has been completed and had its impacts.
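
To see how the stages combine, here is a minimal sketch of my own (not a framework agreed at the workshop) that treats each stage as a success probability and multiplies along the chain. The numbers are hypothetical, and treating the stages as independent is itself a simplification:

```python
# Minimal sketch: expected benefit of policy-oriented research as a chain of stages.
# All probabilities and dollar values are hypothetical illustrations.

stage_probabilities = {
    "useful new information generated":   0.7,
    "information influences policy":      0.3,
    "policy change implemented":          0.6,
    "people/businesses change behaviour": 0.5,
    "environmental change realised":      0.8,
}

value_if_chain_succeeds = 10_000_000  # $ value of the environmental outcome (hypothetical)
value_without_research  =  2_000_000  # counterfactual: change that happens anyway (hypothetical)

# Multiply the probabilities along the chain: one broken link (p = 0) zeroes the lot.
p_chain = 1.0
for stage, p in stage_probabilities.items():
    p_chain *= p

expected_benefit = p_chain * (value_if_chain_succeeds - value_without_research)
print(f"P(all stages succeed) = {p_chain:.3f}")  # 0.050 with these numbers
print(f"Expected benefit = ${expected_benefit:,.0f}")
```

Even with respectable odds at each stage, the compounded probability of impact is small, and the counterfactual term is exactly the thing that can’t be observed directly.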

Research that aims to influence policy is particularly difficult to assess, because the process of policy change is so complex and influenced by numerous factors. It is very difficult to judge what proportion of any particular change may be attributable to the research rather than other factors. This is recognised in the literature as the attribution problem.

Despite all the difficulties, we found that the existing frameworks for research evaluation provided enough of a platform for us to think productively about what we would do for this type of research. A team of us will be working on this challenge over the next while. We aim to work out what would be needed for a comprehensive, rigorous framework, and from that produce a set of principles and perhaps rules of thumb that researchers, research funders and policy makers can use when they need to think about the benefits of policy-oriented environmental research.

Further reading

van der Most, F. (2010). Use and non-use of research evaluation: A literature review, Paper no. 2010/16, Circle, Lund University, Sweden. Here

233 – Journal refereeing

Peer review of research is a key mechanism for quality control used in science. Unfortunately, some reviewers (or referees) perform their task in a hard and heartless way.

Back in 2002 I published a poem about this in a refereed journal article. I’m pretty pleased with this – you don’t see many poems in refereed journals. This week, somebody told me that my poem had been included (with praise such as, “a beautiful piece of work”) on a web page of econometric poetry. I then did a search and, apart from finding the original paper, I found it reproduced on three other pages (here, here, here), and referred to on several more. Isn’t the web marvellous?

In case you haven’t seen it, here it is.

I’m The Referee
David J. Pannell

You’ve posted in your paper
To a journal of repute
And you’re hoping that the referees
Won’t send you down the chute

You’d better not build up a sense of
False security
I’ve just received your manuscript and
I’m the referee

This power’s a revelation
I’m so glad it’s come to me
I can be a total bastard with
Complete impunity

I used to be a psychopath
But never more will be
I can deal with my frustrations now that
I’m a referee


The poem is therapeutic, as was the paper it was published in (Pannell, 2002), so if you’ve suffered at the hands of referees, you might want to read that too.

Further reading

Pannell, D.J. (2002). Prose, psychopaths and persistence: personal perspectives on publishing. Canadian Journal of Agricultural Economics 50(2), 101–116. Here ♦ IDEAS page for this paper

228 – Majority opinion

This week I saw a senior bureaucrat try to counter dissenting views on a government report by arguing that the great majority of people agreed with it. This is a highly flawed argument.

I was at a public seminar this week, at which the leader of a government inquiry was outlining the findings from the resulting report. He made the observation that the great majority of people seemed to agree with the findings, but that there was a small but vocal minority who did not.

Most of the audience seemed to be broadly on side with the speaker, bearing out his claim, but at least one took a very different position. During question time she made a vehement statement that amounted to a denunciation of the entire report.

The speaker responded in kind. He said he thought she probably hadn’t read the report. When she reacted angrily to that, he asserted, “then you didn’t understand it” and glared at her. Later he once again referred to “the small but vocal minority”, this time adding, “who should be ignored”.

It had been quite an aggressive comment, so perhaps the aggressive response was fair enough in a way, but the result was to close off debate. I guess that’s what the speaker wanted but I don’t think it was appropriate, and it didn’t go down well with some of the audience.

While the exchange of fire was exciting, the main thing I’m going to focus on is the speaker’s implication that the views of a group of people must be wrong and should be ignored because they are the views of a small minority. That’s a terrible argument. If evidence and logic are with the minority, it doesn’t matter how few in number they are. As Einstein said in response to a Nazi pamphlet titled “100 Authors Against Einstein”, “If I had been wrong, one would have been enough”.

I’m not saying that the commenter was right. I haven’t read the report in question, so I can’t judge. All I’m saying is that the speaker was wrong to point to the number of people who agreed with the report as an indication that it was sound.

I have personal experience of being in an absolutely minuscule minority on a controversial issue, but eventually being accepted as the one who was correct. The issue was dryland salinity in Australia. My analysis in the early 2000s, drawing together the hydrogeology, economics and sociology of salinity, led me to conclude that the existing policy emphasis on integrated catchment management was misguided in most cases (Pannell, 2001a, 2001b).

Also, salinity management recommendations at the time emphasised the need for all farmers in a catchment to cooperate due to their supposed hydrological inter-dependence. I concluded that this too was misguided, initially for Western Australia (Pannell et al., 2001) and later more generally (Beverly et al., 2011).

For a long time I was the only one who was publicly putting these views, which ran counter to the way that almost everybody was thinking about the problem. An implication of my conclusions was that many millions of dollars were being wasted in public programs to fight salinity.

When I tried to put my position in conferences and meetings, I often generated strong negative reactions, including derision and anger. Many people working in the area said that I didn’t know what I was talking about and rejected my arguments out of hand. One notable incident involved a very public tirade of abuse from a fellow economist following my Presidential Address to the Australian Agricultural and Resource Economics Society in 2001.

Over time, the weight of evidence supporting my position got stronger and stronger (e.g. Barrett-Lennard et al., 2005), and I won people over, or maybe just wore them down. By the time of the Second International Salinity Forum in Adelaide in 2008, my views that had seemed heretical to many in 2001 had become the new orthodoxy.

The point is, it doesn’t matter how small the minority is: they might be right. Logic and evidence are what matter, not weight of numbers.

In fact, whenever there is a really important advance in knowledge that overturns a previous misconception, by definition, the person with the new insight is initially in the small minority.

The salinity experience has made me particularly attuned to the possibility that the majority can be wrong. As a result, I worry about the heavy reliance on scientific consensus as an argument in the climate debate. The consensus might well be right, but the fact that there is a consensus is not, in itself, an argument that should be convincing. There was an almost unanimous consensus about these two aspects of salinity management and policy that turned out to be wrong.

Further reading

Barrett-Lennard, E.G., George, R.J., Hamilton, G., Norman, H.C., Masters, D.G. (2005). Multi-disciplinary approaches suggest profitable and sustainable farming systems for valley floors at risk of salinity, Australian Journal of Experimental Agriculture 45: 1415–1424. Journal web site here

Beverly, C., Roberts, A., Hocking, M., Pannell, D. and Dyson, P. (2011). Using linked surface-groundwater catchment modelling to assess protection options for environmental assets threatened by dryland salinity in southern-eastern Australia, Journal of Hydrology 410: 13-30. Journal web site here

Pannell, D.J. (2001a). Salinity policy: A tale of fallacies, misconceptions and hidden assumptions, Agricultural Science 14(1): 35-37. Here

Pannell, D.J. (2001b). Dryland Salinity: Economic, Scientific, Social and Policy Dimensions, Australian Journal of Agricultural and Resource Economics 45(4): 517-546. Journal web site here ♦ IDEAS page for this paper

Pannell, D.J., McFarlane, D.J. and Ferdowsian, R. (2001). Rethinking the externality issue for dryland salinity in Western Australia, Australian Journal of Agricultural and Resource Economics 45(3): 459-475. Journal web site here ♦ IDEAS page for this paper

Pannell, D.J. and Roberts, A.M. (2010). The National Action Plan for Salinity and Water Quality: A retrospective assessment, Australian Journal of Agricultural and Resource Economics 54(4): 437-456. Journal web site here ♦ IDEAS page for this paper

226 – Modelling versus science

Mick Keogh, from the Australian Farm Institute, recently argued that “much greater caution is required when considering policy responses for issues where the main science available is based on modelled outcomes”. I broadly agree with that conclusion, although there were some points in the article that didn’t gel with me. 

In a recent feature article in Farm Institute Insights, the Institute’s Executive Director Mick Keogh identified increasing reliance on modelling as a problem in policy, particularly policy related to the environment and natural resources. He observed that “there is an increasing reliance on modelling, rather than actual science”. He discussed modelling by the National Land and Water Resources Audit (NLWRA) to predict salinity risk, modelling to establish benchmark river condition for the Murray-Darling Rivers, and modelling to predict future climate. He expressed concern that the modelling was based on inadequate data (salinity, river condition) or used poor methods (salinity) and that the modelling results are “unverifiable” and “not able to be scrutinised” (all three). He claimed that the reliance on modelling rather than “actual science” was contributing to poor policy outcomes.

While I’m fully on Mick’s side regarding the need for policy to be based on the best evidence, I do have some problems with some of his arguments in this article.

Firstly, there is the premise that “science and modelling are not the same”. The reality is nowhere near as black-and-white as that. Modelling of various types is ubiquitous throughout science, including in what might be considered the hard sciences. Every time a scientist conducts a statistical test using hard data, she or he is applying a numerical model. In a sense, all scientific conclusions are based on models.

I think what Mick really has in mind is a particular type of model: a synthesis or integrated model that pulls together data and relationships from a variety of sources (often of varying levels of quality) to make inferences or draw conclusions that cannot be tested by observation, usually because the issue is too complex. This is the type of model I’m often involved in building.

I agree that these models do require particular care, both by the modeller and by decision makers who wish to use results. In my view, integrated modellers are often too confident about the results of a model that they have worked hard to construct. If such models are actually to be used for decision making, it is crucial for integrated modellers to test the robustness of their conclusions (e.g. Pannell, 1997), and to communicate clearly the realistic level of confidence that decision makers can have in the results. In my view, modellers often don’t do this well enough.
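
As a minimal sketch of the kind of robustness testing I mean, in the spirit of Pannell (1997): vary each uncertain parameter one at a time over a plausible range and check whether the model’s conclusion changes, not just its numbers. The “model” and all parameter values here are made up for illustration:

```python
# Minimal sketch of one-at-a-time sensitivity analysis for an integrated model.
# The model and parameter values are made up for illustration.

base = {"adoption_rate": 0.4, "benefit_per_ha": 120.0, "cost_per_ha": 80.0}

def net_benefit(params, area_ha=10_000):
    """Toy integrated model: regional net benefit ($) of an intervention."""
    return area_ha * params["adoption_rate"] * (
        params["benefit_per_ha"] - params["cost_per_ha"]
    )

base_result = net_benefit(base)
for name in base:
    for factor in (0.5, 1.5):  # vary each parameter +/-50%, one at a time
        trial = {**base, name: base[name] * factor}
        result = net_benefit(trial)
        # Flag cases where the *conclusion* (sign of net benefit) is not robust.
        flag = "  <-- conclusion reverses" if (result > 0) != (base_result > 0) else ""
        print(f"{name} x{factor}: net benefit = {result:,.0f}{flag}")
```

If the sign flips within plausible parameter ranges, that is precisely the uncertainty that should be reported to decision makers rather than smoothed over.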

But even in cases where they do, policy makers and policy advisors often tend to look for the simple message in model results, and to treat that message as if it was pretty much a fact. The salinity work that Mick criticises is a great example of this. While I agree with Mick that aspects of that work were seriously flawed, the way it was interpreted by policy makers was not consistent with caveats provided by the modellers. In particular, the report was widely interpreted as predicting that there would be 17 million hectares of salinity, whereas it actually said that there would be 17 million hectares with high “risk” or “hazard” of going saline. Of that area, only a proportion was ever expected to actually go saline. That proportion was never stated, but the researchers knew that the final result would be much less than 17 million. They probably should have been clearer and more explicit about that, but it wasn’t a secret.

The next concern expressed in the article was that models “are often not able to be scrutinised to the same extent as ‘normal’ science”. It’s not clear to me exactly what this means. Perhaps it means that the models are not available for others to scrutinise. To the extent that that’s true (and it is true sometimes), I agree that this is a serious problem. I’ve built and used enough models to know how easy it is for them to contain serious undetected bugs. For that reason, I think that when a model is used (or is expected to be used) in policy, the model should be freely available for others to check. It should be a requirement that all model code and data used in policy are made publicly available. If the modeller is not prepared to make it public, the results should not be used. Without this, we can’t have confidence that the information being used to drive decisions is reliable.

Once the model is made available, if the issue is important enough, somebody will check it, and any flaws can be discovered. Or if the time frame for decision making is too tight for that, government may need to commission its own checking process.

This requirement would cause concern among some scientists. In climate science, for example, some scientists have actively fought requests for data and code. (Personally, I think the same requirement should be enforced for peer-reviewed publications, not just for work that contributes to policy. Some leading economics journals do this, but not many in other disciplines.)

Perhaps, instead, Mick intends to say that even if you can get your hands on a model, it is too hard to check. If that is what he means, I disagree. I don’t think checking models generally is harder than checking other types of research. In some ways it is easier, because you should be able to replicate the results exactly.

Then there is the claim that poor modelling is causing poor policy. Of course, that can happen, and probably has happened. But I wouldn’t overstate how great a problem this is at the moment, because model results are only one factor influencing policy decisions, and they often have a relatively minor influence.

Again, the salinity example is illustrative. Mick says that the faulty predictions of salinity extent were “used to allocate funding under the NAP”. Apparently they influenced decisions about which regions would qualify for funding from the salinity program. However, in my judgement, they had no bearing on how much funding each of the 22 eligible regions actually received. That depended mainly on how much and how quickly each state was prepared to allocate new money to match the available Federal money, coupled with a desire to make sure that no region or state missed out on an “equitable” share (irrespective of their salinity threat).

The NLWRA also reported that dryland salinity is often a highly intractable problem. Modelling indicated that, in most locations, a very large proportion of the landscape area would need to be planted to perennials to get salinity under control. This was actually even more important information than the predicted extent of salinity because it ran counter to the entire philosophy of the NAP, of spreading the available money thinly across numerous small projects. But this information, from the same report, was completely ignored by policy makers. The main cause of the failure of the national salinity policy was not that it embraced dodgy modelling about the future extent of salinity, but that it ignored much more soundly based modelling that showed that the strategy of the policy was fundamentally flawed.

Mick proposes that “Modellers may not necessarily be purely objective, and “rent seeking” can be just as prevalent in the science community as it is in the wider community.” The first part of that sentence definitely is true. The last part definitely is not. Yes, there are rent-seeking scientists, but most scientists are influenced to a greater-or-lesser extent by the explicit culture of honesty and commitment to knowledge that underpins science. The suggestion that, as a group, scientists are just as self-serving in their dealings with policy as other groups that lack this explicit culture is going too far.

Nevertheless, despite those points of disagreement, I do agree with Mick’s bottom line that “Governments need to adopt a more sceptical attitude to modelling ‘science’ in formulating future environmental policies”. This is not just about policy makers being alert to dodgy modellers. It’s also about policy makers using information wisely. The perceived need for a clear, simple answer for policy sometimes drives modellers to express results in a way that portrays a level of certainty that they do not deserve. Policy makers should be more accepting that the real world is messy and uncertain, and engage with modellers to help them grapple with that messiness and uncertainty.

Having said this, I’m not optimistic that it will actually happen. There are too many things stacked against it.

Perhaps one positive thing that could conceivably happen is adoption of Mick’s recommendation that “Governments should consider the establishment of truly independent review processes in such instances, and adopt iterative policy responses which can be adjusted as the science and associated models are improved.” You would want to choose carefully the cases when you commissioned a review, but there are cases when it would be a good idea.

Some scientists would probably argue that there is no need for this because their research has been through a process of “peer review” before publication. However, I am of the view that peer review is not a sufficient level of scrutiny for research that is going to be used as the basis for large policy decisions. In most cases, peer review provides a very cursory level of scrutiny. For big policy decisions, it would be a good idea for key modelling results to be independently audited, replicated and evaluated.

Further reading

Keogh, M. (2012). Has modelling replaced science? Farm Institute Insights 9(3), 1-5.

Pannell, D.J. (1997). Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16(2): 139-152. Full paper here ♦ IDEAS page for this paper

Pannell, D.J. and Roberts, A.M. (2010). The National Action Plan for Salinity and Water Quality: A retrospective assessment, Australian Journal of Agricultural and Resource Economics 54(4): 437-456. Journal web site here ♦ IDEAS page for this paper