
226 – Modelling versus science

Mick Keogh, from the Australian Farm Institute, recently argued that “much greater caution is required when considering policy responses for issues where the main science available is based on modelled outcomes”. I broadly agree with that conclusion, although there were some points in the article that didn’t gel with me. 

In a recent feature article in Farm Institute Insights, the Institute’s Executive Director Mick Keogh identified increasing reliance on modelling as a problem in policy, particularly policy related to the environment and natural resources. He observed that “there is an increasing reliance on modelling, rather than actual science”. He discussed modelling by the National Land and Water Resources Audit (NLWRA) to predict salinity risk, modelling to establish benchmark river condition for the Murray-Darling Rivers, and modelling to predict future climate. He expressed concern that the modelling was based on inadequate data (salinity, river condition) or used poor methods (salinity) and that the modelling results are “unverifiable” and “not able to be scrutinised” (all three). He claimed that the reliance on modelling rather than “actual science” was contributing to poor policy outcomes.

While I’m fully on Mick’s side regarding the need for policy to be based on the best evidence, I do have some problems with some of his arguments in this article.

Firstly, there is the premise that “science and modelling are not the same”. The reality is nowhere near as black-and-white as that. Modelling of various types is ubiquitous throughout science, including in what might be considered the hard sciences. Every time a scientist conducts a statistical test using hard data, she or he is applying a numerical model. In a sense, all scientific conclusions are based on models.

I think what Mick really has in mind is a particular type of model: a synthesis or integrated model that pulls together data and relationships from a variety of sources (often of varying levels of quality) to make inferences or draw conclusions that cannot be tested by observation, usually because the issue is too complex. This is the type of model I’m often involved in building.

I agree that these models do require particular care, both by the modeller and by decision makers who wish to use the results. In my view, integrated modellers are often too confident about the results of a model that they have worked hard to construct. If such models are actually to be used for decision making, it is crucial for integrated modellers to test the robustness of their conclusions (e.g. Pannell, 1997), and to communicate clearly the realistic level of confidence that decision makers can have in the results. In my experience, modellers often don’t do this well enough.
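As a rough illustration of the kind of robustness testing I have in mind, one can re-run a model across plausible ranges for its most uncertain parameters and check whether the conclusion that matters for policy actually changes. The sketch below is hypothetical: the model, the parameter names and the ranges are invented for illustration, not drawn from any of the studies discussed here or from Pannell (1997) itself.

```python
# Minimal robustness-testing sketch (hypothetical model and parameters).
# The policy-relevant conclusion here is whether option A beats option B.
import itertools

def net_benefit_A(adoption_rate, yield_gain, price):
    # Hypothetical integrated-model output for policy option A
    return adoption_rate * yield_gain * price - 40.0  # fixed program cost

def net_benefit_B(adoption_rate, yield_gain, price):
    # Hypothetical output for option B (cheaper program, smaller effect)
    return 0.6 * adoption_rate * yield_gain * price - 15.0

# Plausible low / central / high values for the most uncertain parameters
adoption_rates = [0.2, 0.4, 0.6]
yield_gains    = [50, 100, 150]   # $/ha before price effects
prices         = [0.8, 1.0, 1.2]  # price index

flips = 0
runs = 0
for a, y, p in itertools.product(adoption_rates, yield_gains, prices):
    runs += 1
    if net_benefit_A(a, y, p) < net_benefit_B(a, y, p):
        flips += 1  # the ranking of the options reverses in this scenario

print(f"Option ranking reversed in {flips} of {runs} scenarios")
# If the ranking flips in a substantial share of plausible scenarios,
# the headline conclusion deserves much less confidence than a single
# "best-bet" model run would suggest.
```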

But even in cases where they do, policy makers and policy advisors often tend to look for the simple message in model results, and to treat that message as if it was pretty much a fact. The salinity work that Mick criticises is a great example of this. While I agree with Mick that aspects of that work were seriously flawed, the way it was interpreted by policy makers was not consistent with caveats provided by the modellers. In particular, the report was widely interpreted as predicting that there would be 17 million hectares of salinity, whereas it actually said that there would be 17 million hectares with high “risk” or “hazard” of going saline. Of that area, only a proportion was ever expected to actually go saline. That proportion was never stated, but the researchers knew that the final result would be much less than 17 million. They probably should have been clearer and more explicit about that, but it wasn’t a secret.

The next concern expressed in the article was that models “are often not able to be scrutinised to the same extent as ‘normal’ science”. It’s not clear to me exactly what this means. Perhaps it means that the models are not available for others to scrutinise. To the extent that that’s true (and it is true sometimes), I agree that this is a serious problem. I’ve built and used enough models to know how easy it is for them to contain serious undetected bugs. For that reason, I think that when a model is used (or is expected to be used) in policy, the model should be freely available for others to check. It should be a requirement that all model code and data used in policy are made publicly available. If the modeller is not prepared to make them public, the results should not be used. Without this, we can’t have confidence that the information being used to drive decisions is reliable.

Once the model is made available, if the issue is important enough, somebody will check it, and any flaws can be discovered. Or if the time frame for decision making is too tight for that, government may need to commission its own checking process.

This requirement would cause concern among some scientists. In climate science, for example, some scientists have actively fought requests for data and code. (Personally, I think the same requirement should be enforced for peer-reviewed publications, not just for work that contributes to policy. Some leading economics journals do this, but not many journals in other disciplines.)

Perhaps, instead, Mick intends to say that even if you can get your hands on a model, it is too hard to check. If that is what he means, I disagree. I don’t think checking models generally is harder than checking other types of research. In some ways it is easier, because you should be able to replicate the results exactly.

Then there is the claim that poor modelling is causing poor policy. Of course, that can happen, and probably has happened. But I wouldn’t overstate how great a problem this is at the moment, because model results are only one factor influencing policy decisions, and they often have a relatively minor influence.

Again, the salinity example is illustrative. Mick says that the faulty predictions of salinity extent were “used to allocate funding under the NAP”. Apparently they influenced decisions about which regions would qualify for funding from the salinity program. However, in my judgement, they had no bearing on how much funding each of the 22 eligible regions actually received. That depended mainly on how much and how quickly each state was prepared to allocate new money to match the available Federal money, coupled with a desire to make sure that no region or state missed out on an “equitable” share (irrespective of their salinity threat).

The NLWRA also reported that dryland salinity is often a highly intractable problem. Modelling indicated that, in most locations, a very large proportion of the landscape area would need to be planted to perennials to get salinity under control. This was actually even more important information than the predicted extent of salinity because it ran counter to the entire philosophy of the NAP, of spreading the available money thinly across numerous small projects. But this information, from the same report, was completely ignored by policy makers. The main cause of the failure of the national salinity policy was not that it embraced dodgy modelling about the future extent of salinity, but that it ignored much more soundly based modelling that showed that the strategy of the policy was fundamentally flawed.

Mick proposes that “Modellers may not necessarily be purely objective, and ‘rent seeking’ can be just as prevalent in the science community as it is in the wider community.” The first part of that sentence definitely is true. The last part definitely is not. Yes, there are rent-seeking scientists, but most scientists are influenced to a greater-or-lesser extent by the explicit culture of honesty and commitment to knowledge that underpins science. The suggestion that, as a group, scientists are just as self-serving in their dealings with policy as other groups that lack this explicit culture is going too far.

Nevertheless, despite those points of disagreement, I do agree with Mick’s bottom line that “Governments need to adopt a more sceptical attitude to modelling ‘science’ in formulating future environmental policies”. This is not just about policy makers being alert to dodgy modellers. It’s also about policy makers using information wisely. The perceived need for a clear, simple answer for policy sometimes drives modellers to express results in a way that portrays a level of certainty that they do not deserve. Policy makers should be more accepting that the real world is messy and uncertain, and engage with modellers to help them grapple with that messiness and uncertainty.

Having said this, I’m not optimistic that it will actually happen. There are too many things stacked against it.

Perhaps one positive thing that could conceivably happen is adoption of Mick’s recommendation that “Governments should consider the establishment of truly independent review processes in such instances, and adopt iterative policy responses which can be adjusted as the science and associated models are improved.” You would want to choose carefully the cases when you commissioned a review, but there are cases when it would be a good idea.

Some scientists would probably argue that there is no need for this because their research has been through a process of “peer review” before publication. However, I am of the view that peer review is not a sufficient level of scrutiny for research that is going to be used as the basis for large policy decisions. In most cases, peer review provides a very cursory level of scrutiny. For big policy decisions, it would be a good idea for key modelling results to be independently audited, replicated and evaluated.

Further reading

Keogh, M. (2012). Has modelling replaced science? Farm Institute Insights 9(3): 1-5.

Pannell, D.J. (1997). Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16(2): 139-152.

Pannell, D.J. and Roberts, A.M. (2010). The National Action Plan for Salinity and Water Quality: A retrospective assessment. Australian Journal of Agricultural and Resource Economics 54(4): 437-456.