280 – Lomborg at UWA

The news of Bjorn Lomborg establishing the “Australian Consensus Centre” at the University of Western Australia has generated plenty of media attention and much discussion within the University.

Some people within UWA are concerned about the University becoming associated with such a controversial and divisive figure. They are worried about the University’s reputation, and about the perception that his work is scientifically flawed.

There has also been commentary on the fact that the Australian Government could find $4 million for this initiative at a time when government funding in general (and university funding in particular) is under such great pressure.

I had no idea that the UWA arrangement was in prospect until Lomborg dropped in to meet me briefly the day before it was announced a couple of weeks ago. I had not had any contact with him in the past. I found him to be very personable and he asked sensible and genuine questions about environmental issues in Australia.

I have been aware of Lomborg’s work since his 2001 book The Skeptical Environmentalist (TSE), with its message that many environmental problems are not as bad as we’d been led to believe, and some are getting better. I’ve also been interested in his writings on climate policy, and his more-recent initiative, the Copenhagen Consensus, which sets out to prioritise a set of major international policies. The latter will be adapted for a new set of policies in the UWA initiative.

All three of these areas of work have generated controversy and criticism. I’ve read many of the critics, particularly in the early days of TSE when the criticism was raging.

I split the criticisms into two types: identification of errors of fact and criticisms of what he does with the facts (interpretations and judgements). There were some errors of fact in TSE, but not as many as claimed. In my judgement, many of the claimed errors were misinterpretations, misunderstandings or misrepresentations by the critics, quoting him out of context, highlighting trivial issues, and so on. People who didn’t like his conclusions went out of their way to find the smallest hint of an error and blow it up. There is a website called Lomborg Errors, which includes numerous so-called errors from TSE, but when I read it I was singularly unconvinced by many of them. In fact I found myself laughing out loud at some of them. Given the huge scope of the book, the number of significant, genuine errors is remarkably small, really, and they don’t change the general message of the book. But the myth of there being numerous serious errors got well established, and is accepted as received wisdom by many.

I didn’t agree with everything in TSE. Some parts were less convincing, and it seemed too optimistic to me in some respects. These were generally not errors of fact, but differences in judgement about what the facts implied or what should be done about them.

Lomborg has faced plenty of disagreement about his policy recommendations, particularly in relation to climate change. He argues that the political barriers to setting a carbon price high enough to achieve the desired outcomes are so great that we may as well not bother with it. Instead he advocates a large public investment in development of new technologies, such as for renewable energy. This position is obviously at odds with most people who are concerned about climate change, but my own view is that his pessimism about the politics is justified (reinforced by the messages coming out of India recently) and that the technology route is likely to be the only approach with any real chance of averting serious climate change. I’ve written about this here. Interestingly, his position is not that of a climate sceptic/denier, although he is sometimes characterised as being one.

Looking around the web, I see some scientists arguing that climate change will be greater and more costly than Lomborg has concluded in his climate book, often coupled with attribution of dubious motivations and associations. Perhaps he has made errors here and underplayed some potential outcomes – I haven’t taken the time to evaluate the claims. Nevertheless, even if he has, it doesn’t affect the logic behind his recommended policy approach.

The Copenhagen Consensus work, a version of which he will bring to UWA, is somewhat different in nature. His contribution is to set up and manage the process, bring people together and publicise the results. The judgements made in the process are not his judgements, but those of panels of people (usually senior economists) responding to evidence and cases put by commissioned experts. The focus is on identifying priorities for policy action. From a set of defined policies, which are the ones that are likely to have the greatest benefits for mankind? The explicit focus on prioritisation is critical, but is often missed by people advocating for a particular policy.

The controversy here arises because carbon-pricing policies consistently come out as being much lower in priority than other things like improving childhood nutrition in developing countries and fighting infectious diseases. In my view, this result isn’t a surprise, considering the likely benefits, feasibility, time lags and costs of the options. But it adds to the impression that Lomborg is a climate “contrarian”, even though the results are not actually generated by him.

Some have argued that the concept of prioritising these policies is wrong – we should just implement them all. I think that’s very naïve. It’s not how the world works. None of the policies being evaluated is currently in place. It’s a huge, difficult, risky task to try to get a major new policy adopted, especially when international agreements are needed. Governments have to carefully prioritise how to spend their financial resources and their political capital.

It’s very interesting that Vice Chancellor Paul Johnson has signed up to the University hosting this new centre. He must have anticipated that there would be controversy. I think it’s positive that the University hasn’t been scared off. A university is a good place to do work that challenges people to think differently.

Overall, if it can sufficiently avoid the taint of politics (which might be tricky), I think the initiative could make a worthwhile and interesting contribution to the policy debate in Australia. But no doubt there will also be aspersions cast against Lomborg and UWA.

279 – Garbage in, garbage out?

As the developer of various decision tools, I’ve lost track of the number of times I’ve heard somebody say, in a grave, authoritative tone, “a model is only as good as the information you feed into it”. Or, more pithily, “garbage in, garbage out”. It’s a truism, of course, but the implications for decision makers may not be quite what you think.

The value of the information generated by a decision tool depends, of course, on the quality of input data used to drive the tool. Usually, the outputs from a decision tool are less valuable when there is poor-quality information about the inputs than when there is good information.

But what should we conclude from that? Does it mean, for example, that if you have poor-quality input information you may just as well make decisions in a very simple ad hoc way and not worry about weighing up the decision options in a systematic way? (In other words, is it not worth using a decision tool?) And does it mean that it is more important to put effort into collecting better input data than into improving the decision process?

No, these things do not follow from having poor input data. Here’s why.

Imagine a manager looking at 100 projects and trying to choose which 10 projects to give money to. Let’s compare a situation where input data quality is excellent with one where it is poor.

From simulating hundreds of thousands of decisions like this, I’ve found that systematic decision processes that are consistent with best-practice principles for decision making (see Pannell 2013) do a reasonable job of selecting the best projects even when there are random errors introduced to the input data. On the other hand, simple ad hoc decision processes that ignore the principles often result in very poor decisions, whether the input data is good, bad or indifferent.

Not every decision made using a sound decision process is correct, but overall, on average, they are markedly better than quick-and-dirty decisions. So “garbage in, garbage out” is misleading. If you look across a large number of decisions (which is what you should do), then a better description for a good decision tool could be “garbage in, not-too-bad out”. On the other hand, the most apt description for a poor decision process could be “treasure or garbage in, garbage out”.
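
To see why, it helps to make the experiment concrete. The sketch below is not the simulation behind these claims (that work is reported in Pannell and Gibson 2014); the distributions, the “systematic” rule (estimated benefit × chance of success, less cost) and the “ad hoc” rule (chase the biggest headline benefit and ignore the rest) are all made-up assumptions. But it captures the flavour: a rule that weighs up all the relevant information tends to keep capturing most of the attainable payoff as the data gets noisier, while a rule that ignores relevant information does poorly even with perfect data.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PROJECTS, N_FUNDED, N_TRIALS = 100, 10, 2000

def captured_share(noise_sd, process):
    """Average share of the best attainable payoff captured by a decision
    process, given multiplicative noise of size noise_sd on the estimates."""
    total = 0.0
    for _ in range(N_TRIALS):
        # True (unknowable) project characteristics -- illustrative distributions
        benefit = rng.lognormal(1.0, 1.0, N_PROJECTS)     # potential benefit
        p_success = rng.beta(2.0, 2.0, N_PROJECTS)        # chance the project works
        cost = rng.lognormal(0.0, 0.7, N_PROJECTS)

        # Noisy estimates available to the decision maker
        est_benefit = benefit * rng.lognormal(0.0, noise_sd, N_PROJECTS)
        est_p = np.clip(p_success * rng.lognormal(0.0, noise_sd, N_PROJECTS), 0.0, 1.0)
        est_cost = cost * rng.lognormal(0.0, noise_sd, N_PROJECTS)

        if process == "systematic":
            # Weigh up expected benefits against costs
            score = est_benefit * est_p - est_cost
        else:
            # "Ad hoc": chase the biggest headline benefit, ignore the rest
            score = est_benefit

        chosen = np.argsort(score)[-N_FUNDED:]
        payoff = benefit * p_success - cost               # true payoffs
        best = np.sort(payoff)[-N_FUNDED:].sum()          # perfect-information optimum
        total += payoff[chosen].sum() / best
    return total / N_TRIALS

for noise_sd in (0.0, 0.5, 1.0):                          # excellent, poor, very poor data
    for process in ("systematic", "ad hoc"):
        share = captured_share(noise_sd, process)
        print(f"noise sd {noise_sd:.1f}  {process:10s}  "
              f"captures {share:.2f} of the attainable payoff")
```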

An interesting question is, if you are using a good process, why don’t random errors in the input data make a bigger difference to the outcomes of the decisions? Here are some reasons.

Firstly, poorer quality input data only matters if it results in different decisions being made, such as a different set of 10 projects being selected. In practice, over a large number of decisions, the differences caused by input data uncertainty are not as large as you might expect. For example, in the project-selection problem, there are several reasons why data uncertainty may have only a modest impact on which projects are selected:

  • Uncertainty doesn’t mean that the input data for all projects is wildly inaccurate. Some are wildly inaccurate, but some, by chance, are only slightly inaccurate, and some are in between. The good projects with slightly inaccurate data still get selected.
  • Even if the data is moderately or highly inaccurate, it doesn’t necessarily mean that a good project will miss out on funding. Some good projects look worse than they should do as a result of the poor input data, but others are actually favoured by the data inaccuracies, so of course they still get selected. These data errors that reinforce the right decisions are not a problem.
  • Some projects are so outstanding that they still seem worth investing in even when the data used to analyse them is somewhat inaccurate.
  • When ranking projects, there are a number of different variables to consider (e.g. values, behaviour change, risks, etc.). There is likely to be uncertainty about all of these to some extent, but the errors won’t necessarily reinforce each other. In some cases, the estimate of one variable will be too high, while the estimate of another variable will be too low, such that the errors cancel out and the overall assessment of the project is about right (the sketch below illustrates this partial cancellation).
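
That last point is easy to illustrate. In the sketch below, a project’s score is the product of four components (think of them as value, adoption, effectiveness and the inverse of cost; the names and numbers are purely illustrative, loosely in the spirit of the metric in Pannell 2013). When each component carries its own independent estimation error, the over- and under-estimates partly offset each other in the product, and the ranking holds up much better than in the worst case where the same error pushes every component in the same direction.

```python
import numpy as np

rng = np.random.default_rng(1)
N, TOP, TRIALS, SIGMA = 100, 10, 2000, 0.4

def top_overlap(true_score, est_score, k=TOP):
    """Fraction of the truly best k projects still picked from the estimates."""
    best = set(np.argsort(true_score)[-k:])
    picked = set(np.argsort(est_score)[-k:])
    return len(best & picked) / k

overlap = {"independent errors": 0.0, "reinforcing errors": 0.0}
for _ in range(TRIALS):
    # Four illustrative score components per project (e.g. value, adoption,
    # effectiveness, 1/cost), each lognormally distributed.
    components = rng.lognormal(0.0, 0.6, size=(4, N))
    true_score = components.prod(axis=0)

    # Case 1: each component is estimated with its own independent error,
    # so over- and under-estimates partly cancel in the product.
    indep_err = rng.normal(0.0, SIGMA, size=(4, N))
    est_indep = (components * np.exp(indep_err)).prod(axis=0)

    # Case 2: the same error hits every component (worst case: errors reinforce).
    common_err = rng.normal(0.0, SIGMA, size=N)
    est_reinf = (components * np.exp(common_err)).prod(axis=0)

    overlap["independent errors"] += top_overlap(true_score, est_indep)
    overlap["reinforcing errors"] += top_overlap(true_score, est_reinf)

for case, total in overlap.items():
    print(f"{case}: {total / TRIALS:.2f} of the true top {TOP} still selected")
```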

So input data uncertainty means that some projects that should be selected miss out, but many good projects continue to be selected.

Even where there is a change in project selection, some of the projects that come in are only slightly less beneficial than the ones that go out. Not all, but some.

Putting all that together, inaccuracy in input data only changes the selection of projects for those projects that: happen to have the most highly inaccurate input data; are not favoured by the data inaccuracies; are not amongst the most outstanding projects anyway; and do not have multiple errors that cancel out. Further, the changes in project selection that do occur only matter for the subset of incoming projects that are much worse than the projects they displace. Many of the projects that are mistakenly selected due to poor input data are not all that much worse than the projects they displace. So input data uncertainty is often not such a serious problem for decision making as you might think. As long as the numbers we use are more-or-less reasonable, results from decision making can be pretty good.

To me, the most surprising outcome from my analysis of these issues was the answer to the second question: is it more important to put effort into collecting better input data than into improving the decision process?

As I noted earlier, the answer seems to be “no”. For the project choice problem I described earlier, the “no” is a very strong one. In fact, I found that if you start with a poor quality decision process, inconsistent with the principles I’ve outlined in Pannell (2013), there is almost no benefit to be gained by improving the quality of input data. I’m sure there are many scientists who would feel extremely uncomfortable with that result, but it does make intuitive sense when you think about it. If a decision process is so poor that its results are only slightly related to the best possible decisions, then of course better information won’t help much.
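
A toy version of that result, using the same sort of made-up project-selection setup as the earlier sketch: sweep the size of the estimation errors and see how much each decision rule gains from better data. The “poor” rule here (fund the projects that look cheapest, ignoring benefits and risks) is just an illustrative stand-in for a process that is only weakly related to project payoffs; the point is that because such a rule barely uses the information that matters, making that information more accurate hardly improves its choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, TRIALS = 100, 10, 2000

def captured_share(noise_sd, score_rule):
    """Average share of the best attainable payoff captured when projects are
    ranked by score_rule applied to noisy estimates of benefit (B),
    probability of success (P) and cost (C)."""
    total = 0.0
    for _ in range(TRIALS):
        B = rng.lognormal(1.0, 1.0, N)
        P = rng.beta(2.0, 2.0, N)
        C = rng.lognormal(0.0, 0.7, N)
        eB = B * rng.lognormal(0.0, noise_sd, N)
        eP = np.clip(P * rng.lognormal(0.0, noise_sd, N), 0.0, 1.0)
        eC = C * rng.lognormal(0.0, noise_sd, N)
        chosen = np.argsort(score_rule(eB, eP, eC))[-K:]   # fund the top K by score
        payoff = B * P - C                                 # true payoffs
        total += payoff[chosen].sum() / np.sort(payoff)[-K:].sum()
    return total / TRIALS

rules = {
    "good process (est. benefit x success - cost)": lambda b, p, c: b * p - c,
    "poor process (cheapest-looking projects first)": lambda b, p, c: -c,
}
for name, rule in rules.items():
    shares = [captured_share(s, rule) for s in (1.0, 0.5, 0.0)]
    print(f"{name}: noisy {shares[0]:.2f} -> better {shares[1]:.2f} -> perfect {shares[2]:.2f}")
```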

Further reading

Pannell, D.J. and Gibson, F.L. (2014) Testing metrics to prioritise environmental projects, Australian Agricultural and Resource Economics Society Conference (58th), February 5-7, 2014, Port Macquarie, Australia. Full paper

Pannell, D.J. (2013). Ranking environmental projects, Working Paper 1312, School of Agricultural and Resource Economics, University of Western Australia. Full paper