256 – Attitudes to monetising environmental values

Putting dollar values on intangible environmental outcomes is probably the most controversial aspect of environmental economics. Is that why this information is not used more by environmental managers and decision makers? It turns out that the answer is no.

In an earlier Pannell Discussion (PD175), I observed that people in environmental organisations and agencies are sometimes resistant to using information from “non-market valuation” research on the monetary-equivalent values of environmental benefits, especially the more intangible benefits. Despite a lot of research, and quite concerted promotion of the approaches by some researchers, they remain under-utilised. I suggested a list of 12 reasons why some people who should be interested in this information don’t use it.

Subsequently, a team of us decided to investigate which of these reasons were most important in reality. Some of the results were quite surprising.

We surveyed Australian researchers who use these techniques and interviewed Australian environmental managers and policy makers who could benefit from using the results. (For shorthand I’ll refer to environmental managers and policy makers as ‘decision makers’).

From both groups, we confirmed our initial perception that monetised environmental values are not commonly used by decision makers. Where they are used, it is mainly to justify an established position or decision. They are almost never used to inform decision making.

We asked both groups why they thought this was. Interestingly, the perceptions of the two groups were strikingly different.

The researchers thought that the main reasons were (1) that decision makers have concerns about the limitations or validity of non-market valuation techniques, or (2) that decision makers have philosophical objections to assigning monetary values to the environment.

On the other hand, from the interviews with decision makers we learnt that the main reason for non-use was lack of awareness or knowledge. Most could not name a single non-market valuation technique, and only about a third had ever been exposed to valuation results in the course of making environmental decisions. Overall, the level of awareness was much too low for concerns about validity to be a significant factor. You can’t have concerns about something you don’t know about!

Another important issue raised by decision makers was lack of time and resources. The concern with time reflects that management and policy decisions are often made with unseemly haste and without undertaking rigorous analysis. They shouldn’t be, but they are, so the time and resources needed to seek out this sort of information just aren’t available.

A third issue that stood out was a general opposition among some people to the use of economic studies, rather than opposition to non-market values in particular. Here is a quote that highlights the problem: “People within the environment agency, and that’s quite senior people, just laugh at us when we say we could use economics to advise on these things. When they laugh, they actually do laugh.”

The issues identified by researchers were mentioned by some decision makers, but they were far from being the main explanations for under-use.

In their focus on validity, it seems like many of the researchers were projecting their experiences in the research world onto environmental decision makers, without realising how different the two worlds are. Some of the researchers were rather naïve about how environmental decision makers obtain their information, expecting that publication in peer-reviewed journals could be an effective way of communicating research results to decision makers!

Going back to my original Pannell Discussion on this, I see that I identified all of the key factors except one – the most important one! I hadn’t realised that simple lack of awareness was the biggest barrier.

During the interviews, we found that when we explained the idea of non-market valuation to the decision makers, many of them were quite positive about it. Whether they’d actually use it in practice if they knew more about it is an open question, but can’t be ruled out.

Further reading

Rogers, A.A., Kragt, M.E., Gibson, F.L., Burton, M.P., Petersen, E.H., and Pannell, D.J. (2013). Non-market valuation: usage and impacts in environmental policy and management in Australia, Australian Journal of Agricultural and Resource Economics (forthcoming). Journal web page ♦ Pre-publication version at IDEAS


  • John Antle
    13 November, 2013 - 12:47 am

    David, an excellent job of addressing this very important issue — an issue that is very germane to all of science, not just environmental economics. Yes, the research community needs to understand this and do a much better job of communicating science to the “outside world.”
    But, what are the incentives for scientists to do that? Although we now often give lip service to good communication of research, the reality is that most research institutions still are poor at rewarding good communication of research results (in most universities you get tenure for peer-reviewed pubs, not for communicating research results to policy people). Moreover, mis-guided attempts to push the system in this direction may backfire. For example, the CGIAR is now evaluating their programs based on “outcomes,” but in many cases real “outcomes” only happen very slowly over long periods of time. If we are not careful, we will encourage superficial efforts to show “outcomes,” and risk de-funding good research that can’t demonstrate “outcomes,” rather than creating a system that rewards effective communication of research results.

    • 13 November, 2013 - 6:04 am

      Thanks John. You are right to highlight the incentives, or lack thereof. Indeed, the incentives can actively discourage researchers from conducting useful applied research that would deliver real-world outcomes. For example, over the past few years, research in Australian universities has been evaluated under the “Excellence in Research for Australia” (ERA) system, and universities have become very focused on making sure that each of their disciplines gets a high rating in these evaluations. However, the research that is considered to be high quality is predominantly the most academic, esoteric and un-useful research. Worse than that, because of the ridiculous way the system works, even if you publish in the supposedly “high quality” journals, publishing additional work in the “lower quality” applied journals as well reduces your overall rating. Believe it or not, a department that publishes 50 “high quality” papers and 10 “low quality” papers would get a higher rating than a department that publishes 100 “high quality” and 50 “low quality”, because they look at the ratio. It’s absolutely nuts.

      There have always been university academics who set out to try to do useful research and get it used in the real world, but they are a small minority, motivated by their own values and ideals, rather than being incentivised by the academic reward system.

      There has been talk of the Australian system being modified or supplemented to assess real-world impact. This has already happened in the UK, where real-world impact counts for 20% of their assessment in the Research Excellence Framework. They evaluate it on the basis of cherry-picked case studies, and a description of the strategies and level of effort devoted to pursuing real-world impacts. Presumably, the latter is partly in recognition of the long time frames involved in generating impacts in most cases, as well as the unavoidably hit-and-miss nature of success in these efforts. I quite like the UK system. I think it is a step in the right direction, and will start to change incentives, at least a bit.

  • Lili Pechey
    20 November, 2013 - 4:28 am

    Hi Dave,

    Thanks for your article. It occurred to me that there is possibly a forgotten transmission method to help bridge the gulf between academic research and decision-makers: consultants.

    I am working in London as a consultant at the moment and I am greatly heartened by the number of projects for government departments (especially Defra) that commence with a literature review so that there is a strong evidence base for any conclusions or recommendations that may follow.

    In addition, there are the Water Appraisal Guidelines to support EIA, which include a database of appropriate environmental valuation research. The Transport Appraisal Guidelines are also considering the explicit inclusion of valuation research.

    On this basis, I feel optimistic for the future inclusion of environmental valuation by decision makers…for now.

    • 20 November, 2013 - 11:53 am

      Hi Lili. We did include consultants in our survey, and the picture in Australia and New Zealand wasn’t so rosy, but I’m aware that it is more positive in the UK.

  • Mal
    26 November, 2013 - 8:15 am

    Hi David. A very welcome study and the findings appear to match my own experience at both Commonwealth and State levels. I would suggest that, of the reasons identified in the study, the predominant reason for NMV studies not having a greater influence lies with researchers’ ignorance of the decision-making process – and in particular, how policy people process information.

    Policy people are not really interested in having access to the most robust and rigorous data to support their positions (notwithstanding all the discussion about evidence-based policy). They are more interested in information that allows them to ‘sell’ their policy vision in a coherent and influential fashion. In this regard, I find researchers are often incredibly reluctant to put a firm monetary figure on their work – probably because they have been trained to only provide ‘perfect’ information and have had ‘sloppy analysis’ punished in the past in academic environments. So when a policy person asks for a monetary figure, too often the response is a pageful of caveats followed by some figure (and in some cases a refusal to provide a figure). The policy person draws the inevitable conclusion that the researchers simply don’t know, so never asks again.

    The other point to bear in mind is that there are different kinds of policy makers. I wonder if you found any obvious differences between departments (central vs line depts)? In my experience, Treasury dept economists are decision-makers who are quite willing to accept NMV. On the other hand, non-Treasury decision makers will happily accept NMV, but only where it makes sense. The biggest barrier I’ve found with non-economist decision-makers is in attempting to explain how the NMV value can be (say) 20 times the State budget – which in their minds equates to “this is so large it can’t be real”. But once it is properly explained, they start to re-think. And none of them particularly cares what techniques are used or not, so long as it comes up with information they can use to further their policy needs. They don’t care that it is a black box; they rely on the fact that an economic expert told them so.

    • 26 November, 2013 - 8:26 am

      Thanks Mal. We didn’t have the data to detect a difference between central and line departments. I commented a little bit about researchers’ ignorance of the policy process in the post, and we cover it some more in the paper. I wouldn’t agree that it’s the predominant reason, but it’s certainly a factor.
