Yearly Archives: 2011

196 – The web vs journals

After last week’s PD, describing the difficulties we had publishing a journal paper about INFFER (Investment Framework for Environmental Resources), a reader emailed to ask, why bother? Why not just put a manual or report about INFFER on the web? This prompted me to think about the relationship between putting information on a personal web site and publishing in a peer-reviewed research journal. They are very different beasts.

As it happens, as well as having a new journal paper (Pannell et al., 2011), we also have a beautiful new web site for the INFFER project: www.inffer.org. The old version was looking a bit amateurish and outdated, but the new one looks professional and contemporary, and should be more intuitive for users.

The web site already contains documentation about INFFER at various levels of detail: from brief introductions to comprehensive user manuals. There is information about existing users and 30 pages of Frequently Asked Questions.

So, given that all this is available on the web site, why would we go to so much bother to publish an article in a peer-reviewed journal? There are several reasons.

Firstly, journal publication provides a degree of quality assurance that may help to increase our credibility with potential users. The rigour of the analysis is assessed by expert, anonymous, independent reviewers, so having made it to publication should help to convince people that the work has substance. Of course, the peer review process is neither perfect nor exhaustive, but it should increase people’s confidence to some degree.

A second reason is to enhance one’s own critical assessment of the work. The act of writing a paper that you know is going to be peer reviewed is rather different from other writing. It sharpens the mind to know that anonymous reviewers will have the freedom to criticise your work. One thinks more clearly about potential limitations of the work, in anticipation of what reviewers might say.

Thirdly, you hope to reach an additional group of readers. In truth, the audience for an active web site is vastly greater than the audience for almost all journals, but there may still be different people, and different types of people, who access the journal version.

Finally, for a professional researcher, journal publication is probably the most important measure of performance and success. My university is happy that I use the web like this to communicate about issues related to my research, but when it comes to evaluating my research, anything that I just put on the web counts for nothing. That’s fair enough, given how easy it is for anyone to put any old rubbish on the web.

One might ask the converse question: if research has been published in a peer-reviewed journal, why also include it on a web site? There are good reasons to do this as well.

  1. To increase the number of readers. As a generalisation, readership of web pages dwarfs the readership of journal articles. Here’s an extreme example. I don’t know how many people would have read Pannell (1997) if I had not put a pre-publication version of it on the web, but my guess would be, a few hundred. That number of readers is tiny compared to the number of people who download the paper from my web site each year after finding it in Google. Over the past 12 months, it has been downloaded 17,500 times! I’m sure they don’t all read it, but plenty do.
  2. To promote journal articles. Some of those extra readers may follow up and read the journal article version of the work, if you give them the details. I’m confident that a web presence also helps to increase the extent to which certain papers are cited by other research authors. Pannell (1997) is a good example.
  3. Speed of publication. I can “publish” something on the web a few minutes after I finish writing it. Journal publishing speeds have increased since they started using web-based systems to manage the review process, but a six-to-twelve month lag between submission and publication is still normal.
  4. Detail. On the web, I can publish something that is much less detailed than a journal article if I want, increasing its accessibility to many readers. I can also provide much more detailed documentation for models and frameworks than one could ever get published in a journal. I can also publish something of the nature of a user manual, which you’d never get into a journal.
  5. Ease of reading. When I write journal articles, I try to make them readable, but there is only so far you can go in that direction in a journal context. On the web, there is no such constraint.
  6. Updatability. Once published, a journal article is forever. The most you can do to update it is publish an erratum, if necessary, or maybe write a follow-up article some years later. By contrast, I can update something I have put online as often as necessary.

In summary, I publish in journals for quality assurance and professional recognition, and I publish online to increase the size of the audience. It makes sense to do both.

David Pannell, The University of Western Australia

Further reading

Pannell, D.J. (1997). Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16: 139-152. Pre-publication version of full paper (100 K)

Pannell, D.J., Roberts, A.M., Park, G., Alexander, J., Curatolo, A. and Marsh, S. (2011). Integrated assessment of public investment in land-use change to protect environmental assets in Australia, Land Use Policy 29(2): 377-387. doi:10.1016/j.landusepol.2011.08.002. Journal web site here


195 – Publishing persistence

Sometimes getting a research article published in a peer-reviewed journal requires a high degree of persistence. I’ve found this to be particularly the case for some papers that are a bit outside the mainstream.

When trying to publish a research paper, the best one can usually hope for is that the editor requires changes to the paper in response to reviewer comments, which one duly makes, and, after some further to-ing and fro-ing, the paper gets accepted. The reviewers’ comments might be anything from insightful and helpful to annoying and irrelevant, but whatever they are, we aim to go through this process.

But some papers are much harder to get accepted. I’ve just had this experience with the paper that describes the work we’ve done developing and applying INFFER (the Investment Framework for Environmental Resources). This is a really important paper to me – one I’ve got a lot more staked on than most. I think it’s the most important and valuable work I’ve ever been involved in.

But the paper is a bit unusual. It doesn’t quite fit the mould of a standard environmental economics or policy paper. We set out to describe the framework in detail, in much the same way as computer models are often described in journals like Agricultural Systems. But this proved to conflict with the expectations of reviewers in the journals to which we submitted the paper.

We first submitted it to Land Use Policy. The first comment of reviewer 1, in response to the question, “Does the paper represent a contribution to knowledge?”, was “Not particularly”! Not surprisingly, the paper was rejected.

So we tried a different journal: Environmental Science and Policy. The editor there didn’t even want to send the paper out to reviewers. He was interested in the topic, but not in a paper of this style.

Next we went to Journal of Environmental Management. Again the reviewers didn’t like it. Reviewer 2 started by saying “I am not sure how to review this paper” and later said “this paper doesn’t really belong in the scientific literature”.

By now, we were getting the message! There was nothing in any of the reviews saying that there were problems with INFFER, but the style of the paper was clearly a problem.

So we re-wrote it from scratch, putting more emphasis on the results of applying the framework, while still including enough description of INFFER for the paper to serve as its main reference. We sent this new paper to Land Use Policy, which is where we wanted to publish INFFER in the first place.

I was happy and relieved last month when, after some positive reviews and minor revisions, the new version of the paper (Pannell et al., 2011) was accepted. It is now available on the journal’s web site here. If you’d like a copy of the full paper, email me.

This all shows that, when it comes to publishing, persistence is crucial (Pannell, 2002), but obstinacy is foolish. You have to be prepared to adapt a paper to suit the preferences of reviewers and editors. In this case, I’m even willing to concede that the modified paper is better than the one we started with.

We started writing the original paper in November 2008, so it took almost three years from go to whoa. That’s plenty long enough, but it’s only about half as long as one of my other papers: Pannell et al. (2000). See Pannell (2002) for the full, sorry story of that one. Pannell et al. (2000) ended up being widely read and highly cited. Hopefully the new paper will turn the tables on reviewer negativity in a similar way.

David Pannell, The University of Western Australia

Further reading

Pannell, D.J. (2002). Prose, psychopaths and persistence: Personal perspectives on publishing, Canadian Journal of Agricultural Economics, 50(2): 101-116. Pre-publication version here (66K). Final published paper at journal web site here.

Pannell, D.J., Malcolm, L.R. and Kingwell, R.S. (2000). Are we risking too much? Perspectives on risk in farm modelling. Agricultural Economics 23(1): 69-78. Full paper (65 K)

Pannell, D.J., Roberts, A.M., Park, G., Alexander, J., Curatolo, A. and Marsh, S. (2011). Integrated assessment of public investment in land-use change to protect environmental assets in Australia, Land Use Policy 29(2): 377-387. Journal web site here

194 – “With respect, …”

I struck a particularly prickly questioner after a conference presentation I gave in Sydney two weeks ago. The questioner clearly had a very jaundiced view of my work, and, I suspect, of economics in general. Responding to such a person is challenging, but can actually be quite helpful sometimes, as it brings issues to the surface that otherwise stay buried. I use it here as an opportunity to outline an approach we’ve developed to make economics relevant, accessible and useful in supporting decision makers.

When you give a conference presentation and, in question time, someone starts their question with, “With respect, …”, you know you’re in trouble. “With respect, …” generally means “I have a deeply held and fundamental disagreement with what you’ve just said”, and that was exactly the case in this instance.

The context was a national conference in Sydney organised by the Bushfire Cooperative Research Centre and the Australasian Fire and Emergency Service Authorities Council. My talk was about a project we are just starting, doing integrated assessment of various strategies for prescribed burning to reduce risks to life and property.

My questioner seemed to have (at least) three big problems with my presentation.

  1. She objected to the idea that we could express important values and ideas in numbers. Or perhaps it was something like, “you can’t express everything that matters in numbers”.
  2. Probably related to point 1, she suspected that we would exclude important issues from the analysis.
  3. She seemed to feel that we would be taking control of the agenda and the decision process, imposing obnoxious economist values and economist judgments. To be fair, she didn’t actually say “obnoxious”, but it seemed to be the subtext.

The approach we’ll be using in the project is based on 10 years’ experience of trying to help policy makers and managers make better decisions about complex environmental problems. The aim is not to turn people into economists, but to help them make decisions that are more in line with the goals of the programs they are running or delivering. The approach is to ask: how can we achieve the goal or goals to the highest possible level with the available resources? In the case of bushfires, there are multiple goals that may need to be traded off (e.g. protecting life, property, water resources, the environment). We will be teasing out the highest combinations of these outcomes that can be achieved, and quantifying the trade-offs between them.

The approach involves working closely with policy makers and managers to understand the decision options that they think are potentially feasible, and the values associated with the outcomes that will be delivered. We use the available research and judgments from researchers and local stakeholders to identify what those outcomes would be, and to estimate the likely behavioural responses of people and organisations affected by the project. We bring all that together in a quantitative tool that allows us (and any stakeholder) to ask what-if questions about different decision options.

Having clarified that, will the project line up with our questioner’s low expectations?

Starting with number 3, we certainly will not be taking control of the agenda or the decision process. The aim is to expand and clarify the options that are available, and to assess their consequences. The broad goals against which the options will be assessed are those of the funders and decision makers, not ours. Our job, in part, is to be advocates for the public interest, assessing whether particular strategies or interventions will contribute outcomes sufficient to justify the resources they use up, given that those resources could be used in other projects to pursue the same goals, or different goals.

We appreciate that the decisions are not up to us. We provide information, and others make the decisions. They are likely to include considerations beyond our analysis (e.g. political consequences). For that reason, we tend not to say which options are the absolute best, although we do indicate that some options are clearly not worth doing, usually because they are not technically or socially feasible given the budget they are likely to have available.

It’s true that judgments will be required at times, but even there we use other experts to make them if possible. For example, there is often a lack of clear evidence to support a particular number to feed into the analysis. We collect the available evidence and ask relevant experts to help us by selecting a best-bet value, preferably by consensus. If consensus is not possible (e.g. there are differing scientific opinions on the matter), we include different values, and tease out their consequences, to see if the disagreement actually makes any important difference. Sometimes it does, sometimes it doesn’t.

Sometimes, on particular issues, the best expert might be one of our team — that’s fine. For example, that might be the case in relation to farmer adoption of new practices, or the farm-level economics of a land management option.

Although the approach is grounded in economics, we are not just economists. We also have expertise in biological, physical and social sciences, and we collaborate closely with experts from all disciplines. In 2009, we won the Eureka Prize for Excellence in Interdisciplinary Research.

Back to point 2 of the questioner’s list of concerns. Obviously it’s true that an analysis might not explicitly factor in every conceivable relevant consideration. However, if the decision makers and stakeholders we work with think something is important and ask us to factor it in, we’ll try to do so. If something is excluded, it is because (a) the decision makers and stakeholders haven’t thought it important enough to include, or (b) there is too little information or understanding about the issue to warrant its inclusion.

The exception there is politics, which we don’t include. We’re interested in promoting real outcomes for the community, not political outcomes. If a decision maker promotes an option for political reasons, we want it to be clear what the resulting cost is in terms of lost community outcomes.

Finally, there is our questioner’s aversion to numbers. Where appropriate, we do actually capture an issue qualitatively, rather than quantitatively, and report something to the decision makers in words, rather than numbers. One of a number of possible examples here is the quality of the information used in the analysis. It’s important for decision makers to know whether the analysis of a decision option is based on really strong evidence, or expert judgments based on minimal evidence. We capture and report that qualitatively, not in numbers.

But where it is possible, and makes sense, doing a numerical analysis of the pros and cons of each decision option is crucial. Our experience provides abundant examples that reinforce this. It is not sufficient to describe all of the benefits and costs qualitatively, because every decision option has both benefits and costs. For example, a purely qualitative approach can’t tell you that, for option 3, the ratio of benefits to costs is 10 times greater than for option 7, which is essential information for sound decision making.
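The point can be illustrated with purely hypothetical numbers (these figures are invented for illustration, not taken from any actual analysis): two options can each genuinely have both benefits and costs, and only quantifying them reveals how differently they perform.

```python
# Hypothetical benefits and costs ($ million) for two decision options.
# The numbers are illustrative only.
options = {
    "Option 3": {"benefits": 50, "costs": 1},
    "Option 7": {"benefits": 25, "costs": 5},
}

for name, v in options.items():
    bcr = v["benefits"] / v["costs"]
    print(f"{name}: benefit-cost ratio = {bcr:.0f}")

# Option 3's ratio (50) is 10 times Option 7's (5), yet a purely
# qualitative listing would describe both options the same way:
# "has benefits, has costs".
```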

I should acknowledge that our approach to this sort of research is actually relatively unusual for economists. Experience over a decade has helped us to develop an approach that makes economics accessible and helpful to designers, managers and deliverers of programs. It is pragmatic and participatory, but still firmly based on sound economic theories and principles. We make sure that the biological, physical and social science we use is not compromised to suit over-simplified economic models, which, sadly, happens too often in the mainstream economics literature.

This is not to say ours is the only relevant approach to economic analysis, even for these sorts of problems. It addresses a sub-set of the possible economic questions that are relevant to a particular group of decision makers. By demonstrating the value and usefulness of economic thinking and analysis, it may actually provide a vehicle for introducing more complex economic models to these decision makers.

David Pannell, The University of Western Australia

Further reading

Pannell, D.J. and Roberts, A.M. (2009). Conducting and delivering integrated research to influence land-use policy: salinity policy in Australia, Environmental Science and Policy 12(8): 1088-1099. Full paper (94K). Journal version on-line

Pannell, D.J. (2009). Why don’t environmental managers use decision theory? Pannell Discussions, no. 150, http://dpannell.fnas.uwa.edu.au/pd/pd0150.htm.

193 – Transaction costs workshop

There was a bit of interest in last week’s Pannell Discussion about transaction costs. This one is an advertisement for a national workshop on the topic, as it relates to environmental and natural resource policy.

For those of you who were interested in last week’s topic, you may like to know about a national workshop that will be held in Perth in February, prior to the annual conference of the Australian Agricultural and Resource Economics Society.

Title

Transaction Costs in Environmental and Natural Resource Policy

Organisers

Professor David Pannell, University of Western Australia
Assoc. Professor Laura McCann, University of Missouri
Dr. Dustin Garrick, University of Oxford

Venue – date

Esplanade Hotel, Fremantle, Western Australia – 7 February 2012

Full details of topics and speakers are available at the AARES web site here, or download a flier (pdf file) here. We’re keen to get a diverse audience at the workshop, including researchers, government officers and people involved in program delivery.

Registration for the event is now open via the AARES web site.

David Pannell, The University of Western Australia

192 – Transaction costs

If I buy something, I have to pay the asking price, but I may also incur a range of extra costs. These might include things like time, stress and travel costs involved in making the purchase. Economists call these extra costs ‘transaction costs’. There are also transaction costs involved in establishing, running or participating in a government program. I’ve become very interested in how transaction costs affect environmental programs.

I’m visiting China in October, so this week I applied for a visa. When I pick it up next week, it will cost me $30. However, that’s not the only cost I will have borne to get it. They have a new rule that you can’t apply by mail; you have to make a personal visit to the consulate. So far I have had to:

  • complete the application form, which was not straightforward, requiring me to make two queries to the people who are organising the visit;
  • look up the location of the Chinese consulate in Perth;
  • drive about 10 km to where I thought (mistakenly) it was, involving costs of fuel, vehicle wear and tear, and time;
  • pay for parking;
  • spend time walking around the area looking for it, unsuccessfully;
  • look at the street directory again and realise I was about a kilometre from the right place;
  • drive to the right place;
  • pay for parking again;
  • walk to the consulate;
  • wait in a slow-moving queue for about half an hour; and
  • drive home (more petrol, depreciation and time).

When I go to pick it up, I’ll need to invest more time, fuel and vehicle wear and tear. By the time I get the visa, the $30 cash cost will be pretty minor compared to the rest of the costs.

Economists call these other costs ‘transaction costs’. They are costs, using the term broadly, involved in undertaking a transaction, other than the direct financial cost of the transaction itself. They may include costs associated with thinking, analysing, negotiating, monitoring, enforcing, administering, learning, and so on.

In simple textbook economics, transaction costs aren’t accounted for, but in recent decades, economists have paid more attention to them. There have even been a couple of Nobel prizes awarded to people whose work included an emphasis on transaction costs.

I’m interested in transaction costs in environmental policy. I’ve been amazed at how big they can be. For example, under the National Action Plan for Salinity and Water Quality (Pannell and Roberts, 2010), the approximate allocation of Australian government funds to projects was as follows:

Category                                              Budget ($ million)
On-ground works                                                      220
Capacity building                                                    260
R&D                                                                   44
Administration, planning, monitoring and evaluation                  120

The last category consists solely of transaction costs involved in getting the program delivered. They are large, but this number greatly understates the total transaction costs of the program. For one thing, the Australian Government took a large slice off the top for its own administration costs (to get the program established and run it), and that’s not included in the above figures. Also, the numbers in the first three categories include significant transaction costs involved in running those individual projects. As a rough guess, I estimate that the share of the Australian Government’s money in the program that was spent on transaction costs could have been about 40 per cent. That’s a lot of money not being spent on managing salinity.
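The explicit part of that share is easily checked from the table above; the gap between it and the rough 40 per cent estimate reflects the additional transaction costs buried in the other categories and in the government’s own off-the-top administration, which are not itemised here.

```python
# Budget allocation under the National Action Plan for Salinity and
# Water Quality ($ million), from the table above.
budget = {
    "On-ground works": 220,
    "Capacity building": 260,
    "R&D": 44,
    "Administration, planning, monitoring and evaluation": 120,
}

total = sum(budget.values())  # 644
explicit_tc = budget["Administration, planning, monitoring and evaluation"]

# Share of the tabulated budget that is explicitly transaction costs.
print(f"Explicit transaction costs: {explicit_tc / total:.0%} of the program budget")
# Prints: Explicit transaction costs: 19% of the program budget
```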

Elsewhere in government, there were four reviews of the program during its life: two by the Australian National Audit Office, one by a committee of the House of Representatives and one by a Senate committee. Each of these involved substantial costs. And there were transaction costs prior to the program being established, as governments around Australia negotiated, discussed, and argued about the shape, the size and the rules of the program.

On top of that were the transaction costs borne by farmers and other organisations who were engaged with the program. They had to incur transaction costs in the course of negotiating with their partners and collaborators about involvement in projects, completing project application forms, completing reports to satisfy accountability requirements, meetings of various sorts, phone calls, and so on. Some of them would have incurred transaction costs from lobbying the government during the period when the program was being developed, or attempting to change aspects of the program once it was up and running.

These are even more invisible than the transaction costs incurred by government, and they are probably even more likely to be overlooked when a program is being designed or implemented.

For example, the first full round of competitive funding for the Caring for our Country program in Australia received about 1300 project applications, of which less than 10% were actually funded. These applications can be quite time consuming and difficult to prepare, but more than 1200 applicants must have felt like they had borne those transaction costs for no benefit. If this had been considered, I think the process would have been designed differently.

Focusing on the transaction costs in environmental programs could be beneficial for various purposes, including:

  • identifying cases where they seem excessive, guiding efforts to reduce transaction costs;
  • designing programs in a way that limits transaction costs to participants;
  • guiding better choices about policy mechanisms;
  • understanding why some policies achieve less than intended; and
  • understanding why people are unwilling to participate in programs in some cases.

Well-conducted studies of transaction costs in environmental programs should ultimately contribute to greater achievement of environmental outcomes from those programs, by encouraging greater participation and leaving more money to be spent on the problem.

David Pannell, The University of Western Australia

Further reading

Pannell, D.J. and Roberts, A.M. (2010). The National Action Plan for Salinity and Water Quality: A retrospective assessment, Australian Journal of Agricultural and Resource Economics, 54, 437-456. Journal web site here