Category Archives: Environment

327 – Heterogeneity of farmers

Farmers are highly heterogeneous. Even farmers growing the same crops in the same region are highly variable. This is often not well recognised by policy makers, researchers or extension agents.

The variation between farmers occurs on many dimensions. A random sample of farmers will have quite different soils, rainfall, machinery, access to water for irrigation, wealth, access to credit, farm area, social networks, intelligence, education, skills, family size, non-family labour, history of farm management choices, preferences for various outcomes, and so on, and so on. There is variation amongst the farmers themselves (after all, they are human), their farms, and the farming context.

This variation has consequences. For example, it means that different farmers given the same information, the same technology choices, or facing the same government policy, can easily respond quite differently, and they often do.

Discussions about farmers often seem to be based on an assumption that farmers are a fairly uniform group, with similar attitudes, similar costs and similar profits from the same practices. For example, it is common to read discussions of costs and benefits of adopting a new farming practice, as if the costs and the benefits are the same across all farmers. In my view, understanding the heterogeneity of farm economics is just as important as understanding the average.

Understanding the heterogeneity helps you form realistic expectations about how many farmers are likely to respond in particular ways to information, technologies or policies, and about how the cost of a policy program would vary depending on the program's target outcomes.

We explore some of these issues in a paper recently published in Agricultural Systems (Van Grieken et al. 2019). It looks at the heterogeneity of 400 sugarcane farmers in an area of the wet tropics in Queensland (the Tully–Murray catchment). These farms are a focus of policy because nutrients and sediment sourced from them are likely to be affecting the Great Barrier Reef. “Within the vicinity of the Tully-Murray flood plume there are 37 coral reefs and 13 seagrass meadows”.

Our findings include the following.

  • Different farmers are likely to respond differently to incentive payments provided by government to encourage uptake of practices that would reduce losses of nutrients and sediment.
  • Specific information about this can help governments target their policy to particular farmers, and result in the program being more cost-effective.
  • As the target level of pollution abatement increases, the cost of achieving that target would not increase linearly. Rather, the cost would increase exponentially, reflecting that a minority of farmers have particularly high costs of abatement. This is actually the result that economists would generally expect (see PD182).
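The convex cost curve in the last point can be illustrated with a small sketch. All the numbers below are invented for illustration, not taken from the paper: farmers are sorted from cheapest to most expensive per tonne of abatement, and the program buys abatement least-cost-first, so the marginal cost rises as the target rises.

```python
# Hypothetical illustration: why program cost rises faster than linearly
# with the abatement target when a minority of farmers face high costs.
farmers = [
    # (abatement available in tonnes, cost per tonne in $)
    (10, 5), (10, 6), (10, 8), (10, 12), (10, 20), (10, 60), (10, 150),
]

def program_cost(target):
    """Least-cost allocation: buy abatement from the cheapest farmers first."""
    cost = 0.0
    remaining = target
    for quantity, unit_cost in sorted(farmers, key=lambda f: f[1]):
        take = min(quantity, remaining)
        cost += take * unit_cost
        remaining -= take
        if remaining <= 0:
            break
    return cost

for target in (10, 30, 50, 70):
    print(target, program_cost(target))
```

In this made-up example the average cost per tonne rises from $5 at a modest target to more than $37 when nearly all farmers must participate, because the last few farmers are much more expensive to recruit.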

Further reading

Van Grieken, M., Webster, A., Whitten, S., Poggio, M., Roebeling, P., Bohnet, I. and Pannell, D. (2019). Adoption of agricultural management for Great Barrier Reef water quality improvement in heterogeneous farming communities, Agricultural Systems 170, 1-8. Journal web page * IDEAS page

325 – Ranking projects based on cost-effectiveness

Where organisations are unable or unwilling to quantify project benefits in monetary or monetary-equivalent terms, a common approach is to rank potential projects on the basis of cost-effectiveness. Just like ranking projects based on Benefit: Cost Ratio (BCR), this approach works in some cases but not others.

To rank projects based on cost-effectiveness, you choose the metric you will use to measure project benefits, estimate that metric for each project, estimate the cost of each project, and divide the benefit metric by the cost. You end up with a cost-effectiveness number for each potential project, and you use these numbers to rank the projects.
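As a minimal sketch of that procedure (the projects and numbers are hypothetical, and the benefit metric could be anything non-monetary, such as hectares of habitat secured), the ranking is just the benefit metric divided by cost, sorted in descending order:

```python
# Hypothetical projects: "benefit" is a non-monetary metric (e.g. habitat-ha),
# "cost" is in dollars. Cost-effectiveness = benefit / cost.
projects = {
    "A": {"benefit": 120, "cost": 40},
    "B": {"benefit": 90,  "cost": 20},
    "C": {"benefit": 200, "cost": 100},
}

ranked = sorted(
    projects,
    key=lambda p: projects[p]["benefit"] / projects[p]["cost"],
    reverse=True,
)
print(ranked)  # best value-for-money first
```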

An advantage of this approach is that it sidesteps the challenges of having to measure all the benefits in monetary or monetary-equivalent terms, which is what you have to do to calculate a BCR. A disadvantage is that it only works to compare projects that generate similar types of benefits, which can all be measured with the same metric.

Assuming that we are satisfied with the benefits metric and that the projects to be ranked are similar enough, the question is: in what circumstances is it appropriate to rank projects based on cost-effectiveness? (Assuming that the objective is to maximise the overall benefits across all the projects that get funded.) It is logical to ask this given that cost-effectiveness is closely related to the BCR (it has the same structure – it’s just that benefits are measured differently), and we’ve seen in PD322, PD323 and PD324 that ranking projects by BCR works in some situations but not others.

It turns out that the circumstances where it is logical to use cost-effectiveness to rank projects are equivalent to the circumstances where it is logical to rank projects using BCR.

(i) If you are ranking separate, unrelated projects, doing so on the basis of cost-effectiveness is appropriate. Ranking projects by cost-effectiveness implies that there is a limited budget available and you are aiming to allocate it to the best projects.

(ii) If you are ranking mutually exclusive projects (e.g. different versions of the same project), ranking on the basis of cost-effectiveness can be highly misleading. If there are increasing marginal costs and/or decreasing marginal benefits (which are normal), ranking by cost-effectiveness will bias you towards smaller project versions. In PD323, I said to rank such projects by NPV and choose the highest NPV you can afford with the available budget. If we are not monetising the benefits, there is no equivalent to the NPV — you cannot subtract the costs from a non-monetary version of the benefits. This means that, strictly speaking, you cannot rank projects in this situation (mutually exclusive projects) without monetising the benefits. If you absolutely will not or cannot monetise the benefits, what I suggest you do instead is identify the set of project versions that can be afforded with the available budget, and choose the project version from that set that has the highest value for the benefit metric. (Theoretically it should be the project version with the greatest net benefit (benefits – costs) but that is not an option here because in Cost-Effectiveness Analysis the benefits and costs are measured in different units.)

You don’t divide by the costs, but you do use the costs to determine which project versions you can afford. This is a fudge that only makes sense if you adopt the unrealistic assumption that any unspent money will not be available to spend on anything else, but it seems to me to be the best way to go, if monetising the benefits is not an option.
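Here is a small sketch of that rule, with hypothetical project versions and a hypothetical budget. Note the contrast: ranking by cost-effectiveness would pick the smallest version (it has the highest ratio), while the rule described above picks the affordable version with the highest benefit metric:

```python
# Hypothetical versions of one project; benefits are in a non-monetary metric.
versions = [
    {"name": "small",  "benefit": 50,  "cost": 10},   # ratio 5.0
    {"name": "medium", "benefit": 120, "cost": 40},   # ratio 3.0
    {"name": "large",  "benefit": 160, "cost": 90},   # ratio 1.8
]
budget = 60

# Rule for mutually exclusive versions without monetised benefits:
# among the versions you can afford, take the highest benefit metric.
affordable = [v for v in versions if v["cost"] <= budget]
choice = max(affordable, key=lambda v: v["benefit"])
print(choice["name"])  # medium, not the high-ratio "small" version
```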

(iii) If you are ranking separate, unrelated projects, and there are multiple versions available for at least one of those projects, then cost-effectiveness does not work and the rule about choosing the highest-value benefit metric does not work either. Instead, you should build an integer programming model to simultaneously weigh up both problems: which project(s) and which project version(s). There is a brief video showing you how to do this in Excel in PD324. In the video, the benefits are measured in monetary terms, but the approach will work if you use non-monetary measures of the benefits.
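For readers who prefer code to a spreadsheet, the same 0-1 integer program can be sketched by brute-force enumeration, which is fine for small problems. The projects, versions and numbers below are hypothetical: each project gets exactly one "version" chosen (including a do-nothing version), subject to the overall budget.

```python
from itertools import product

# Hypothetical data: two separate projects, each with alternative versions
# (including not funding it at all). Benefits use a non-monetary metric.
projects = {
    "wetland": [("none", 0, 0), ("small", 40, 15), ("large", 70, 45)],
    "buffer":  [("none", 0, 0), ("small", 30, 10), ("large", 90, 50)],
}
budget = 60

# Brute-force equivalent of the 0-1 integer program: choose one version
# per project to maximise total benefit subject to the budget constraint.
best = None
for combo in product(*projects.values()):
    cost = sum(c for _, _, c in combo)
    benefit = sum(b for _, b, _ in combo)
    if cost <= budget and (best is None or benefit > best[0]):
        best = (benefit, combo)

print(best)
```

In this example the best affordable combination is the large wetland version plus the small buffer version, which beats funding the large buffer version alone, even though no single-project ranking rule would reveal that.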

There are a number of tools available for ranking projects based on cost-effectiveness (e.g. Joseph et al. 2009) but it is important to be clear that the approach only works in certain cases.

Even if you are using cost-effectiveness in the right circumstances (case (i) above), it has a couple of limitations relative to using BCR. One is that you cannot use it to rank projects with distinctly different types of benefits that cannot all be measured with the same metric. Another limitation is that cost-effectiveness provides no evidence about whether any of the projects would generate sufficient benefits to outweigh its costs.

Further reading

Joseph, L.N., Maloney, R.F. and Possingham, H.P. (2009). Optimal allocation of resources among threatened species: a project prioritization protocol. Conservation Biology, 23, 328-338.  Journal web site

Pannell, D.J. (2015). Ranking environmental projects revisited. Pannell Discussions 281. Here * IDEAS page

319 – Reducing water pollution from agricultural fertilizers

I gave a talk to the Ontario Ministry of Agriculture, Food and Rural Affairs (OMAFRA) on July 16, 2019, exploring ways to reduce water pollution from agricultural fertilizers.

Many methods have been proposed to reduce water pollution from agricultural fertilizers. The list includes use of nitrification inhibitors, land retirement, vegetation buffer strips along waterways, flood-plain restoration, constructed wetlands, bioreactors, cover crops, zero till and getting farmers to reduce their fertilizer application rates.

Last year, while I was at the University of Minnesota for several months, I reviewed the literature on these options and came to the conclusion that the option with the best prospects for success is reducing fertilizer application rates. It’s the only one of these options that is likely to be both effective and cheap.

In my talk, I made the case for agencies who are trying to reduce pollution to focus on reducing fertilizer rates.

In brief, I identified three key reasons why there are untapped opportunities to reduce fertilizer rates.

1. Some farmers apply more fertilizer than is in their own best interests. Surveys in the US suggest that something like 20 to 30% of American farmers could make more profit if they reduced their rates. If it were possible to identify these farmers and convince them of this, it would be a rare win-win for farmers and the environment.

2. Even those farmers who currently apply fertilizer close to the rates that would maximize their profits could cut their rates without sacrificing much profit. Within the region of the economically optimal rate, the relationship between fertilizer rate and profit is remarkably flat. New estimates by Yaun Chai (University of Minnesota) of this relationship for corn after corn in Iowa indicate that farmers could cut their rates by 30% below the profit-maximizing rate and only lose 5% of their profits from that crop. For corn after soybeans, the equivalent opportunity is for a 45% cut!
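The flatness near the optimum can be illustrated with a made-up quadratic yield response (the parameters below are invented for illustration, not the estimates cited above). A handy property of any quadratic payoff curve is that cutting the rate 30% below the profit-maximising rate loses exactly 9% of the fertilizer-related profit, which gives a feel for how flat these curves are:

```python
# A hypothetical quadratic yield response illustrating the flat payoff curve.
price = 3.5          # $ per bushel (assumed)
fert_cost = 0.4      # $ per lb of N (assumed)

def profit(n):
    """Profit attributable to applying n lb of N, under the assumed response."""
    yield_gain = 0.9 * n - 0.0025 * n ** 2   # extra bushels from fertilizer
    return price * yield_gain - fert_cost * n

# Profit-maximising rate: marginal revenue = marginal cost,
# i.e. price * (0.9 - 0.005 * n) = fert_cost.
n_opt = (0.9 - fert_cost / price) / 0.005

# Percentage of fertilizer-related profit lost by cutting the rate 30%.
loss_pct = 100 * (1 - profit(0.7 * n_opt) / profit(n_opt))
print(round(n_opt), round(loss_pct, 1))
```

With these invented parameters the optimal rate is about 157 lb/acre, and a 30% cut loses 9% of the profit from fertilizer; real response curves estimated from trial data can be flatter still.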

3. Some farmers believe that applying an extra-high rate of fertilizer provides them with a level of insurance. They think it reduces their risk of getting a low yield. However, the empirical evidence indicates exactly the opposite. When you weigh up the chances of an above-average yield and a below-average yield, higher fertilizer rates are actually more risky than lower rates. In addition, price risk interacts with yield risk to further increase the riskiness of high rates.

I think there is a real opportunity to explore these three factors in more depth and try to come up with policy approaches that could deliver reduced fertilizer usage in a highly cost-effective way. Some of it would just be about effective communication (e.g. the design of “nudges”, as popularised in behavioural economics) while some might require a modest financial commitment from government or industry. One idea is to offer something like a money-back guarantee to those farmers who agree to reduce their rates by a specified amount. If they lose money as a result, they get compensation. Because of the flatness of the fertilizer-profit relationship, the payments required would usually be very small.

I recorded the presentation to OMAFRA, and it’s available here.

Further reading

Pannell, D.J. (2006). Flat-earth economics: The far-reaching consequences of flat payoff functions in economic decision making, Review of Agricultural Economics 28(4), 553-566. Journal web page * Prepublication version here (44K). * IDEAS page

Pannell, D.J. (2017). Economic perspectives on nitrogen in farming systems: managing trade-offs between production, risk and the environment, Soil Research 55, 473-478. Journal web page

318 – Measuring impacts from environmental research

There have been some studies considering the relationship between research and environmental policy, but studies capturing the impact of research on environmental management, environmental policy, and environmental outcomes are relatively rare. Here is one attempt.

Environmental research may generate benefits in a variety of ways including by providing: information or technology that allows improved management of an environmental issue; information that fosters improved decision-making about priorities for environmental management or policy; or information about an environmental issue that is of intrinsic interest to the community. There are several reasons why it can be worth measuring the impacts of environmental research, including making a case for the funding of environmental research, informing decisions about research priorities, and helping researchers to make decisions about their research that increase its ultimate benefits.

Earlier this year we released the results of an assessment of the engagement and impacts of a particular environmental research centre, the ARC Centre of Excellence for Environmental Decisions (CEED). The assessment includes impacts on policy, management and the community, as well as measures of academic performance, including publications, citations and collaborations. Data were collected in several ways: a survey of all project leaders for the Centre’s 87 projects, the preparation of detailed case studies for selected projects, and collection of statistics on publications, citations and collaborations.

The approach taken was informed by a recent paper of ours called “Policy-oriented environmental research: What is it worth?” (Pannell et al. 2018). The full report is available here.

The Centre’s engagement with end users and stakeholders was strong in Australia and around the world. Researchers reported many examples of engagement with research users involved in policy and management. Results were highly heterogeneous and somewhat skewed, with the majority of observed impact occurring in a minority of the projects.

For almost half of the projects, the potential future increase in impact was assessed as being moderate or high. To some extent, this reflects the time lags involved in research attempting to influence policy and management, but the information was also used to identify projects for which additional engagement effort could be beneficial. The correlation between impact and academic performance was positive but low.

To obtain richer detail about impacts, detailed case studies were prepared for nine research projects. The projects were selected to be diverse, rather than representative. These case studies highlight the unique circumstances faced by each project in endeavouring to have an impact. Each project must be framed within a strong understanding of its domain and be deeply engaged with research users if impact is to occur. Substantial benefits for policy or management are apparent in a number of the case studies.

A factor contributing greatly to the impact of CEED was the research communication magazine Decision Point. This publication was widely accepted as a valued communication resource for academic findings in the field of environmental decision sciences, and was rated by people in government and academic institutions as relevant and informative.

Some valuable lessons and implications of the impact analysis are identified in the report. Research impact does not depend only on good relationships, engagement and communication, but also importantly on what research is done. Therefore, embedding a research culture that values impact and considers how it may be achieved before the selection of research projects is potentially important. The role of the Centre leadership team in this is critical. Embedding impact into the culture of a centre likely occurs more effectively if expertise in project evaluation is available internally, either through training or appointments.

A challenge in conducting this analysis was obtaining information related to engagement and impact. There may be merit in institutionalising the collection of impact-related data from early in the life of a new research centre.

Interestingly, we found little relationship between (a) impact from translation and engagement and (b) measures of academic merit. It should not be presumed that the most impactful projects will be those of greatest academic performance.

At the time of the assessment, CEED had generated 848 publications, which had been cited 14,996 times according to the Web of Science. CEED publications are disproportionately among the most cited papers in their disciplines. More than a quarter of CEED publications are in the top 10% of the literature, based on their citations. The citations of 39 CEED publications (about one in 22) place them in the top 1% of their academic fields over the past 10 years.

There are often long lags between the start of research and delivering the impact — decades in many cases. Therefore, there is a need to allow the longest possible time lag when assessing research impact. On shorter timescales, it may be possible to detect engagement, but not the full impact that will eventually result.

Further reading

Pannell, D.J., Alston, J.M., Jeffrey, S., Buckley, Y.M., Vesk, P., Rhode, J.R., McDonald-Madden, E., Nally, S., Gouche, G. and Thamo, T. (2018). Policy-oriented environmental research: What is it worth? Environmental Science and Policy 86, 64-71. Journal web page

Thamo, T., Harold, T., Polyakov, M. and Pannell, D. (2018). Assessment of Engagement and Impact for the ARC Centre of Excellence for Environmental Decisions, CEED, University of Queensland. http://ceed.edu.au/resources/impact-report.html

317 – The worth of wildlife

What is a threatened species worth? It may seem like a strange question, but it’s one that environmental economists have done a fair bit of research on.

If you measured their worth in commercial terms, the answer would be, probably nothing in most cases. But most of us care about threatened species and would be willing to pay something to prevent them from going extinct. There have been many studies conducted by environmental economists to estimate just how much people are willing to pay to protect particular threatened species. PhD student Vandana Subroy is lead author on a new study in the journal Ecological Economics where we conducted a “meta-analysis” – a review of 109 willingness-to-pay estimates from 47 studies around the world.

We found that the average willingness to pay to protect a species was US$414 per household (once off, not per year). Over a large population, this adds up to very large budgets being justified – vastly larger than the current budget for threatened species recovery in Australia.

Of course, the range across different species in different studies in different countries was enormous: as low as US$1 per household and as high as US$4,400.

Photo: J.J. Harrison (CC BY-SA 3.0)

Not surprisingly, people’s willingness to pay was much higher for “charismatic” species. Determining which species are “charismatic” is clearly subjective, but it’s safe to say they are typically large vertebrates that instinctively appeal to humans (e.g., elephants, pandas, and whales). In our study, species were treated as charismatic if they had been characterized as such in the original study, or elsewhere. The average willingness to pay was US$572 for charismatic species compared with US$106 for non-charismatic species.

Surprisingly, the difference in willingness to pay between developed and developing countries was small, and not statistically significant.

One of the most surprising things we learned from doing the study was just how poorly done many of the studies were. In many cases, the question asked of survey respondents did not make clear what was being valued. An amazing number of surveys were not clear about the base case – e.g. if there was no new intervention, what would be the probability of extinction of the species? Without that, respondents cannot give a meaningful willingness-to-pay response. Many surveys asked vaguely about “protecting” the species, but without saying what it was being protected from, or how protected it would be. Because of these and other weaknesses, we had to leave a lot of studies out of the meta-analysis. I commented to my colleagues that I wanted to cancel the economics degrees of the people who did these studies (assuming they had economics degrees).

If you’re interested, the paper can be downloaded for free even without a subscription, until August 30, 2019: here.

Further reading

Subroy, V., Gunawardena, A., Polyakov, M., Pandit, R. and Pannell, D.J. (2019). The worth of wildlife: A meta-analysis of global non-market values of threatened species, Ecological Economics (forthcoming). Journal web page