One of the foundational concepts of economics is the idea of diminishing marginal benefits. In this post I argue that it applies to economics itself.
A classic example of diminishing marginal benefits is the application of fertilizer to a wheat crop. As you add more fertilizer, the wheat yield increases, but it does so at a diminishing rate. The yield curve gets flatter and flatter as the fertilizer rate increases.
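To make the fertilizer example concrete, here is a small sketch (my own illustration, not from the post) using a Mitscherlich-style response function, one common functional form for diminishing returns; the parameter values are invented for illustration.

```python
import math

def yield_response(fert_rate, max_yield=4.0, curvature=0.02):
    """Wheat yield (t/ha) as a function of fertilizer rate (kg/ha).

    Mitscherlich-type curve: yield approaches max_yield but never
    exceeds it, so each extra unit of fertilizer adds less than the last.
    Parameter values are made up for illustration.
    """
    return max_yield * (1 - math.exp(-curvature * fert_rate))

# Marginal yield from each extra 50 kg/ha of fertilizer shrinks:
rates = [0, 50, 100, 150, 200]
yields = [yield_response(r) for r in rates]
marginals = [b - a for a, b in zip(yields, yields[1:])]
print([round(m, 2) for m in marginals])  # → [2.53, 0.93, 0.34, 0.13]
```

Each successive increment of fertilizer adds less yield than the one before, which is exactly the flattening curve described above.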
This flattening off of benefits has been found to be extremely common as the level of an input to a production process increases (even for inputs related to the environment – see PD183). In economics textbooks, it is assumed to be the default case. It underpins almost all of the field of production economics, and it is built into economists' thinking about anything to do with production.
Ironically, however, we don't often stop to think that it applies to our own discipline. If the X axis of a graph represents the level of resources and effort put into analysing an economic decision problem, and the Y axis represents the benefits generated from the resulting decision, we would expect to see just the same sort of shape.
To illustrate, imagine that we have a limited budget to spend on new projects and we are trying to select from a list of potential projects the ones that will deliver the best value for money – the greatest benefits per dollar of costs.
One option would be to do no economic analysis at all and just use our judgement about which projects are best. The projects we chose would generate some benefits. Those benefits may or may not be sufficient to outweigh the costs and, unless we are freakishly lucky or clever, they would be less than we could generate if we did do some economic analysis.
Next, suppose we do a simple 'quick-and-dirty' economic analysis of each of the potential projects and calculate a benefit:cost ratio for each. The analysis wouldn't be very sophisticated, and the numbers we used would be somewhat uncertain (because we would not be devoting much effort to getting the best possible numbers) but, nevertheless, decision theory shows that the information produced will probably help us make significantly better decisions.
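The quick-and-dirty approach amounts to a simple ranking rule. Here is a minimal sketch (my own illustration; the project names and numbers are invented): rank candidate projects by benefit:cost ratio and fund them in order until the budget runs out.

```python
# Hypothetical projects with illustrative benefit and cost figures.
projects = [
    {"name": "A", "benefit": 300, "cost": 100},
    {"name": "B", "benefit": 150, "cost": 30},
    {"name": "C", "benefit": 120, "cost": 80},
    {"name": "D", "benefit": 60, "cost": 50},
]

def select_by_bcr(projects, budget):
    """Greedy selection: fund projects in descending benefit:cost
    ratio order until the next project would exceed the budget."""
    ranked = sorted(projects, key=lambda p: p["benefit"] / p["cost"],
                    reverse=True)
    chosen, spent = [], 0
    for p in ranked:
        if spent + p["cost"] <= budget:
            chosen.append(p["name"])
            spent += p["cost"]
    return chosen

print(select_by_bcr(projects, budget=150))  # → ['B', 'A']
```

Greedy BCR ranking is only an approximation to the true budget-constrained optimum (a knapsack problem), but for indivisible projects and a modest budget it is usually close, which is part of why such simple analysis already captures much of the available benefit.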
You can see where this is going. There are many ways that we could add accuracy, detail and sophistication to the analysis we do of each project. For example, we could:
- explicitly represent the risks and uncertainties involved
- represent the risk attitudes of the decision makers
- represent the benefits and costs over a series of years, rather than a single year
- include ‘option values’ representing the value of deferring a decision
- conduct sophisticated statistical analysis to estimate the relationships and parameters to include in the analysis
- if relevant, conduct non-market valuation studies to estimate the intangible values expected to result from the projects
and so on.
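As one sketch of the first refinement on that list (explicitly representing uncertainty), we could treat a project's benefits as a random variable and simulate the distribution of its benefit:cost ratio rather than reporting a single number. The distribution parameters below are invented for illustration.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def simulate_bcr(mean_benefit, sd_benefit, cost, draws=10_000):
    """Monte Carlo summary of the benefit:cost ratio when benefits
    are uncertain (normally distributed, truncated at zero)."""
    bcrs = sorted(max(random.gauss(mean_benefit, sd_benefit), 0) / cost
                  for _ in range(draws))
    return {
        "mean": sum(bcrs) / len(bcrs),
        "p10": bcrs[int(0.10 * draws)],
        "p90": bcrs[int(0.90 * draws)],
    }

summary = simulate_bcr(mean_benefit=300, sd_benefit=120, cost=100)
print(summary)  # mean near 3, with a wide range between p10 and p90
```

This is more informative than a point estimate, but note how the conclusion (a BCR of roughly 3) is much the same as the quick-and-dirty version: the extra sophistication refines the answer more than it changes it.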
The more sophisticated, detailed and comprehensive we make the analysis, the greater would be the expected benefits from the resulting decisions, but evidence shows that the benefits increase at a diminishing rate.
One reason is that more sophisticated modelling and better data are expensive. We can do our quick-and-dirty modelling at very low cost and get some benefits, but doing a better analysis is likely to involve a lot more effort and cost. Thus, even if we are able to double the benefits, we would probably more than double the costs, resulting in some flattening out of the curve.
Another reason is that perfect decision making imposes an upper bound on the benefits we can generate from the decision. Once our analysis is sophisticated enough to support good decision making, the additional benefits that can be delivered through better decision making are limited to the difference between good and perfect decision making. Usually, the level of sophistication required to get to reasonably good decision making is a long way short of the frontiers of economics research.
So, where should applied economists aiming to support real decision makers strike the balance between simplicity and sophistication? There is no easy answer to that. It depends on the importance of the decision problem, the quality of existing information that their analysis will be built on, the time frame for the decision, and so on. But the existence of diminishing marginal benefits from economics means that the optimal balance will be pushed to some degree towards the simple end of the spectrum, more so, no doubt, than people who like building complex economic models would prefer!
Just to be clear, I am not arguing that any simplistic analysis is good enough. In the field I work in, environmental economics, I am constantly seeing decisions for which the supporting analysis is clearly not nearly sufficient (Pannell and Roberts 2010). But improving the quality and sophistication of the analysis doesn’t mean that we have to go to the other extreme. I believe that the analysis needs to consider all of the key factors explicitly (PD159), and to do so in a logical and theoretically sound way (PD158), but the treatment of each of those key factors can be relatively simple.
This difficult balance is something we’ve considered carefully in our development of the Investment Framework for Environmental Resources (INFFER) (Pannell et al. 2012).
Pannell, D.J. and Roberts, A.M. (2010). The National Action Plan for Salinity and Water Quality: A retrospective assessment, Australian Journal of Agricultural and Resource Economics 54(4): 437-456.

Pannell, D.J., Roberts, A.M., Park, G., Alexander, J., Curatolo, A. and Marsh, S. (2012). Integrated assessment of public investment in land-use change to protect environmental assets in Australia, Land Use Policy 29(2): 377-387.