Author Archives: David Pannell

329 – Best albums of the 2010s

Where did the decade go? Here are my 20 favourite albums over the past 10 years. 

In the previous two decades, my choice of best album was completely obvious and clear-cut: OK Computer in the 1990s and In Rainbows in the 2000s. This decade lacked such a standout album, so it wasn’t easy to settle on a ranking at the top. Although, for a change, Radiohead didn’t come in at number 1, they came close.

My top 10 is a testament to longevity, as half of the acts started performing in the 1960s (Bowie, Crimson, Beatles) or the 1970s (Wire, Costello). But there are also acts on the full list of 20 who emerged in each of the subsequent decades.

The criterion used for the ranking is how much I love these albums, so it’s totally subjective, and others would have completely different lists. The point is to raise awareness of some great music that might otherwise have escaped your notice. I know from some of the feedback I got for the equivalent list 10 years ago that it did achieve that for some people.

1. The Suburbs by Arcade Fire (2010). Arcade Fire’s third album was a fantastic surprise. Creative, but highly accessible and memorable pop/rock songs. Consistently wonderful throughout. An album that I was actually excited to play, and still love. And it surprised everybody even more by winning the Grammy Award for album of the year in 2011 (the competition was Lady Gaga, Eminem, Katy Perry and Lady Antebellum).

2. Change Becomes Us by Wire (2013). I am well aware that most readers of my blog will never have heard of the band Wire, and that’s a real shame. Their third album, 154 (1979), was extraordinarily creative and compelling. It is still one of my all-time favourites. Over the following 30 years they broke up a couple of times, and made some pretty ordinary albums, but starting in 2009 they’ve released a series of gems. This is the best, and it’s almost as good as 154.

3. A Moon Shaped Pool by Radiohead (2016). Another stunningly beautiful album from the world’s greatest and most creative existing band (noting that The Beatles continue to have regular releases and to top charts with ease without actually existing). This is gentler than their previous albums, but it’s not easy listening.

4. Blackstar by David Bowie (2016). Speaking of not existing, Bowie died two days after the release of this stunning album, his best since his heyday in the 1970s. It really seems that he intended to go out with the strongest possible creative statement, and he fully delivered.

5. Have One On Me by Joanna Newsom (2010). There is so much music in this triple album, and not just in terms of duration. These are gorgeous but rather complex songs that take time to really get fully into, but it is so worth putting in the effort. Her beautiful vocals (reminiscent of early Kate Bush) are accompanied by her harp and piano playing, plus a backing band that really understands this music and adds value to it.

6. Meltdown: Live in Mexico City by King Crimson (2018). King Crimson have been going for 50 years in a series of radically different incarnations, with the only common denominator being guitarist Robert Fripp. He seemed to retire in 2008, but then in 2014 created a new version of KC with seven members (there are eight on this album), playing wonderfully challenging and virtuosic music from across their whole career, plus new material. In the past five years they’ve put out a bunch of live albums that are all incredible. This triple album is almost an arbitrary choice amongst them, chosen because it includes a high-quality video of one of the concerts from which the album is drawn.

7. National Ransom by Elvis Costello (2010). Elvis claimed for a while that this would be his last studio album. If that had been true, it would have been a great one to go out on. Remarkably eclectic, great songs, great performances throughout, and with Elvis in outstanding voice.

8. The Weatherman by Gregory Alan Isakov (2013). Excellent gentle folky/country album of thoughtful and beautiful songs.

9. The Beatles (White Album) Super Deluxe Version by The Beatles (2018). I should be clear that this ranking of number 9 relates to the bonus material provided with this Super Deluxe Version of the album. The original album from 1968 would be number 1 on my list for any year. The bonus material includes a CD of acoustic demos and three CDs of outtakes from the recording sessions, which provide fantastic insights into the recording of the world’s best-ever album. If you know the album, you’ll know how appropriate it is that it came in at number 9.

10. Jen Cloher by Jen Cloher (2017). I bought this on the strength of a five-star review in The Guardian, and found that that rating is fully justified.

11. Bon Iver by Bon Iver (2011). Not quite as good as the first Bon Iver album, which featured on my list of best albums of the noughties, but much better than his subsequent albums, which go too far into electronics for my taste.

12. Tooth and Nail by Billy Bragg (2013). I’ve loved a few Billy Bragg albums down the years (and that means down more than 40 years), but I think this might be his very best. I see it’s been described as Americana and Alt-Country, and those labels seem to fit.

13. Sometimes I Sit and Think, and Sometimes I Just Sit by Courtney Barnett (2015). I first saw Courtney Barnett when she was Billy Bragg’s support act at the Perth Concert Hall in 2014, and I loved what she was doing immediately. That was just a solo gig, but her first album has a full band. The album did not disappoint. It didn’t disappoint a whole lot of other people either, and Courtney rapidly became one of Australia’s most successful and best-respected music exports. She was nominated for the Grammy Award for Best New Artist. The second time I saw her in concert was when she was the guitar player in Jen Cloher’s band (they are a couple).

14. Schmilco by Wilco (2016). Wilco never disappoints. As well as their great albums, their live shows are fantastic. This gentle album was my favourite Wilco release of the decade.

15. … Like Clockwork by Queens of the Stone Age (2013). This was not a gentle release. The usual high-quality heavy rock approach from QOTSA. Arguably their best collection of songs.

16. Ghosteen by Nick Cave and the Bad Seeds (2019). I’m not really a strong Nick Cave fan, and this is such a recent release that it is risky to commit to a ranking, but it is a gorgeous album, in the Nick Cave version of gorgeous. Apparently the songs were inspired by the death of his teenage son.

17. Tomorrow’s Modern Boxes by Thom Yorke (2014). I generally don’t like electronica or dance music much, but there is something compelling about this album, in classic Thom Yorke off-kilter style.

18. Silver/Lead by Wire (2017). Another Wire album on the list. Another great collection of heavy but melodic music.

19. Fantastic Guitars by Reeves Gabrels and Bill Nelson (2014). Reeves used to play guitar for David Bowie. Bill played guitar and sang in Be Bop Deluxe. They combine beautifully on this collection of instrumental guitar music.

20. Night Thoughts by Suede (2016). After their hugely successful early phase from 1993 to 2003, Suede broke up. After a long break they reformed and released excellent albums in 2013, 2016 and 2018. This is my favourite of the three. I also recommend the autobiography published by lead singer Brett Anderson in 2018, titled “Coal Black Mornings”.

328 – Weitzman discounting

Martin Weitzman (1942-2019) was an environmental economist who thought laterally. He made important contributions to the field in at least three areas. Here I’ll explain one of his clever insights: that uncertainty about the discount rate has an impact on the effect of discounting.

At a function in his honour in 2018, Weitzman said “I’m drawn to things that are conceptually unclear, where it’s not clear how you want to make your way through this maze,” and described how he “took a decisive step in that direction a few decades ago…getting into the forefront rather than…following everything that went on.”

[Video: Martin Weitzman’s Contributions to Environmental Economics]

He certainly did get to the forefront! Like a number of other environmental economists I’ve spoken to, I was disappointed that he didn’t win the Nobel Prize in 2018 when his work on climate change and discounting would have made him a perfect co-winner with William Nordhaus.

This PD is about discounting. To follow it, you’ll need to know what discounting is, and how it works. For some simple background, see PD33, and for some insights as to why discounting values from the distant future raises curly questions, see PD34.

You are probably aware that discounting at any rate likely to be recommended by an economist has the (perhaps uncomfortable) result that large benefits in the distant future count for little in the present. While there are arguments for accepting that this is in fact a reasonable and realistic result, it hasn’t stopped people looking for rationales to reduce the discount rate. Some really dodgy reasons have been proposed, including by economists (e.g. the Stern Report), but Weitzman came up with a simple idea that is obviously correct and has an effect equivalent to lowering the discount rate in the long run.

The insight was that, as we think about years further into the future, there is increasing uncertainty about what the discount rate should be in each year. This insight requires two breaks from the way that economists usually think about discount rates. The first is recognising that the appropriate discount rate to use is not necessarily constant over time. I remember thinking that it surely wasn’t constant when I first learnt about discounting, but then I just slipped into assuming that it is constant, like everybody else. Weitzman had the wit to remember that it didn’t have to be constant. [Technical note: I’m not talking about hyperbolic discounting here. In Weitzman’s conceptual model, the discount rate could go up or down from period to period.]

The second break from normal practice was to think about the discount rate for a given year as something that could be uncertain. It obviously is uncertain, but it had hardly ever been treated as such.

When economists want to represent uncertainty quantitatively, we usually do so by specifying a subjective probability distribution for the uncertain value. To represent a discount rate about which we are increasingly uncertain in the more distant future, we would use a probability distribution whose variance widens as time passes.

Having done that, Weitzman showed that an uncertain discount rate is mathematically equivalent to a certain discount rate that declines over time. In the video below I show how this works.

The spreadsheet I use in the above video is available here.
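
If you’d rather see the arithmetic than open the spreadsheet, here is a minimal sketch of the same kind of calculation in Python. The numbers are made up purely for illustration: a 50:50 chance that the (constant but unknown) annual discount rate is 2% or 6%.

    import numpy as np

    # Made-up subjective distribution for the constant but uncertain annual
    # discount rate: a 50% chance it is 2% and a 50% chance it is 6%.
    rates = np.array([0.02, 0.06])
    probs = np.array([0.5, 0.5])

    years = np.arange(1, 301)

    # Expected discount factor for each year: E[(1 + r)^-t]
    expected_factor = np.array([(probs * (1 + rates) ** -t).sum() for t in years])

    # The certain (certainty-equivalent) rate that gives the same factor in year t
    ce_rate = expected_factor ** (-1 / years) - 1

    for t in (1, 10, 50, 100, 200, 300):
        print(f"year {t:3d}: certainty-equivalent discount rate = {ce_rate[t - 1]:.2%}")

With these made-up numbers, the certainty-equivalent rate starts just below 4% and drifts down towards the 2% lower bound as the horizon lengthens. Note that the uncertainty in this sketch does not grow over time (the rate is simply unknown), which anticipates the point about constant uncertainty made below.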

The consequence, as described by Weitzman, is that “the ‘lowest possible’ interest rate should be used for discounting the far-distant future part of any investment project” (Weitzman 1998).

To get the declining-discount-rate result, you don’t even have to assume that uncertainty about the discount rate is increasing over time. As long as the rate is uncertain, even constant uncertainty will give that result.

The idea has been picked up in various ways, including in the guidelines for BCA published by the UK government. They don’t recommend doing all the uncertainty calculations explicitly, but they recommend using a discount rate that declines over time.

Note that to get the “lowest possible” discount rate, he really does mean “far-distant”. He’s talking about dates centuries into the future. The insight doesn’t have big implications for dates within about 50 years, which is about as far as many government Benefit: Cost Analyses go. For what I consider to be realistic representations of discount rate uncertainty, it would mainly affect the results for benefits and costs beyond 50 or 100 years in the future. (See the video for more on this.)

Note that uncertainty about discount rates in the distant future affects the impact of discounting in those distant future years. It doesn’t affect discounting in earlier years. As a result, even if the certainty-equivalent discount rate for year 100 falls to zero (i.e. the value discounted to year 99 is the same as in year 100), the values will still be discounted to express them as present values in year zero. So future benefits still get discounted quite a bit, just a bit less than they would have if you didn’t account for uncertainty. (See the video for more on this as well.)
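
To make that distinction concrete, here is a small extension of the earlier sketch, using the same made-up 2%/6% distribution. The one-year rate implied for the distant future approaches the lowest rate in the distribution, yet a dollar received in year 100 is still worth only a few cents today.

    import numpy as np

    rates, probs = np.array([0.02, 0.06]), np.array([0.5, 0.5])

    def expected_factor(t):
        """Expected discount factor E[(1 + r)^-t] under the made-up distribution."""
        return float((probs * (1 + rates) ** -t).sum())

    for t in (1, 50, 100, 200):
        one_year_rate = expected_factor(t) / expected_factor(t + 1) - 1
        print(f"year {t:3d}: implied one-year rate = {one_year_rate:.2%}, "
              f"present value of $1 received in year {t} = ${expected_factor(t):.3f}")

The rate applying around year 100 is close to the 2% floor, but the present value of a year-100 dollar (about seven cents in this example) still reflects the heavier discounting applied over the earlier decades.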

Of course, the discount rate isn’t the only thing that gets more uncertain as we look further into the future – pretty much everything does. But Weitzman’s insight is still useful and relevant for some investments, even if you explicitly look at other types of uncertainty as well.

When would I suggest using Weitzman discounting? For a BCA that is capturing benefits and costs for 100 years or more. I would recommend combining it with strategies to represent uncertainty about other key variables in the analysis.

In other work, Weitzman focused on the possibility that the end result of climate change could be truly catastrophic. He called it a “fat tailed” problem, for reasons you can read about in Weitzman (2011) and Weitzman (2014). He concluded that this should “make economists less confident about climate change BCA and to make them adopt a more modest tone that befits less robust policy advice” (Weitzman 2011, p.291).

Further reading

Weitzman, M.L. (1998). Why the Far-Distant Future Should Be Discounted at Its Lowest Possible Rate, Journal of Environmental Economics and Management 36, 201-208. Paper * IDEAS page

Weitzman, M.L. (2011). Fat-tailed uncertainty in the economics of catastrophic climate change, Review of Environmental Economics and Policy 5(2), 275-292. Paper

Weitzman, M.L. (2014). Fat Tails and the Social Cost of Carbon, American Economic Review 104(5), 544-546. Paper

327 – Heterogeneity of farmers

Farmers are highly heterogeneous. Even farmers growing the same crops in the same region are highly variable. This is often not well recognised by policy makers, researchers or extension agents.

The variation between farmers occurs on many dimensions. A random sample of farmers will have quite different soils, rainfall, machinery, access to water for irrigation, wealth, access to credit, farm area, social networks, intelligence, education, skills, family size, non-family labour, history of farm management choices, preferences for various outcomes, and so on, and so on. There is variation amongst the farmers themselves (after all, they are human), their farms, and the farming context.

This variation has consequences. For example, it means that different farmers given the same information, the same technology choices, or facing the same government policy, can easily respond quite differently, and they often do.

Discussions about farmers often seem to be based on an assumption that farmers are a fairly uniform group, with similar attitudes, similar costs and similar profits from the same practices. For example, it is common to read discussions of costs and benefits of adopting a new farming practice, as if the costs and the benefits are the same across all farmers. In my view, understanding the heterogeneity of farm economics is just as important as understanding the average.

Understanding the heterogeneity helps you have realistic expectations about how many farmers are likely to respond in particular ways to information, technologies or policies. Or about how the cost of a policy program would vary depending on the target outcomes of the program.

We explore some of these issues in a paper recently published in Agricultural Systems (Van Grieken et al. 2019). It looks at the heterogeneity of 400 sugarcane farmers in an area of the wet tropics in Queensland (the Tully–Murray catchment). These farms are a focus of policy because nutrients and sediment sourced from them are likely to be affecting the Great Barrier Reef. “Within the vicinity of the Tully-Murray flood plume there are 37 coral reefs and 13 seagrass meadows”.

Our findings include the following.

  • Different farmers are likely to respond differently to incentive payments provided by government to encourage uptake of practices that would reduce losses of nutrients and sediment.
  • Specific information about this can help governments target their policy to particular farmers, and result in the program being more cost-effective.
  • As the target level of pollution abatement increases, the cost of achieving that target would not increase linearly. Rather, the cost would rise more and more steeply, reflecting that a minority of farmers have particularly high costs of abatement (see the sketch below this list). This is actually the result that economists would generally expect (see PD182).
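
As a made-up illustration of that last point (not the Tully–Murray numbers), suppose each of 400 farms could abate one tonne of nutrient, with per-tonne costs drawn from a skewed distribution, and suppose a program recruits the cheapest abatement first:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical per-tonne abatement costs for 400 farms: skewed, so a minority
    # of farms are very expensive to bring into the program.
    abatement_cost = rng.lognormal(mean=3.0, sigma=1.0, size=400)

    # Recruit the cheapest abatement first and track the cumulative program cost.
    cumulative_cost = np.cumsum(np.sort(abatement_cost))

    for share in (0.25, 0.5, 0.75, 1.0):
        n = int(share * len(abatement_cost))
        print(f"abate {share:.0%} of the potential: cumulative cost = ${cumulative_cost[n - 1]:,.0f}")

In this sketch, the final quarter of the abatement costs far more than the first quarter, so the aggregate cost curve steepens sharply as the target rises.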

Further reading

Van Grieken, M., Webster, A., Whitten, S., Poggio, M., Roebeling, P., Bohnet, I. and Pannell, D. (2019). Adoption of agricultural management for Great Barrier Reef water quality improvement in heterogeneous farming communities, Agricultural Systems 170, 1-8. Journal web page * IDEAS page

326 – 60-second videos about our research

My School at the University of Western Australia is having a competition amongst staff and students to produce a 60-second video that says something interesting and engaging about our research.

I’ve put in two entries. The first one, about farmer adaptation to climate change, is the fun one.

The second one, about water pollution, is more traditional, but I hope it’s still interesting.

I’m also included in a third really creative entry that was put together by Maksym Polyakov.

Wish us luck. The winner will be announced in December.

Further reading

Thamo, T., Addai, D., Kragt, M.E., Kingwell, R., Pannell, D.J., and Robertson, M.J. (2019). Climate change reduces the mitigation obtainable from sequestration in an Australian farming system, Australian Journal of Agricultural and Resource Economics (forthcoming). Journal web site

Thamo, T., Addai, D., Pannell, D.J., Robertson, M.J., Thomas, D.T. & Young, J.M. (2017). Climate change impacts and farm-level adaptation: Economic analysis of a mixed cropping–livestock system, Agricultural Systems 150, 99-108. Journal web page * IDEAS page

Pannell, D.J. (2017). Economic perspectives on nitrogen in farming systems: managing trade-offs between production, risk and the environment, Soil Research 55, 473-478. Journal web site

Rogers, A.A., Burton, M.P., Cleland, J.A., Rolfe, J., Meeuwig, J.J. & Pannell, D.J. (2017). Expert judgements and public values: preference heterogeneity for protecting ecology in the Swan River, Western Australia, Working Papers 254025, University of Western Australia, School of Agricultural and Resource Economics. IDEAS page

325 – Ranking projects based on cost-effectiveness

Where organisations are unable or unwilling to quantify project benefits in monetary or monetary-equivalent terms, a common approach is to rank potential projects on the basis of cost-effectiveness. Just like ranking projects based on Benefit: Cost Ratio (BCR), this approach works in some cases but not others.

To rank projects based on cost-effectiveness, you choose the metric you will use to measure project benefits, estimate that metric for each project, estimate the cost of each project, and divide the benefit metric by the cost. You end up with a cost-effectiveness number for each potential project, and you use these numbers to rank the projects.
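
Here is a minimal sketch of those mechanics, with made-up numbers (the benefit metric could be anything non-monetary, such as hectares of habitat protected; costs are in $ millions):

    # Made-up projects: a non-monetary benefit metric and a cost in $ millions.
    projects = {
        "A": {"benefit": 120, "cost": 3.0},
        "B": {"benefit": 300, "cost": 10.0},
        "C": {"benefit": 80, "cost": 1.6},
        "D": {"benefit": 200, "cost": 8.0},
    }
    budget = 12.0

    # Rank by cost-effectiveness (benefit metric per dollar), highest first.
    ranked = sorted(projects.items(),
                    key=lambda item: item[1]["benefit"] / item[1]["cost"],
                    reverse=True)

    spent = 0.0
    for name, p in ranked:
        ce = p["benefit"] / p["cost"]
        fund = spent + p["cost"] <= budget
        if fund:
            spent += p["cost"]
        print(f"Project {name}: cost-effectiveness = {ce:4.1f} per $m -> {'fund' if fund else 'skip'}")

As discussed below, working down a ranked list like this is only appropriate for separate, unrelated projects (case (i)); the other cases need different treatment.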

An advantage of this approach is that it sidesteps the challenges of having to measure all the benefits in monetary or monetary-equivalent terms, which is what you have to do to calculate a BCR. A disadvantage is that it only works to compare projects that generate similar types of benefits, which can all be measured with the same metric.

Assuming that you are satisfied with your benefits metric and that the projects to be ranked are similar enough, the question is, in what circumstances is it appropriate to rank projects based on cost-effectiveness? (Assuming that the objective is to maximise the overall benefits across all the projects that get funded.) It is logical to ask this given that cost-effectiveness is closely related to the BCR (it has the same structure – it’s just that benefits are measured differently), and we’ve seen in PD322, PD323 and PD324 that ranking projects by BCR works in some situations but not others.

It turns out that the circumstances where it is logical to use cost-effectiveness to rank projects are equivalent to the circumstances where it is logical to rank projects using BCR.

(i) If you are ranking separate, unrelated projects, doing so on the basis of cost-effectiveness is appropriate. Ranking projects by cost-effectiveness implies that there is a limited budget available and you are aiming to allocate it to the best projects.

(ii) If you are ranking mutually exclusive projects (e.g. different versions of the same project), ranking on the basis of cost-effectiveness can be highly misleading. If there are increasing marginal costs and/or decreasing marginal benefits (which are normal), ranking by cost-effectiveness will bias you towards smaller project versions. In PD323, I said to rank such projects by NPV and choose the highest NPV you can afford with the available budget. If we are not monetising the benefits, there is no equivalent to the NPV — you cannot subtract the costs from a non-monetary version of the benefits. This means that, strictly speaking, you cannot rank projects in this situation (mutually exclusive projects) without monetising the benefits. If you absolutely will not or cannot monetise the benefits, what I suggest you do instead is identify the set of project versions that can be afforded with the available budget, and choose the project version from that set that has the highest value for the benefit metric. (Theoretically it should be the project version with the greatest net benefit (benefits – costs) but that is not an option here because in Cost-Effectiveness Analysis the benefits and costs are measured in different units.)

You don’t divide by the costs, but you do use the costs to determine which project versions you can afford. This is a fudge that only makes sense if you adopt the unrealistic assumption that any unspent money will not be available to spend on anything else, but it seems to me to be the best way to go, if monetising the benefits is not an option.
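
Here is a small sketch of that rule, with made-up numbers for three mutually exclusive versions of one project:

    # Made-up mutually exclusive versions of one project.
    versions = {
        "small": {"benefit": 40, "cost": 10},
        "medium": {"benefit": 65, "cost": 22},
        "large": {"benefit": 75, "cost": 45},
    }
    budget = 30

    # Keep only the versions we can afford, then pick the highest benefit metric.
    affordable = {name: v for name, v in versions.items() if v["cost"] <= budget}
    best = max(affordable, key=lambda name: affordable[name]["benefit"])
    print(f"Choose the {best} version: benefit metric {affordable[best]['benefit']}, "
          f"cost {affordable[best]['cost']}")

Note that this picks the medium version even though the small version has the higher cost-effectiveness ratio, which is exactly the bias described above.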

(iii) If you are ranking separate, unrelated projects, and there are multiple versions available for at least one of those projects, then cost-effectiveness does not work and the rule about choosing the highest-value benefit metric does not work either. Instead, you should build an integer programming model to simultaneously weigh up both problems: which project(s) and which project version(s). There is a brief video showing you how to do this in Excel in PD324. In the video, the benefits are measured in monetary terms, but the approach will work if you use non-monetary measures of the benefits.
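
The video in PD324 shows the Excel version. For anyone who prefers code, here is a rough sketch of the same kind of integer programming model using scipy’s milp function, with made-up numbers: two unrelated projects (A and B), each with two mutually exclusive versions, and a shared budget.

    import numpy as np
    from scipy.optimize import Bounds, LinearConstraint, milp

    # Made-up options. The benefit metric can be non-monetary; costs are in $m.
    options = ["A-small", "A-large", "B-small", "B-large"]
    benefits = np.array([40.0, 70.0, 30.0, 55.0])
    costs = np.array([10.0, 30.0, 8.0, 25.0])
    budget = 40.0

    # Constraint rows: total cost <= budget; at most one version of A; at most one of B.
    A = np.vstack([costs,
                   [1, 1, 0, 0],
                   [0, 0, 1, 1]])
    constraints = LinearConstraint(A, ub=[budget, 1, 1])

    # milp minimises, so negate the benefits; every decision variable is binary (0/1).
    result = milp(c=-benefits,
                  constraints=constraints,
                  integrality=np.ones(len(benefits)),
                  bounds=Bounds(0, 1))

    chosen = np.round(result.x).astype(bool)
    print("Fund:", [name for name, pick in zip(options, chosen) if pick])
    print("Total benefit:", benefits[chosen].sum(), "| total cost:", costs[chosen].sum())

With these made-up numbers the optimiser funds A-large plus B-small (total benefit 100 for a cost of 38), whereas naively working down a cost-effectiveness ranking would fund A-small plus B-small and deliver a benefit of only 70 from the same budget.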

There are a number of tools available for ranking projects based on cost-effectiveness (e.g. Joseph et al. 2009) but it is important to be clear that the approach only works in certain cases.

Even if you are using cost-effectiveness in the right circumstances (case (i) above), it has a couple of limitations relative to using BCR. One is that you cannot use it to rank projects with distinctly different types of benefits that cannot all be measured with the same metric. Another limitation is that cost-effectiveness provides no evidence about whether any of the projects would generate sufficient benefits to outweigh its costs.

Further reading

Joseph, L.N., Maloney, R.F. and Possingham, H.P. (2009). Optimal allocation of resources among threatened species: a project prioritization protocol. Conservation Biology, 23, 328-338.  Journal web site

Pannell, D.J. (2015). Ranking environmental projects revisited. Pannell Discussions 281. Here * IDEAS page