Monthly Archives: October 2012

227 – ‘Disadoption’ after a project ends

There are various programs and projects around the world that aim to encourage farmers to adopt a new practice of one sort or another. It’s not uncommon to observe farmers participating in such projects, but then reverting to their old practices once the project ends. What are the implications of this?

If a program has a limited life, it is usually most realistic to assume that funding for projects will be temporary. Examples include Australia’s national natural resource management programs (Caring for our Country, the Natural Heritage Trust, and the National Action Plan for Salinity and Water Quality) which provided one-off funding for projects, usually three years or less. Assuming that we want benefits from these programs to be enduring (which we surely do), we would seek to avoid the sort of scenario I outlined above, where farmers abandon the new practices once the flow of program money ends.

This implies that these programs should be careful to target their resources to the promotion of practices that are expected to provide positive net benefits to the target farmers. That is, practices that, once farmers learn about them, will be attractive enough to be continued without ongoing support.

This sort of thinking seems to me to have been completely absent from the above programs, and from many other temporary programs around the world. For example, last week I attended an interesting workshop on Conservation Agriculture in Africa and South Asia, and there seem to have been many examples in that space of temporary ‘adoption’ that was abandoned once projects ended.

Once this has occurred, the logical response is to cease any further efforts to promote the activity, unless you have strong reasons to expect that the circumstances have changed significantly. Examples of relevant changes could include: a new version of the practice has been developed that will perform better for these farmers, a policy barrier to its adoption has been removed, or commodity prices have changed in a way that makes the practice more attractive.

This sort of ‘disadoption’ actually gives us powerful insights into the practice that was being promoted. The farmers have tried out the practice in their own context, and decided to stop doing it, so they are making a relatively well-informed judgement that the practice does not suit them. This is clearer and more powerful than simply observing that a practice has never been adopted in an area. If it has never been tried out, you probably can’t be sure that it wouldn’t work if it were tried. But if it has been tried and then abandoned, you can be relatively sure about it.

Unfortunately, this sort of common-sense response often doesn’t occur. In the national salinity program we found cases where farmers had been paid repeatedly to ‘adopt’ perennial pasture, but had ‘disadopted’ it each time. In Africa, relatively untargeted promotion of Conservation Agriculture has persisted despite it being well known that ‘adoption’ often evaporates once programs end.

A key understanding is that participation in these sorts of programs does not actually constitute adoption. From the farmer’s perspective, it’s really a case of farmers trialing the practice to see if it works sufficiently well for them. (That’s why I’ve put ‘adoption’ in quotes above.) The benefit of the program is that it allows farmers to make better-informed decisions about adoption, whether or not those decisions are to adopt the practice.

The other implication is that funding that would have been spent on promoting non-adoptable practices should be diverted to other uses. That could include promoting those practices to farmers who have been carefully assessed as likely to adopt after trialing, or focusing on ways to improve the attractiveness of the practices, instead of promoting them in their current form.

Further reading

Pannell, D.J. and Roberts, A.M. (2010). The National Action Plan for Salinity and Water Quality: A retrospective assessment, Australian Journal of Agricultural and Resource Economics 54(4): 437-456. Journal web site here ♦ IDEAS page for this paper

Pannell, D.J., Marshall, G.R., Barr, N., Curtis, A., Vanclay, F. and Wilkinson, R. (2006). Understanding and promoting adoption of conservation practices by rural landholders. Australian Journal of Experimental Agriculture 46(11): 1407-1424.

If you or your organisation subscribes to the Australian Journal of Experimental Agriculture you can access the paper via the journal’s web site (or non-subscribers can buy a copy on-line for A$25). Otherwise, email me to ask for a copy.


226 – Modelling versus science

Mick Keogh, from the Australian Farm Institute, recently argued that “much greater caution is required when considering policy responses for issues where the main science available is based on modelled outcomes”. I broadly agree with that conclusion, although there were some points in the article that didn’t gel with me. 

In a recent feature article in Farm Institute Insights, the Institute’s Executive Director Mick Keogh identified increasing reliance on modelling as a problem in policy, particularly policy related to the environment and natural resources. He observed that “there is an increasing reliance on modelling, rather than actual science”. He discussed modelling by the National Land and Water Resources Audit (NLWRA) to predict salinity risk, modelling to establish benchmark river condition for the Murray-Darling Rivers, and modelling to predict future climate. He expressed concern that the modelling was based on inadequate data (salinity, river condition) or used poor methods (salinity) and that the modelling results are “unverifiable” and “not able to be scrutinised” (all three). He claimed that the reliance on modelling rather than “actual science” was contributing to poor policy outcomes.

While I’m fully on Mick’s side regarding the need for policy to be based on the best evidence, I do have some problems with some of his arguments in this article.

Firstly, there is the premise that “science and modelling are not the same”. The reality is nowhere near as black-and-white as that. Modelling of various types is ubiquitous throughout science, including in what might be considered the hard sciences. Every time a scientist conducts a statistical test using hard data, she or he is applying a numerical model. In a sense, all scientific conclusions are based on models.

I think what Mick really has in mind is a particular type of model: a synthesis or integrated model that pulls together data and relationships from a variety of sources (often of varying levels of quality) to make inferences or draw conclusions that cannot be tested by observation, usually because the issue is too complex. This is the type of model I’m often involved in building.

I agree that these models do require particular care, both by the modeller and by decision makers who wish to use results. In my view, integrated modellers are often too confident about the results of a model that they have worked hard to construct. If such models are actually to be used for decision making, it is crucial for integrated modellers to test the robustness of their conclusions (e.g. Pannell, 1997), and to communicate clearly the realistic level of confidence that decision makers can have in the results. In my view, modellers often don’t do this well enough.
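To make the idea of testing robustness concrete, here is a minimal sketch of a one-at-a-time sensitivity analysis, the simplest of the approaches in this territory (see Pannell, 1997). The `net_benefit` function and its parameter values are purely hypothetical stand-ins for a real integrated model; the point is the procedure, not the numbers.

```python
# One-at-a-time sensitivity analysis: vary each parameter around its
# base value while holding the others fixed, and report the range of
# model outcomes. The model below is a hypothetical toy example.

def net_benefit(yield_gain, price, adoption_cost):
    """Toy model: annual net benefit per hectare of a new practice."""
    return yield_gain * price - adoption_cost

base = {"yield_gain": 0.4, "price": 250.0, "adoption_cost": 80.0}

for name in base:
    results = []
    for factor in (0.5, 1.0, 1.5):          # vary the parameter +/-50%
        params = dict(base)
        params[name] = base[name] * factor
        results.append(net_benefit(**params))
    print(f"{name}: outcomes range from {min(results):.1f} to {max(results):.1f}")
```

Even in this toy example, the sign of the net benefit flips within the plausible parameter ranges, and that is exactly the kind of caveat that should be reported to decision makers alongside any headline result.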

But even in cases where they do, policy makers and policy advisors often tend to look for the simple message in model results, and to treat that message as if it was pretty much a fact. The salinity work that Mick criticises is a great example of this. While I agree with Mick that aspects of that work were seriously flawed, the way it was interpreted by policy makers was not consistent with caveats provided by the modellers. In particular, the report was widely interpreted as predicting that there would be 17 million hectares of salinity, whereas it actually said that there would be 17 million hectares with high “risk” or “hazard” of going saline. Of that area, only a proportion was ever expected to actually go saline. That proportion was never stated, but the researchers knew that the final result would be much less than 17 million. They probably should have been clearer and more explicit about that, but it wasn’t a secret.

The next concern expressed in the article was that models “are often not able to be scrutinised to the same extent as ‘normal’ science”. It’s not clear to me exactly what this means. Perhaps it means that the models are not available for others to scrutinise. To the extent that that’s true (and it is true sometimes), I agree that this is a serious problem. I’ve built and used enough models to know how easy it is for them to contain serious undetected bugs. For that reason, I think that when a model is used (or is expected to be used) in policy, the model should be freely available for others to check. It should be a requirement that all model code and data used in policy is made publicly available. If the modeller is not prepared to make it public, the results should not be used. Without this, we can’t have confidence that the information being used to drive decisions is reliable.

Once the model is made available, if the issue is important enough, somebody will check it, and any flaws can be discovered. Or if the time frame for decision making is too tight for that, government may need to commission its own checking process.

This requirement would cause concern among some scientists. In climate science, for example, some scientists have actively fought requests for data and code. (Personally, I think the same requirement should be enforced for peer-reviewed publications, not just for work that contributes to policy. Some leading economics journals do this, but not many journals in other disciplines.)

Perhaps, instead, Mick intends to say that even if you can get your hands on a model, it is too hard to check. If that is what he means, I disagree. I don’t think checking models generally is harder than checking other types of research. In some ways it is easier, because you should be able to replicate the results exactly.

Then there is the claim that poor modelling is causing poor policy. Of course, that can happen, and probably has happened. But I wouldn’t overstate how great a problem this is at the moment, because model results are only one factor influencing policy decisions, and they often have a relatively minor influence.

Again, the salinity example is illustrative. Mick says that the faulty predictions of salinity extent were “used to allocate funding under the NAP”. Apparently they influenced decisions about which regions would qualify for funding from the salinity program. However, in my judgement, they had no bearing on how much funding each of the 22 eligible regions actually received. That depended mainly on how much and how quickly each state was prepared to allocate new money to match the available Federal money, coupled with a desire to make sure that no region or state missed out on an “equitable” share (irrespective of their salinity threat).

The NLWRA also reported that dryland salinity is often a highly intractable problem. Modelling indicated that, in most locations, a very large proportion of the landscape area would need to be planted to perennials to get salinity under control. This was actually even more important information than the predicted extent of salinity because it ran counter to the entire philosophy of the NAP, of spreading the available money thinly across numerous small projects. But this information, from the same report, was completely ignored by policy makers. The main cause of the failure of the national salinity policy was not that it embraced dodgy modelling about the future extent of salinity, but that it ignored much more soundly based modelling that showed that the strategy of the policy was fundamentally flawed.

Mick proposes that “Modellers may not necessarily be purely objective, and ‘rent seeking’ can be just as prevalent in the science community as it is in the wider community.” The first part of that sentence definitely is true. The last part definitely is not. Yes, there are rent-seeking scientists, but most scientists are influenced to a greater or lesser extent by the explicit culture of honesty and commitment to knowledge that underpins science. The suggestion that, as a group, scientists are just as self-serving in their dealings with policy as other groups that lack this explicit culture is going too far.

Nevertheless, despite those points of disagreement, I do agree with Mick’s bottom line that “Governments need to adopt a more sceptical attitude to modelling ‘science’ in formulating future environmental policies”. This is not just about policy makers being alert to dodgy modellers. It’s also about policy makers using information wisely. The perceived need for a clear, simple answer for policy sometimes drives modellers to express results in a way that portrays a level of certainty that they do not deserve. Policy makers should be more accepting that the real world is messy and uncertain, and engage with modellers to help them grapple with that messiness and uncertainty.

Having said this, I’m not optimistic that it will actually happen. There are too many things stacked against it.

Perhaps one positive thing that could conceivably happen is adoption of Mick’s recommendation that “Governments should consider the establishment of truly independent review processes in such instances, and adopt iterative policy responses which can be adjusted as the science and associated models are improved.” You would want to choose carefully the cases when you commissioned a review, but there are cases when it would be a good idea.

Some scientists would probably argue that there is no need for this because their research has been through a process of peer review before publication. However, I am of the view that peer review is not a sufficient level of scrutiny for research that is going to be used as the basis for large policy decisions. In most cases, peer review provides a very cursory level of scrutiny. For big policy decisions, it would be a good idea for key modelling results to be independently audited, replicated and evaluated.

Further reading

Keogh, M. (2012). Has modelling replaced science? Farm Institute Insights 9(3), 1-5.

Pannell, D.J. (1997). Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16(2): 139-152. Full paper here ♦ IDEAS page for this paper

Pannell, D.J. and Roberts, A.M. (2010). The National Action Plan for Salinity and Water Quality: A retrospective assessment, Australian Journal of Agricultural and Resource Economics 54(4): 437-456. Journal web site here ♦ IDEAS page for this paper

225 – It was 50 years ago today

October 5, 2012 marks the 50th anniversary of The Beatles’ first proper release. The single Love Me Do came out in England to no outstanding acclaim, enjoying moderate chart success. Like most aspects of The Beatles’ career, there is a great story behind it.

The year 1962 started with The Beatles failing an audition for a recording contract with Decca. The man who turned them down, Dick Rowe, signed Brian Poole and the Tremeloes instead! Imagine the regret he’s lived with ever since. However, if you listen to what The Beatles recorded for Decca that day (available on various semi-legal releases, and in part on ‘Anthology 1’), his decision is understandable. The recordings are fascinating in the light of what came later but, in truth, they aren’t very good, and some parts are cringe-worthy. Listening to them, it would have been hard to imagine any particularly notable success for the band.

This was the most well-known of numerous knock-backs they received. Pretty much every record label in England turned them down, including Parlophone, the label they would end up on. George Martin, the head of Parlophone, heard nothing that interested him in the Decca audition recordings when they were played to him by Beatles’ manager Brian Epstein. However, some months later, Martin was forced to offer them a recording contract by his boss, who was responding to enormous pressure from their in-house publishing company, which wanted to publish the songs of Lennon and McCartney.

Their first Parlophone recording session on June 5 was different from the Decca episode in several ways. For one thing, judging from the two songs that survive from the session, they sounded much better – more together and fuller. Secondly, three of the four songs recorded were Lennon and McCartney originals, including Love Me Do, whereas for Decca they had mainly recorded cover versions. Thirdly, they had a recording contract – it was not an audition.

After the recording session Martin’s main negative comment was that he didn’t think much of their drummer, Pete Best. This precipitated one of the most discussed episodes in the band’s career: Best’s sacking and replacement by Ringo Starr, right on the cusp of their breakthrough to massive success. This was really tough on Best. He had paid his dues thoroughly with the band, performing in hundreds of shows with them, and being a key part of their evolution from rank amateurs to a band ready for extreme greatness. He had suffered with them through two years of extremely poor playing conditions and long, long hours in the seedy clubs of Hamburg. Best was also a popular band member amongst Beatles fans, due to his moody good looks – a sort of James Dean character. (You can get some hint of that in the above photo – Best is on the left).

The other element of the controversy was the way he found out the news – from manager Brian Epstein, rather than from his fellow band members. It was terribly cowardly of them, really.

So why was he sacked? A range of reasons have been proposed.

(a) He didn’t fit in well with the other three Beatles, who were extremely tight-knit and shared a distinctive sense of humour, and a distinctive hair-do which Best refused to adopt.

(b) The others were jealous of his popularity.

(c) He was a poor drummer.

I think reason (b) is implausible. A popular, handsome member is an asset to a pop band. Each of them had an avid individual following, anyway, even in those early days. This reason was made up by fans in Liverpool who lacked the musical knowledge to judge reason (c).

The Beatles themselves referred to reason (a) in the ‘Anthology’ TV series, and it certainly was part of it. Best kept himself apart from the other three, choosing not to socialise with them.

But the main reason was (c). You can hear Best’s drumming on various historical releases from those very early days, and it is generally pretty bad. In particular, on that first recorded version of Love Me Do from June 1962 (released in 1995 on ‘Anthology 1’), the drumming is absolutely terrible. No wonder Martin didn’t like it. It must have grated on John, Paul and George, who were solid musicians (or much better than solid in Paul’s case). It would have been absolutely obvious to them that Best was their musical weak link. Replacing him with Ringo gave them a much tighter, more professional sound, and he was much more engaged and lively on stage.

So the first set of recordings was shelved. When The Beatles returned to the Abbey Road recording studio on September 4 to have another go at recording their first single, Ringo was the drummer, having quit his existing band with only a couple of days’ notice. George was sporting a black eye, received from a Beatles fan protesting about Best’s sacking (or another theory is that it was a local tough jealous of his girlfriend’s interest in the band). George’s head is intentionally angled away from the camera in the photo from the session (left – he is second from the left) to hide his shiner.

There were two songs in contention for the single A side: Love Me Do and a song that George Martin preferred, How Do You Do It?, which was not a Beatles original. After the sessions, The Beatles argued strongly that they wanted to go with Love Me Do rather than covering somebody else’s song. This speaks volumes about their confidence even then. These days, it is expected for bands to write their own material, but that is purely a result of The Beatles. It was almost unheard of in those days. And to resist the preferences of their producer, who would have been used to getting his own way, was also courageous. What’s more, George Martin’s judgement about the commercial potential of the other song was spot on. It was later a number 1 hit for Gerry and the Pacemakers.

But the Beatles hated How Do You Do It? – it was far too wimpy for them. So, right from the start, The Beatles demanded creative freedom from their producer. George Martin really didn’t want to allow it – he still didn’t think much of Love Me Do. However, when the songwriter of How Do You Do It? heard the Beatles recording of it, he hated what they had done to it, and refused to allow them to release it, leaving Martin with no choice but to make Love Me Do the A side.

Because How Do You Do It? was ruled out, they now lacked a recording for the B side. So they went back to Abbey Road a week later to record P.S. I Love You. This marked another dramatic moment, as the Parlophone producer (not George Martin this time, but a stand-in named Ron Richards) had engaged a session drummer, Andy White, to replace Ringo. This was really unnecessary and unfair on Ringo. His playing on the September 4 version of Love Me Do was fine, although he’d had some difficulties during that session. You can’t hear Ringo’s difficulties in the recording (it is available on ‘Past Masters Volume 1’), but there were other more obvious weaknesses: Paul’s singing was a little bit off in one or two places, and the bass guitar was slightly out of tune. So it was fortunate that they had this third session, as it gave them the opportunity to record Love Me Do for a third time (once they’d finished recording P.S. I Love You), and this time they nailed it. Ringo was gutted not to be playing on it, but he dutifully played tambourine along with the session drummer. This third version is certainly the best of the three, due to better singing, the in-tune bass guitar and the overall mix – nothing to do with the drumming.

Ironically, when the single came out on October 5, due to a mix-up, it contained the second version with Ringo on drums. It was switched to the third version on the ‘Please Please Me’ album and on subsequent pressings of the single. To avoid that mistake happening again, EMI destroyed the master tape of the Ringo version. They had to take it from a copy of the first pressing of the single to include it on ‘Past Masters Volume 1’.

The song peaked at number 17 on the English charts. To me, this seems about right. As a song, Love Me Do is OK but not great – probably their weakest single. Mind you, it could have gone higher if Parlophone had promoted it properly. George Martin felt it had no prospects of making the charts, and left it to sink. Only after it made the top 20 anyway did he realise that he’d misjudged it and come fully on side as an enthusiastic ally.

Love Me Do is one of only a few of their very early compositions that they actually recorded for EMI. (Others included One After 909, I’ll Follow the Sun and When I’m 64.) Interestingly, they had written dozens of songs together in 1957-59, but hadn’t written any at all in the two years before Brian Epstein became their manager at the end of 1961. In their shows they were solely a covers band until they got the recording contract, and for some reason they were kicked into action as composers at that time (probably encouraged by Epstein).

Love Me Do wasn’t released in the US at that time. By the time it was, in 1964, The Beatles were totally dominating the music industry, and it went to number 1 as a matter of course.

After Love Me Do, the pace of The Beatles’ improvement in song writing and recording was unbelievable, reaching something close to perfection just 4-5 years later with the Penny Lane/Strawberry Fields Forever single and then the ‘Sgt Pepper’s Lonely Hearts Club Band’ album. Play Love Me Do and Strawberry Fields Forever back to back and be amazed. From simple pop song to an astonishingly great piece of art in four years.

P.S. 27 December 2013. I have extensively edited the piece to bring it into line with the correct history, as described in the fantastic new biography “Tune In” by Mark Lewisohn. His account corrects a number of errors that have been perpetuated in all previous biographies. For example, he clarifies that the first session for Parlophone was not an audition – they already had a contract by then. Also I had no idea that George Martin signed them against his will. That’s amazing. It is also the first book to be really clear about why Pete Best was sacked. I’ve always known he was a terrible drummer with awful timing – just listen to the recordings! But no previous book had made clear to non-musician readers how terrible he was. They can have no doubt after reading this book. Here are four examples that make the point eloquently.

Early Beatles collaborator Tony Sheridan said, “Pete was a crap drummer, you can take my word for it. He was just not competent, and there were discrepancies between his feet and his hands.”

In an earlier recording session in Germany, the producer didn’t think Pete’s drumming was good enough for recording. He physically removed the bass drum and tom-tom drum from his kit in an attempt to keep him on time! The engineer from the session was quoted saying “the drummer is just bloody awful”.

The engineer from the Decca audition session said, “I thought Pete Best was very average and didn’t keep good time. You could pick up a better drummer in any pub in London. … If Decca was going to sign the Beatles, we wouldn’t have used Pete Best on the records.”

Ron Richards (main producer on first Parlophone session), said to George Martin, “He’s useless; we’ve got to change this drummer.”

Clearly, there was absolutely no way Best could stay in the Beatles. Best himself didn’t accept that his drumming was not up to scratch, and repeatedly claimed that there must have been some other reason for his sacking. That he could listen to those recordings and think they were OK just reinforces how unsuitable he was.