274 – Tokenistic policies

Many government actions are tokenistic. They are too small to really make a difference, but they are pursued anyway. Why do governments do this, and how do they get away with it without provoking public anger?

Listening to ABC Radio National’s breakfast program this week, I heard an interesting interview with Professor Hugh White from the Strategic and Defence Studies Centre at the Australian National University. He was arguing that the current response to the IS threat in Syria and Iraq is too small and constrained to achieve any significant impact on the progress of IS.

“If you find yourself, as I think we do today, undertaking military operations without making them big enough to give yourself a reasonable chance of success, you’re just going through the motions and you’re better off not doing it.”

“Going through the motions doesn’t make strategic sense and I don’t think it makes moral sense either.”

What struck me about this argument was its similarity to my own argument about some environmental investments by governments. Starting with dryland salinity, I argued that our investment was spread too thinly across too many projects for any of them to be successful. Reinforcing this, the Australian National Audit Office concluded that the level of change in land management in well-monitored cases was about one percent of the level needed to achieve stated targets.

More recently, I’ve been researching other aspects of water quality (nutrients and sediment), and there too governments tend to hugely under-fund projects. For example, funding to protect the Gippsland Lakes in Victoria from nutrient and sediment pollution is around 2% of the level that would be needed to achieve the official target of a 40% reduction (see PD210).

One question is, why do governments do this? The reasons probably vary from case to case, but I think there are two main factors. The first is the desire to be seen to be doing something. At least in some cases, the government realises that the funding allocated is woefully inadequate, but it proceeds with the policy anyway because it thinks there is electoral advantage in being seen to be doing something rather than nothing. So this is a cynical political motive.

In other cases, I think the reason is ignorance, combined with a lack of evidence and analysis in the policy-development phase and a tendency towards excessive optimism about the effectiveness of a proposed policy (PD213). That was the problem with the salinity policy. Lots of people thought it was a good idea to have a policy to combat such a prominent national problem, but very few people had enough knowledge of the science and economics of salinity to recognise that the policy was badly misconceived and would achieve little. The policy approach adopted was an evolution of earlier programs (the National Landcare Program and the Natural Heritage Trust) rather than one designed after careful analysis of what it would really take to substantially reduce the impacts of salinity.

This second reason is, perhaps, less offensive than raw cynical politics, but it’s still terrible.

Another interesting question is, how do they get away with it? Why is there not more public anger directed at these politically motivated or ill-conceived policies? Here are some possibilities.

Complexity. The issues I’ve talked about are complex and multi-faceted. It can be difficult even for experts to work out what policy response would be most effective. Most people lack the expertise to judge whether any particular policy response will be effective. They don’t have the time or inclination to learn enough to make those judgements. They therefore trust governments to do what they say they are doing.

Time lags. For some of these issues, the effects of current management would not be felt for some time – years or even decades in the future. By then, it’s hard to make the connection back to policies that were put in place previously, and judge whether they made a positive difference.

Intractability. Some of these problems could be solved but only at exorbitant expense, while others can’t be solved at all in any practical sense. I suspect that governments sometimes recognise this and then implement the least costly policy they think they can get away with politically.

Communication challenges. I was interested that, in her interview with Hugh White, the program’s host Fran Kelly did not pursue questions about the tokenistic nature of the policy, focusing instead on other issues. Perhaps she felt the argument was too complex or subtle to be comprehended by people eating their Weet-Bix. Or perhaps she herself didn’t recognise its significance.

Sometimes an underfunded policy does explode into political controversy because of its ineffectiveness, but usually it doesn’t. Normally, such policies drift along, spending money and going nowhere much. They might receive an adverse review from some government committee or inquiry, but governments tend not to respond substantively to those sorts of reviews if they think they can get away with it.

Overall, policy tokenism is an understandable but regrettable aspect of our system of democratic government. It is hard to combat, but it can sometimes be countered by outside pressure, either from the public or from vocal expert commentators.

Further reading

Pannell, D.J. and Roberts, A.M. (2010). The National Action Plan for Salinity and Water Quality: A retrospective assessment, Australian Journal of Agricultural and Resource Economics 54(4): 437-456.

Roberts, A.M., Pannell, D.J., Doole, G. and Vigiak, O. (2012). Agricultural land management strategies to reduce phosphorus loads in the Gippsland Lakes, Australia, Agricultural Systems 106(1): 11-22.

273 – Behaviour change comes in pairs

Some key factors that drive adoption of new practices come in pairs: one aspect related to the performance of the new practice, and one aspect related to how much people care about that performance. Many models of adoption miss this, including famous ones.

Whatever work or hobbies we do, there are regularly new practices coming along that we are encouraged to adopt: new technologies (e.g. a new iPhone, an auto-steer crop harvester), or different behaviours (e.g. reducing our usage of energy or water, changing the allocation of land to different crops).

The agricultural examples above reflect that some of my research is on adoption of new practices by farmers, but the issue I’m talking about today is relevant in all spheres where people adopt new practices.

It is well recognised that people vary in the personal goals that drive their choices about whether to adopt new practices that are promoted to them. Amongst commercial farmers, for example, there are differences in the emphases they give to profit, risk and environmental outcomes.

Any attempt to understand or model adoption of new practices needs to recognise the potential importance of these different goals. Many studies do include variables representing these three goals, and sometimes others.

However, it is less often recognised that there are two aspects to each of these goals when looking at a new practice:

  1. The extent to which the new practice would deliver the outcome measured by that goal: more profit, less risk, or better environmental outcomes.
  2. How much the decision maker cares about those particular outcomes.

These two aspects are closely linked. They interact to determine how attractive a new practice is, but they are distinctly different. One is not a proxy for the other.

For example, suppose a farmer is considering two potential new practices for weed control. The farmer judges that new practice A is much riskier (less reliable) than new practice B.

How much will this affect the farmer’s decision making? That depends on the farmer’s attitude to risk. For a farmer who has a strong aversion to risk, practice B will be strongly favoured, at least from the risk perspective. (Other goals will probably come into play as well.) For a farmer who doesn’t care about risk one way or the other, the difference in riskiness between practices A and B is of no consequence. Some farmers (a minority) have been found to be risk-seeking, so they would prefer practice A.

The same sort of pattern occurs with other goals as well. The attractiveness of a new practice depends on how much difference it makes to profit and on how strongly the farmer is motivated by profit. Or how much it affects the environment and how strongly the farmer cares about the environment.
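To make the distinction concrete, here is a minimal sketch in Python (not the ADOPT model itself) in which the attractiveness of a practice is treated as the sum, across goals, of the practice’s performance on each goal multiplied by the weight the decision maker places on that goal. All practice names, goal scores and weights below are hypothetical numbers chosen only to illustrate the point.

    # Illustrative sketch only (not ADOPT): attractiveness of a practice as the
    # sum over goals of (performance of the practice on that goal) weighted by
    # (how much the decision maker cares about that goal). All values are hypothetical.

    def attractiveness(performance, importance):
        """Combine a practice's performance scores with a decision maker's goal weights."""
        return sum(performance[goal] * importance.get(goal, 0.0) for goal in performance)

    # Performance of two hypothetical weed-control practices on three goals,
    # scored on an arbitrary -1 to 1 scale (higher is better, so the riskier
    # practice A gets a lower 'risk' score).
    practices = {
        "A": {"profit": 0.8, "risk": -0.6, "environment": 0.2},
        "B": {"profit": 0.6, "risk": 0.4, "environment": 0.2},
    }

    # Two hypothetical farmers who differ only in how much they care about risk.
    farmers = {
        "risk-averse": {"profit": 1.0, "risk": 1.0, "environment": 0.3},
        "risk-neutral": {"profit": 1.0, "risk": 0.0, "environment": 0.3},
    }

    for farmer, weights in farmers.items():
        for practice, scores in practices.items():
            print(f"{farmer:12s} farmer, practice {practice}: "
                  f"{attractiveness(scores, weights):+.2f}")
    # The risk-averse farmer ranks B above A; the risk-neutral farmer ranks A above B,
    # even though the two practices' performance scores are identical in both cases.

In this toy example the performance scores are held fixed; changing only the weight given to risk flips the ranking of the two practices. That is why a model needs both the performance variable and the “how much they care” variable for each goal.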

Amongst the thousands of research studies of farmer adoption of new practices, most represent only one goal-related variable where two are needed. For example, they include a measure of risk aversion, but ignore differences in the level of riskiness of the new practice amongst different adopters. Or they represent differences in the profitability of the new practice, but not differences in how much the adopters care about profit.

It doesn’t help that the issue is not recognised in common conceptual frameworks used by social scientists studying adoption behaviour, such as the Theory of Reasoned Action (Fishbein and Ajzen 1975) and the Theory of Planned Behaviour (Ajzen 1991).

It should be recognised in a sound economics framework (e.g. Abadi Ghadim and Pannell 1999 do so for risk), but it often isn’t included in the actual numerical model that is estimated.

The only framework I’ve seen that really captures this issue properly is our framework for ADOPT – the Adoption and Diffusion Outcome Prediction Tool. Hopefully this insight can diffuse to other researchers over time.

Further reading

Abadi Ghadim, A.K. and Pannell, D.J. (1999). A conceptual framework of adoption of an agricultural innovation, Agricultural Economics 21, 145-154.

Ajzen, I. (1991). The theory of planned behavior, Organizational Behavior and Human Decision Processes 50, 179-211.

Fishbein, M. and Ajzen, I. (1975). Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research. Reading, MA: Addison-Wesley.