Communication, Policy, Research

134 – Why is it hard for research to influence policy?

Some research, probably a minority, is conducted with the serious intent of supporting or influencing policy. Within that subset of research, only a minority succeeds in its policy intent. Why is that?

I spend a fair bit of time talking to people involved in government policy at different levels, trying to encourage the use of approaches to decision making that will address questions such as: Will this policy really achieve the desired outcomes? Will the desired behaviour change really occur in response to the planned policy mechanisms? Of the things that money could be spent on in a particular program, which will give the most worthwhile results for the dollars spent?

They seem like common-sense questions that should be of interest to anybody involved, but it often seems extremely difficult to have them taken seriously. Why is that?

There is actually a fair bit of commentary in the research literature about the difficulties that researchers have in influencing policy. A broad range of reasons for poor uptake of research results by policy makers has been identified, including the following.

Non-scientific considerations matter. Sometimes scientific information may be known to policy makers, but the decision reached may still appear to be inconsistent with the science. This may or may not be a concern. It may simply reflect that policy makers and managers consider additional factors, such as legal mandates, societal desires, economic benefits and costs, rights, distributional equity and procedural fairness.

Hidden agendas. There may be political or bureaucratic objectives unrelated to the public interest, so that research that seeks to advance the public interest is not wanted.

Policy fashions and crises. Policy attention tends to be directed to certain issues with high currency, often issues where there is a perceived crisis (e.g. see PD#123), and this may leave little scope for research (or any other input) to influence policy in an area that is not currently high on the agenda. We observed this occurring in Australia around 2000 with land and water degradation due to salinity. Now salinity is no longer perceived as a crisis, and policy attention has shifted to another area of perceived crisis: climate change. The fall of salinity as a policy issue is remarkable. At one point it dominated all discussions about the environment in Australia, and now it seems to be hardly mentioned outside Western Australia. Even when an issue is perceived to be in crisis, it doesn’t necessarily follow that research results will be noticed and acted on. For example, research that challenges preconceptions will tend to be dismissed, especially when there is a lot of momentum behind a particular view of the issue.

Timing issues. “Information must be timely to be useful” (Jacobs, 2002, p.9). “Policy generally moves faster than science, and the capacity of science to provide information may require more time than policy makers are willing to accept, especially for politically hot issues” (Clark et al., 1998, p.9).

Difficulty getting access. Policy makers often rely primarily on locally based and trusted experts with whom they are familiar. This is understandable, given the flood of information that policy makers can face on some issues, but it does not ensure that the most appropriate information is used, and it can make it very hard for new researchers or outside researchers to be heard.

Distrust. There may be suspicion about the motivations of scientists, so that they are treated as just another interest or lobby group. The growth of public advocacy by some high-profile scientists feeds these suspicions. Even without overt advocacy, some environmental scientists tend to intertwine facts and values, and this affects the perceived independence of their scientific advice.

Incentives facing researchers. It may be that scientists are not conducting research that is relevant to policy, or not making efforts to make their science known to policy makers. “University reward systems rarely recognize inter-disciplinary work, outreach efforts, and publications outside of academic journals, which limits the incentives for academics to participate in real-world problem solving and collaborative efforts” (Jacobs, 2002, p.14). This probably helps to explain the observed lack of so-called “boundary spanners”, these being individuals who can link the worlds of science and management and translate the concerns of one to members of the other (Clark et al., 1998).

Communication problems. Research findings may be communicated in ways that policy makers cannot understand, using jargon, technical language, or mathematics. Conversely, policy makers have an extensive jargon of their own, and may have trouble expressing their information needs in ways that researchers can respond to.

Lack of expertise. In some situations there is rapid turnover or movement of staff in government policy agencies, leading to a lack of expertise among responsible staff and limited knowledge of the relevant science and scientists. A culture may develop in government agencies in which detailed subject knowledge is seen as unnecessary.

I believe that understanding of the adoption of new policy ideas by policy makers can be enhanced by knowledge of the research literature on adoption of innovations (e.g. Pannell et al. 2006). Here are some generalisations from that literature that are relevant to the policy sphere:

  1. Most potential adopters considering an innovation are sensibly cautious. They do not rush in, but seek information to improve their eventual decision about the innovation.
  2. Where decision makers do not have personal experience with an innovation, they rely to some extent on external sources of information. As decision makers gain personal experience, this tends to have a dominant influence on their perceptions and their actual behaviour.
  3. External sources of information are given more or less weight depending on factors such as the expertise and credibility of the information source, the relevance of the external information to the decision maker’s circumstances, and the number of external sources reinforcing the message with similar information.
  4. Apparently misguided decisions to adopt or not adopt an innovation can often be easily understood and seen as reasonable if one makes the effort to learn about the objectives and perceptions of the individual decision makers involved.
  5. Many factors influence the speed of adoption of an innovation. Key ones include: the extent to which adopting the innovation is superior to maintaining existing practice; the ease with which the innovation can be observed and evaluated; the number of other potential adopters who have already adopted it; and the intensity and quality of efforts to promote the innovation.

David Pannell, The University of Western Australia

Further Reading

Pannell, D.J. and Roberts, A.M. (2008). Conducting and delivering integrated research to influence land-use policy: an Australian case study, INFFER Working Paper 0803, University of Western Australia, Perth (submitted to Environmental Science and Policy).

Pannell, D.J., Marshall, G.R., Barr, N., Curtis, A., Vanclay, F. and Wilkinson, R. (2006). Understanding and promoting adoption of conservation practices by rural landholders. Australian Journal of Experimental Agriculture 46(11): 1407-1424. If you or your organisation subscribes to the Australian Journal of Experimental Agriculture, you can access the paper at: http://www.publish.csiro.au/nid/72/paper/EA05037.htm (or non-subscribers can buy a copy on-line for A$25). Otherwise, email David.Pannell@uwa.edu.au to ask for a copy.

Clark, R.N., Meidinger, E.E., Miller, G., Rayner, J., Layseca, M., Monreal, S., Fernandez, J. and Shannon, M.A. (1998). Integrating science and policy in natural resource management: lessons and opportunities from North America. Gen. Tech. Rep. PNW-GTR-441. Portland, OR: U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station. 22 p.

Jacobs, K. (2002). Connecting science, policy and decision-making: a handbook for researchers and science agencies. NOAA Office of Global Programs, Boulder, Colorado.

Acknowledgement: The article draws on various published papers, particularly those by Sue Briggs, Michael Carolan, Lonnie King and those by Clark et al. and Jacobs cited above. See the paper by Pannell and Roberts (2008) for full details.