Yes, that’s right: the idea of measuring the “impact” of research is back in a big way. Within the research community and within government, plenty of people are thinking about this in 2012.
As many have acknowledged, the Federal Government’s current Excellence in Research for Australia (ERA) initiative provides a strong evaluation of the quality of the research conducted in Australian universities, but tells us little about the impacts of this research in the broader community.
The government’s 2011 review, Focusing Australia’s Publicly Funded Research, recommended a feasibility study be undertaken by the Department of Industry, Innovation, Science, Research and Tertiary Education on “possible approaches for developing a rigorous, transparent, system wide Australian research impact assessment mechanism”.
This will build upon work already underway across the university sector and in CSIRO.
Making more of an effort to understand how research interacts with the broader community is – to state our opinion up front – A Good Thing. It promotes thinking about the outside world: encouraging engagement beyond a particular academic discipline, awareness of the interests of the people actually funding our work, and attention to the issues they might deem important.
It also focuses effort on clearly articulating the many ways in which our investments in research deliver benefits for society.
Yet perhaps in this nascent discussion about impact we have put the cart before the horse. Perhaps we have allowed the conversation to get away from us before we’ve had a chance to think through what it is we actually want to achieve in our governance of the Australian research system, and what we want to measure and reward. When it comes down to it, is “impact” even the right word?
“Impact” sounds like a concept from the world of physics – a scientisation of the very language we might use to talk about research and its place in society. “Impact” seems to denote a process that can be rational, can be measured – where bigger would equal better.
It also seems to describe a singular effect from research activity – someone does lots of work, and then there is an impact. Bang. Done.
But isn’t the age of linear cause and effect supposed to be over? Aren’t we supposed to be living in a more complicated, more contingent age of overlapping fields, where innovation happens at the boundaries?
To talk of “impact” in a singular, physical way is to slip back to a simple linear model of research and innovation. The dominant measures of the “impact” of research and innovation – dollars, people, publications and patents – still reinforce this model.
The problem is, decades of research on research and innovation have shown that the process is neither this simple nor this linear.
And, of course, impact isn’t either. Research is part of, and contributes to, the complicated and overlapping worlds of human affairs. It shapes, and is shaped by, broader society. The tentacles of impact stretch into the past and far off into the future.
Which is not to say impact cannot be measured at all. We believe there are many opportunities to enhance the metrics of research and innovation, and that this is important work – it is crucial that individual researchers, research organisations and governments are engaged in the discussion.
But there are two key points – often overlooked – that must frame how this work progresses.
1) The new knowledge and new tools that stem from research do not create a singular, one-off “impact”. Research activity leads to multiple impacts in different locations and at different times.
2) Some of these impacts will be seen as positive by certain people in certain places and times, while others will be seen as neutral or even negative.
The word “impact” itself contains no normative assessment, yet many seem to be using it as a synonym for benefit. If we are going to assess research impact systematically, we will need to start to account for multiple impacts.
Consider the impact of the development of the cochlear implant. Hundreds of thousands of people have gained the ability to hear, living lives that are (probably) easier and (possibly) richer. How would we measure this?
Much money has been made, and many jobs created. Simultaneously, many in the deaf community have come to see the technology as a form of “cultural genocide”. Should this be taken into account when assessing impact?
Researchers have also studied the introduction of new agricultural technologies, such as the tomato harvester, and their social, economic and environmental impacts.
While productivity and profitability rose with the introduction of certain technologies, the gains were accompanied by job losses among certain classes of workers, and by the restructuring of farm holdings, workforce gender roles and regional communities.
All of this raises important questions of accountability. Individual researchers would rightly be nervous about being measured and rewarded against such broad, long-term impacts, over which they have little or no control. So who should be held accountable for what?
If we are seeking to improve our assessment of the impacts of research in the wider community, what is the role for researchers and research organisations, and what is the role for government and the public?
Perhaps we should start by not jumping straight to “impact”. The process is not simply linear, but things do happen between research and its societal impacts, and perhaps these are what we should start to talk about and measure more.
Things such as “engagement”, “use”, “relevance” and “appropriateness”. We need to pair the quantitative with the qualitative as we seek to better understand impacts, and develop new measures of engagement and use that go beyond our current – largely scientific and economic – metrics.
It might prove difficult, or even impossible, to answer the question about the full, long-term impacts of a particular piece of research, but it’s important that questions are being asked.
If we stop looking for one single big answer and focus instead on smaller steps along the way, there is a lot that can be done.