End robo-research assessment

Some clever and thoughtful people at the American Society for Cell Biology have done us all a favor by putting in writing something that is so good and so true that I’m delighted by it. The Journal Impact Factor has gone from being a rough measure of relative journal significance to being the measure of researchers, something it was never designed for and something it does badly. The Declaration on Research Assessment (DORA) is intended as a “worldwide initiative covering all scholarly disciplines.” The basic recommendation is this:

“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions.”

Yes! Thank you. Though it's not the first time an organization has raised concerns about the misuse of formulas in assessing research, it's still very welcome.

The Declaration goes on to offer advice for funding agencies, institutions, publishers, the people who cook up these kinds of metrics, and researchers on how to assess research value in ways that have more integrity. I like this part especially: “the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.”

I also like this advice about how to switch our focus of attention from faulty metrics to what it is we are actually assessing: “consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.”

There are some intriguing hints at how we could build better ways to see the impact of a particular publication. One recommendation, “remove all reuse limitations on reference lists in research articles and make them available under the Creative Commons Public Domain Dedication” would enable some useful ways of aggregating and mining data.

I also like the final two points addressed to researchers:

“Use a range of article metrics and indicators on personal/supporting statements, as evidence of the impact of individual published articles and other research outputs.”

“Challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote and teach best practice that focuses on the value and influence of specific research outputs.”

We have put far too much emphasis on proof of productivity using blunt-force numbers. Number of publications and impact factors of journals, like robo-graders, are easier to administer at scale than actually, you know, assessing stuff honestly. So long as we can measure how many papers a person has generated, no need to read them, and with a special sauce that gives us a prestige factor, no need to make any distinctions at all. Just run the numbers.

We just had our last class period in a course I teach on finding and using information. Students reported on interviews they did with researchers. All of their subjects spoke with passion about their research. All of them talked about the importance of integrity and gave many examples of situations in which they protected the privacy of subjects, avoided bias, or went to lengths to ensure that they were not misrepresenting their findings.

Yet when it comes to evaluating the work of scholars, we’re okay with sloppily misapplying a bogus formula? How ironic is that?

I signed the declaration. You might want to, too.
