Peer review isn’t perfect – I know because I teach others how to do it and I’ve seen firsthand how it comes up short

When I teach research methods, a major focus is peer review. As a process, peer review evaluates academic papers for their quality, integrity and impact on a field, largely shaping what scientists accept as “knowledge.” By instinct, any academic follows up a new idea with the question, “Was that peer reviewed?”

Although I believe in the importance of peer review – and I review manuscripts for several academic journals – I know how vulnerable the process can be. Academics have questioned the reliability of peer review for decades, and the retraction of more than 10,000 research papers in 2023 set a new record.

I had my first encounter with the flaws in the peer review process in 2015, during my first year as a Ph.D. student in educational psychology at a large land-grant university in the Pacific Northwest.

My adviser published some of the most widely cited studies in educational research. He served on several editorial boards. Some of the most recognized journals in learning science solicited his review of new studies. One day, I knocked on his office door. He answered without getting up from his chair, a printed manuscript splayed open on his lap, and waved me in.

“Good timing,” he said. “Do you have peer review experience?”

I had served on the editorial staff for literary journals and reviewed poetry and fiction submissions, but I doubted much of that transferred to scientific peer review.

“Fantastic.” He smiled in relief. “This will be real-world learning.” He handed me the manuscript from his lap and told me to have my written review back to him in a week.

I was too embarrassed to ask how one actually does peer review, so I offered an impromptu plan based on my prior experience: “I’ll make editing comments in the margins and then write a summary about the overall quality?”

His smile faded, either because of disappointment or distraction. He began responding to an email.

“Make sure the methods are sound. The results make sense. Don’t worry about the editing.”

Ultimately, I fumbled my way through, saving my adviser the time of one more review. Afterward, I did receive good feedback and eventually became a confident peer reviewer. But at the time, I certainly was not a “peer.” I was too new to my field to evaluate methods and results, and I had not yet read enough studies to identify a surprising observation or to recognize the quality I was supposed to control. Manipulated data or subpar methods could easily have gone undetected.

Effects of bias

Knowledge is not self-evident. A survey can be designed with a problematic amount of bias, even if unintentional.

Observing a phenomenon in one context, such as an intervention that helps white middle-class children learn to read, may not yield insights into how best to teach reading to children in other demographic groups. Debates over “the science of reading” have lasted decades, with researchers arguing over constantly shifting “recommendations,” such as whether to teach phonics or the use of context cues.

A correlation – say, a student who bullies other students also plays violent video games – is not causation. We do not know whether the student became a bully because of playing violent video games. Only experts within a field are positioned to notice such distinctions, and even then, experts do not always agree on what they notice.

As individuals, we can very often be limited by our own experiences. Let’s say in my life I only see white swans. I might form the knowledge that only white swans exist. Maybe I write a manuscript about my lifetime of observations, concluding that all swans are white. I submit that manuscript to a journal, and a “peer,” someone who also has observed a lot of swans, says, “Wait a minute, I’ve seen black swans.” That peer would communicate back to me their observations so that I can refine my knowledge.

The peer plays a pivotal role in evaluating observations, with the overall goal of advancing knowledge. If the above scenario were reversed, and peer reviewers who all believed swans were white came across the first study observing a black swan, that study would receive a lot of attention as researchers scrambled to replicate the observation.

So why was a first-year graduate student standing in for an expert? Why would my review count the same as a veteran’s? One answer: The process relies almost entirely on unpaid labor.

Although peers are professionals, peer review itself is not a profession.

As a result, the same overworked scholars often receive the bulk of peer review requests. Beyond the labor inequity, a small pool of experts can narrow what gets published – and what counts as knowledge – directly threatening the diversity of perspectives and scholars.

Without a large enough reviewer pool, the process can easily fall victim to politics: a small community of scholars who recognize one another’s work, which invites conflicts of interest. Many of the issues with peer review could be addressed by professionalizing it, whether through official recognition or compensation.

Value despite challenges

Despite these challenges, I still tell my students that peer review offers the best method we have for evaluating studies and advancing knowledge. Consider the statistical phenomenon by which groups of people are more likely than any individual to arrive at the “right answer.”

In his book “The Wisdom of Crowds,” author James Surowiecki tells the story of a county fair in 1906, where fairgoers guessed the weight of an ox. Sir Francis Galton averaged the 787 guesses and arrived at 1,197 pounds. The ox weighed 1,198 pounds.
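To make that averaging effect concrete, here is a minimal Python sketch of my own (not from Surowiecki or Galton) that simulates a crowd of noisy guessers. The true weight matches the ox anecdote; the number of guessers and the size of each person’s error are illustrative assumptions.

import random

# Illustrative simulation of the crowd-averaging effect described above.
# The true weight (1,198 lbs) comes from the anecdote; the error spread
# (a standard deviation of 100 lbs per guess) is an assumed value.
random.seed(1906)
true_weight = 1198
guesses = [random.gauss(true_weight, 100) for _ in range(787)]

crowd_average = sum(guesses) / len(guesses)
print(f"Crowd average: {crowd_average:.0f} lbs (actual: {true_weight} lbs)")

Run it a few times with different seeds: individual guesses can miss by a couple of hundred pounds, yet their average typically lands within a few pounds of the true weight.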

When it comes to science and the replication of ideas, the wisdom of the many can correct for individual outliers. Fortunately, and ironically, this is how science discredited Galton’s views on eugenics, which have overshadowed his contributions to science.

As a process, peer review works in theory. The question is whether peers will get the support they need to conduct reviews effectively.

Author Bio: JT Torres is Director of the Center for Teaching and Learning at Quinnipiac University
