In the 1800s, British colonists in India set about trying to reduce the cobra population, which was making life and trade very difficult in Delhi. They began to pay a bounty for dead cobras. The strategy very quickly resulted in the widespread breeding of cobras for cash.
This danger of unintended consequences is sometimes referred to as the “cobra effect”. It can also be summed up by Goodhart’s Law, named after the British economist Charles Goodhart, who observed that when a measure becomes a target, it ceases to be a good measure.
The cobra effect has taken root in the world of research. The “publish or perish” culture, which values publications and citations above all, has resulted in its own myriad of “cobra breeding programmes”. These include the widespread use of questionable research practices, such as playing up the impact of research findings to make work more attractive to publishers.
It’s also led to the rise of paper mills: criminal organisations that sell academic authorship. A report on the subject describes paper mills as the “process by which manufactured manuscripts are submitted to a journal for a fee on behalf of researchers with the purpose of providing an easy publication for them, or to offer authorship for sale”.
These fake papers have serious consequences for research and its impact on society. Not all fake papers are retracted, and even those that are retracted often still make their way into systematic literature reviews, which are, in turn, used to draw up policy guidelines, clinical guidelines and funding agendas.
How paper mills work
Paper mills rely on the desperation of researchers – often young, often overworked, often on the peripheries of academia and struggling to overcome the high barriers to entry – to fuel their business model.
They are frighteningly successful. The website of one such company, based in Latvia, advertises the publication of more than 12,650 articles since its launch in 2012. In an analysis of just two journals, jointly conducted by the Committee on Publication Ethics and the International Association of Scientific, Technical and Medical Publishers, more than half of the 3,440 article submissions over a two-year period were found to be fake.
It is estimated that all journals, irrespective of discipline, are receiving a steeply rising number of fake paper submissions. Currently the rate is about 2%. That may sound small. But with several million scholarly articles published each year, even a small percentage amounts to tens of thousands of fake papers. Each of these can seriously harm patients, society or the environment when its findings are applied in practice.
The fight against fake papers
Many individuals and organisations are fighting back against paper mills.
The scientific community is lucky enough to have several “fake paper detectives” who volunteer their time to root out fake papers from the literature. Elisabeth Bik, for instance, is a Dutch microbiologist turned science integrity consultant who dedicates much of her time to searching the biomedical literature for manipulated photographic images and plagiarised text. There are others doing this work, too.
Organisations such as PubPeer and Retraction Watch also play vital roles in flagging fake papers and pressuring publishers to retract them.
These and other initiatives, like the STM Integrity Hub and United2Act, in which publishers collaborate with other stakeholders, are trying to make a difference.
But this is a deeply ingrained problem. Generative artificial intelligence tools like ChatGPT will help the detectives – but they will also likely result in more fake papers, which are now easier to produce and more difficult, or even impossible, to detect.
Stop paying for dead cobras
The key to changing this culture is a shift in how researchers are assessed.
Researchers must be acknowledged and rewarded for responsible research practices: a focus on transparency and accountability, high-quality teaching, good supervision and excellent peer review. This will extend the scope of activities that yield “career points” and shift the emphasis of assessment from quantity to quality.
Fortunately, several initiatives and strategies already exist to focus on a balanced set of performance indicators that matter. The San Francisco Declaration on Research Assessment, established in 2012, calls on the research community to recognise and reward a range of research outputs beyond publications alone. The Hong Kong Principles, formulated and endorsed at the 6th World Conference on Research Integrity in 2019, encourage research evaluations that incentivise responsible research practices while minimising the perverse incentives that drive practices like purchasing authorship or falsifying data.
These issues, as well as others related to protecting the integrity of research and building trust in it, will also be discussed at the 8th World Conference on Research Integrity in Athens, Greece, in June this year.
Openness
Practices under the umbrella of “Open Science” will be pivotal in making the research process more transparent and researchers more accountable. Open Science is the collective term for a movement of initiatives to make scholarly research more transparent and equitable, ranging from open access publication to citizen science.
Open Methods, for example, involves pre-registering the essential features of a study design before the study begins. A registered report containing the introduction and methods sections is submitted to a journal before data collection starts, and is accepted or rejected based on the relevance of the research question and the strength of the methodology.
The added benefit of a registered report is that reviewer feedback on the methodology can still change the study methods, as the data collection hasn’t started. Research can then begin without pressure to achieve positive results, removing the incentive to tweak or falsify data.
Peer review
Peer reviewers are an important line of defence against the publication of fatally flawed or fake papers. In this system, quality assurance of a paper is done on a completely voluntary and often anonymous basis by an expert in the relevant field or subject.
However, the person doing the review work receives no credit or reward. It’s crucial that this sort of “invisible” work in academia be recognised, celebrated and included among the criteria for promotion. This can contribute substantially to detecting questionable research practices (or worse) before publication.
It will incentivise good peer review, so fewer suspect articles pass through the process, and it will also open more paths to success in academia – thus breaking up the toxic publish-or-perish culture.
Author Bio: Lex Bouter is Professor of Methodology and Integrity at Vrije Universiteit Amsterdam