Plan S is clear: science must be public, and publicly funded research must be accessible to anyone. Like many colleagues, I am keen to see this happen.
How to make it happen is, however, a different story. In an effort to liberalise the market, Plan S asked the publishers to disclose the price for open access publication of an article. Once the market becomes transparent, it is assumed, competition and new business models should bring prices down.
So, recently, Springer Nature has candidly responded – just give us US$9,500 per open access article. “My brother bought a used car for $10K. What a fool, he could have been published open access on Nature” was probably the best response I saw putting things in perspective.
My problem is not even Springer’s selling price. As a for-profit company, their goal is to squeeze as much money out of people and organisations as they can.
My problem is that many of us are ready to buy it and may even consider it a fair price. OK, Springer journals are the iPhone of academia but for someone who runs an open access journal on a budget of less than $5K a year (a refurbished old phone, in comparison), that’s quite some money. The same can be said for academics from countries where $10K is their annual salary.
Unsurprisingly, Open Science is moving at two speeds, with scholars based in Austria, the UK, the Netherlands, Sweden and the like being much more likely to publish open access than the rest of the world.
All this at a time when scholars are encouraged to publish more. Ironically (or not), the profit margins of commercial publishers are similar to those declared by Apple, ranging around 30-40%. So, exactly as with iPhones, those who can afford it will say “Yes, it’s expensive. But we can’t wait to get more of that”.
The hidden variable
Are there alternatives? The Open Library of Humanities pools article production costs to reduce them and make them affordable through membership fees. This is economically viable but depends on who agrees to participate.
In the humanities, economic interests are lower but for medicine or life sciences, if the top journals in your field are not available, would you send your manuscript to a journal considered “good” knowing that it might not be taken into account towards your annual evaluation? Or knowing that someone might get the grant you applied for because they had papers in more prestigious journals?
In Estonia and Finland, for instance, core funding to public universities depends on the number of “high quality” articles (and books) produced by that university over a given period. Here “high quality” merely refers to whether a journal is included in Scopus, Web of Science or ERIH PLUS.
In a similar fashion, some donors may pledge to ignore the Impact Factor and look at the quality of an applicant’s publications. But how many reviewers would give a higher score to a candidate with publications in lower Impact Factor journals over someone publishing only in top journals?
If you are Krugman or Fukuyama, you can publish on a restaurant serviette and people will still read you. Until you get there, though, your value is defined (to some extent) by the journals you publish in. Early career researchers thus domesticate themselves into targeting the most prestigious journals.
But what is prestige if not the perception of a certain percentage of the people who will assume your article is good simply for having been published there? Prestige does not necessarily mean quality. On the contrary, in some cases, more prestigious journals have higher article retraction rates. But prestige is what counts in academia and it is distributed unevenly.
The cracked tank
If the monopoly of prestige stays with the usual journals, there is no way for emerging alternatives to compete. Not in the short term, at least. So, Plan S sounds to me like “boss, there’s a crack in the tank, shall I keep on pouring to keep water at a decent level?”.
No. In my view, you should replace the tank.
In New Zealand, the estimated spending on journal subscriptions for 2016 was between US$30 and $45 million. This means $3-4.5K each for the 10,000 academic staff employed in the country. In Canada, for the same year, the amount is $260 million, or $5.5K per staff member on a population of 45,666 scientists.
If all journals were managed by universities (the university press model), a fraction of the money paid out would return to the university, thus reducing expenses for article access. A recent model proposed by the platform F1000Research is based on pre-review publication, a transparent review process, relatively low publishing costs and open access to all published articles. Things seem to be going in the right direction, but the platform must still co-exist with the more prestigious journals and their exploitative strategies, since they retain the monopoly on academic prestige. An even more radical step would be to completely ditch academic journals and decentralise everything so that:
- Authors identify reviewers and ask them for feedback.
- Peer review is open, names of reviewers function as endorsement (like “Dr X has reviewed the paper and, after the requested amendments, thinks it is of publishable quality”).
- Authors copy-edit and typeset using free templates.
- Authors publish the paper under a Creative Commons licence and upload it to as many repositories as they want: their university’s, a disciplinary one, a regional one.
Researchers are already too busy, I was told. I agree, but the money to support them is there. No subscriptions means that $5.5K (in Canada) is saved per researcher, per year. Pool this for a whole department, say 20 faculty, and you have over $100K to hire a part-time assistant, a copy editor and even (drum roll, please) to pay reviewers. Pool this for multi-authored papers and you have even more money available.
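The pooling arithmetic above can be checked with a back-of-envelope calculation; the per-researcher saving and the department size are the article’s own estimates, not verified data:

```python
# Illustrative check of the pooled-savings estimate (figures from the text).
ANNUAL_SAVING_PER_RESEARCHER = 5_500  # USD per year, the Canadian estimate cited above
DEPARTMENT_SIZE = 20                  # faculty members pooling their savings

pooled = ANNUAL_SAVING_PER_RESEARCHER * DEPARTMENT_SIZE
print(f"Pooled annual savings for a {DEPARTMENT_SIZE}-person department: ${pooled:,}")
# → Pooled annual savings for a 20-person department: $110,000
```

That $110K a year is the figure behind the claim that a department could fund a part-time assistant, a copy editor and paid reviewers out of cancelled subscriptions alone.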
Bonus idea: limit the number of papers that can be submitted for career and department assessment to 1-3 per year (depending on the discipline) so that you need to submit the most representative piece of your work, not roll out the zillion papers you have produced by recycling your material with different sauces. The rest could come in the form of shorter pieces or research reports.
This would lead to scholars being assessed by committees that have the time to read their papers, rather than basing their evaluation on metrics.
Sounds too far from reality? Well, most of these elements already exist; they are just never pulled together into a single system. In astronomy, self-archiving co-exists with official versions of articles. Some universities have already stepped away from quantitative evaluations, releasing the pressure on their staff.
Can we survive without metrics?
Getting rid of metrics is scary: it sounds as if anarchy will prevail and we will no longer have points of reference. But aren’t academics, like blindfolded sommeliers, supposed to have the expertise to identify the authorities in their field without relying on metrics? I rarely hear anyone say “they are good because they have many publications”. It is rather “they are good and, besides, they have many publications”.
Metrics have been used to translate scientific results into “objective” figures intelligible to non-academics, but they are far from objective. Could you really consider a harasser with an h-index of 45 “better” than someone collaborative and helpful, with an excellent ethical reputation, but an h-index of 10?
Disruptive change is hard and requires plenty of effort, negotiation, and thinking. Pouring money into the system is easier, especially for wealthier countries – and the scientists who have the influence and power to raise their voices against the current system and endorse viable, cheaper open access alternatives are mostly also those who can access money to pay article processing charges (APCs).
Author Bio: Abel Polese is a researcher, trainer, writer, manager and fundraiser.