Imagine that, having polished a dissertation for publication or finished a second or later book, the social-science scholar sends the typescript to an independent Review Institute. The institute determines a list of five to 10 scholars worldwide who are best placed to evaluate the work, taking into account both those experts cited in it and others who, though prominent in the field, may have a different take on the subject. For a fee like the one publishers now pay outside readers, each evaluator writes a two-page appraisal of the work, avoiding any summary and dealing only with its qualities. Numbers are also assigned on a uniform scale over a range of areas: Quality of the empirical base? How well written? Novel or familiar ground? Advanced or introductory readership? Balanced or polemical? And the like.
Means, medians, and various weightings indicate whether outlying ratings have skewed the results and identify those reviewers whose ratings are consistently high or low, possibly discounting them. Reviewers could be asked to review one another, too, flushing out bias. The written evaluations are signed, and the numbers assigned by each made public. The outcome would be a sophisticated version of Michelin or Parker, with much finer nuance, full transparency, and a reasonable ability to counteract idiosyncrasy and prejudice. (An alternative might instead use a more Zagat-like approach, throwing open the review process to a wider selection of critics.) The author would be allowed a reply, also published, and an appeals mechanism would be available to those who felt they had been seriously wronged. At the risk of some simplification, the various ratings might even be combined into a single number.
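The aggregation described above can be sketched in a few lines. This is a minimal illustration, not a proposal for the institute's actual formula: the reviewer names, the 1-to-10 scale, and the spread cutoff are all hypothetical assumptions.

```python
from statistics import mean, median

# Hypothetical 1-10 ratings for one manuscript from five reviewers.
scores = {"rev_a": 8, "rev_b": 7, "rev_c": 9, "rev_d": 3, "rev_e": 8}

def robust_summary(ratings, spread_cutoff=2.5):
    """Report mean and median, flag outlying ratings, and discount them."""
    vals = list(ratings.values())
    med = median(vals)
    # Reviewers whose rating sits far from the median are flagged as outliers.
    outliers = {r: v for r, v in ratings.items() if abs(v - med) > spread_cutoff}
    trimmed = [v for v in vals if abs(v - med) <= spread_cutoff]
    return {
        "mean": mean(vals),
        "median": med,
        "outliers": outliers,              # candidates for discounting
        "discounted_mean": mean(trimmed),  # mean with outliers dropped
    }

summary = robust_summary(scores)
print(summary)
```

Here the gap between the raw mean (7.0) and the discounted mean (8.0) is exactly the signal the essay describes: it shows whether a single outlying reviewer has skewed the result, while the published per-reviewer numbers keep the discounting transparent.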
Now the fun begins. For, armed with this mechanism of evaluation, we could at one stroke eliminate three demons that afflict social-science and humanities publishing. First to disappear would be the publish-or-perish dilemma that grows ever more acute as presses spurn academic monographs and younger colleagues find it ever harder to be promoted on the basis of criteria still stubbornly wedded to bound books. Second, we could remove publishers from their surreptitious role as academic gatekeeper. And most important, we could once and for all eliminate physical publication as a bottleneck between scholarship and readership.
Overall, the issue is how to achieve open access for new research. For the article-driven fields, the discussion is already avidly under way, but few dare extend the principle to monographs. The web waits like a global petri dish, brimming with growth medium, but the spores are elsewhere. Why? Because tenure and promotion in the social sciences still require an increasingly pointless detour through paper and binding. Young scholars are effectively told: Drop your work into a black hole where it can be seen only by those who can afford the three-figure price of the average Routledge monograph or who enjoy lending privileges from a major research-university library. In our fields, publication is effectively privatization.
On the other hand, if everything is just published on the web, data will hopelessly inundate us. How can we both free and evaluate works of social science? The harder sciences, with their expanding e-journals, are coming to grips with the dilemma. Scholars in fields still loyal to the monograph must now also rise to the challenge.
However startling this may seem to social-science and humanities professors, it is sobering to consider that a Review Institute would at best catch us up to what the hard sciences have already accomplished, though mainly by informal practice and efficient networking.
Physics journals, for example, exist largely for the future historians of science. Physicists themselves have already read all they want of a given article in its preprint versions, circulated on the web (the best-known site is arXiv.org). Formal publication is an afterthought. Much the same holds true for economics.
Even the smaller of the softer fields are ahead of the rest. As a philosophy major in the 1970s, I read Saul Kripke in samizdat—blurry photocopies of lecture notes. Then, finally, Naming and Necessity came out, in 1980. I have a copy but have never looked at it.
Now, however, the web makes everything potentially available to everyone—as long as you don’t insist on publication in the old-fashioned sense. Paper-and-binding publication today actually keeps works out of readers’ hands. Anything can be up on the web, but only a small fraction between covers. That is the problem. Publication no longer serves to disseminate, since that is done better on the web. For the monograph-driven fields, publication has (except for some copy-editing) therefore been beaten back to its last function: a proxy evaluation of the work.
In a situation where a million texts clamor for attention, vetting, not dissemination, is the crux. As that role continues to be outsourced to publishers, inefficiencies arise. Of course, the academic world is roped back in through the back door, serving as readers for the presses. Each manuscript gets at least a couple. A typescript, which often makes its way among two or three publishers before coming to rest, may thus easily get a dozen readings, including those in-house. To that come—if the published book is lucky and not too obscure—another dozen reviews over the years in scholarly journals.
The quality of the press becomes a proxy for the value of the work. A university president once told me of how a history-department tenure case had been sustained because the book in question was finally accepted by a certain publisher. Another case hung on the much-debated question of whether OUP India is of the same quality as its sister presses. Our collective laziness keeps us from doing the work that the presses supposedly undertake instead.
The Review Institute’s ranking of a manuscript would spare us the intellectual inefficiency of multiple re-readings and re-evaluations, while still letting us enjoy the bounty of our harvest. Once a manuscript has received its evaluation, the presses can decide which they want to bring out on cellulose, bidding by auction for high scorers and performing their editorial magic. The same principle could easily be extended to articles, allowing a single vetting process, with each article published by journals ranked in their approximate scholarly pecking order, rather than today’s round robin of resubmissions.
But equally, and entirely independently, universities could now decide whom they want to hire and whom to promote without the slightest regard for what the publishers are up to. Publication and vetting will have parted ways. The new slogan for upward academic mobility would be “produce or perish.” That at least is not dependent on the vagaries of what the publishers think the book-buying public will absorb or how ruthlessly library budgets have been slashed.
But who will pay? The Review Institute promises huge savings. Each publisher interested in knowing how a work ranks among the experts would contribute. Almost 300,000 new titles are published annually in the United States, an estimated half of them scholarly; with a post-review rejection rate of 50 percent, some 300,000 manuscripts a year would need vetting, and at a minimum cost of $1,000 per manuscript in readers' fees and similar, that is not far shy of a third of a billion dollars. Considering the countless hours saved in hiring, promotion-and-tenure committees, and the like, universities too would be drawn to participate.
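The back-of-the-envelope arithmetic behind that figure can be spelled out step by step. Every number below is the essay's own stated assumption, not independent data:

```python
# The essay's assumptions, stated as inputs.
new_titles_per_year = 300_000   # new U.S. titles annually
scholarly_share = 0.5           # an estimated half are scholarly
post_review_rejection = 0.5     # half of reviewed manuscripts never appear
cost_per_review = 1_000         # minimum readers' fees per manuscript, in dollars

# Published scholarly titles per year.
published_scholarly = new_titles_per_year * scholarly_share  # 150,000

# If half of reviewed manuscripts are rejected, twice as many
# manuscripts must be vetted as are eventually published.
manuscripts_reviewed = published_scholarly / (1 - post_review_rejection)  # 300,000

total_cost = manuscripts_reviewed * cost_per_review
print(f"${total_cost:,.0f}")  # $300,000,000, not far shy of a third of a billion
```

The rejection rate is what doubles the bill: the institute must pay to evaluate the manuscripts it turns away as well as those it passes.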
Americans spent $460-billion on higher education in 2009. Supporting a Review Institute would require a relative pittance from colleges. They already pay for post-publication data sifting that is used for evaluation and promotion procedures, so why not for an even more efficient pre-publication review? If Thomson Reuters, publishers of the Social Science Citation Index, can make money selling a service that measures precisely how many times and where your groundbreaking article on the dietary habits of peasants in the Haute-Vienne under Napoleon III was cited, surely academe will understand and invest in something like the Review Institute.
And that brings us to the best part of the proposal. Cutting the tie between publication and evaluation means that, thanks to the web's practically limitless availability, every work, having passed through the Review Institute, can go straight to its intended audience, and anyone else in the world who is interested. Reader and work will be brought together in a way now prevented by the plethora of undigested and unevaluated material through which we wallow on the web. The need to calibrate supply with demand will vanish once the publishers are taken out of the purely academic information loop. Like the Fed for the banking system, the web will in effect become the publisher of last resort, adjusting its take-up to ensure constant scholarly liquidity. Then supply will reflect only our productivity, no longer the artificial constraints of demand, and the problem will instead be sorting the good from the bad.
Publishing was yesterday's problem; vetting is tomorrow's.
Author Bio: Peter Baldwin is a professor of history at the University of California at Los Angeles.