Following our last blog post by Nicolas Gisin on the dangers of arXiv publishing procedures, Reinhard Werner takes the discussion on science publishing practice several steps further.
In a witty and insightful reflection on the machinery of current science journals, the forces that power them and the instruments that serve them, he throws the dangers of relying on bibliometrics as a decision-making tool into stark relief.
Is watering down one’s work to make it more “accessible”, or truncating a complex theoretical argument beyond recognition to meet PRL’s four-page limit (producing examples of what Reinhard calls the Four-Page Pest), truly what makes a good scientist? Posing a number of pertinent and provocative questions, he establishes the need for pest-control measures.
Reinhard Werner, perhaps best known for his “Werner states”, is highly respected in both the mathematical physics and the quantum information communities. Having delivered groundbreaking results in the theory of entanglement and in quantum nonlocality, he is very interested, as he himself puts it, in “anything in which the structure of quantum mechanics plays a non-trivial role”. He has published widely and broadly, in PRL and in more specialized journals whose role he strongly endorses.
Let’s continue the discussion.
Why we should not think of PRL and Nature as THE top journals in physics
By Reinhard F. Werner
Let me hasten to say that Physical Review Letters (PRL) and Nature are certainly fine journals, not least because many of us send our best work there. This self-amplifying process has always been characteristic of good journals. What I am talking about here, however, is a concentration process which goes far beyond this, and which is connected to the notion of a High Impact journal. This refers, of course, to the Journal Impact Factor (JIF), which is defined as the average number of citations per article in a two-year window. It is also a product of Thomson Reuters, who sell what they claim is the “official” count, as an annual list with JIFs given to three decimal places. This claim of fake accuracy must strike every scientist as ridiculous, but practically all journals cite their JIFs (with all three decimal places) on their web sites, and thus make it part of their advertising.
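For concreteness, the standard two-year JIF of a journal for a year Y is, in essence, the following ratio (the symbols C and N are merely shorthand introduced here for illustration, not official notation):

\[
\mathrm{JIF}_Y \;=\; \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}},
\]

where \(C_Y(Y-k)\) is the number of citations received in year Y by items the journal published in year Y−k, and \(N_{Y-k}\) is the number of “citable items” it published in that year.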
PRL and Nature score highly on this scale, and I believe this is partly due to their basic journal models. For Nature this is the idea of an all-sciences journal based on original articles. While the interdisciplinary character will hardly make researchers read and appreciate papers from other disciplines, it helps the standing of Nature as a physics journal: even if no physics paper in Nature were ever read or cited, the citation-happy community of life scientists alone would raise Nature’s JIF above that of all dedicated physics journals. Hence, following JIF logic, Nature should be considered the “best” physics journal. The defining feature of PRL is speed. It grew out of the “Letters to the Editor” section of the Physical Review in 1958, and Goudsmit’s opening editorial announced that, in the interest of speed, compromises on the quality of typesetting and the suspension of external refereeing were necessary. Since then refereeing has been introduced, but speed is still a main concern. This can be seen, for example, in the adherence to the four-page length limit (references do not count towards it, which helps the JIF). With the demise of hardcopy printing this limit is no longer necessary for distribution, and indeed electronic supplements of arbitrary length are allowed. But these are not supposed to carry any essential scientific substance, and referees do not necessarily look at them: that would get in the way of speedy review. How does speed help the citation count? Long before the JIF craze, authors already needed to show that they were familiar with the current literature, for which a recent PRL citation would always be handy. But the self-amplifying process is also at work: some referees will assess the importance of a field by its coverage in High Impact journals, like the PRL referee who, rather hilariously, found the subject of one of my papers unimportant because there were not many PRLs about it.
Of course nothing is wrong with the two journal models I have described, and I repeat that these are fine journals. The problem begins when the community looks at them as THE top journals, so that good papers are adapted to these journal models just to get recognition. This is bad, because the defining features of these journals are intrinsically anti-correlated with scientific quality. For Nature this feature is “broad interest”. This is sometimes nice to have, and indeed it is good practice to try to explain cutting-edge science in media such as Scientific American (like Nature, now majority-owned by the Holtzbrinck group) or the New York Times. But for an original article the demand may be damaging. The adaptation typically begins with the title, which must be free of specialized terminology and stated in simple terms. That is, you do not try to indicate in the title what kind of experiment you did, and explain in the abstract or outlook that this has potential relevance to an aspect of quantum computation; instead you directly use a title like “Experimental Quantum Computation”. This simplified, and therefore hyped-up, style is also mandatory for the conclusions, and you would be well advised to stick to the mode of “happy science without ifs and buts” throughout the paper. For a paper in theoretical physics it is usually impossible to eliminate “technicalities” to the required degree, so this kind of “top quality” is basically denied to the whole subject.
In PRL the damaging effect is largely due to the page limit, leading to the so-called Four-Page Pest. This refers to papers whose content might have a natural length of six or ten pages, but which have been compressed to four pages, making them unintelligible. This would be no problem in the rare case of a little one-thought paper that naturally fits the format or, closer to the original plan of the journal, for breaking-news announcements that would be followed up by a full-length paper. But increasingly this no longer happens. Why bother writing a long paper when you have already harvested the high prestige with a PRL? Why make your ideas and methods understandable to the competition? Better to move on to something else. Or, as James Thurber said: “Don’t get it right, just get it written.” Again the constraint is especially harsh on theoretical papers. Some people in that field have thought that one could use the electronic supplements to give the supporting arguments or proofs for claims made in the body of a letter. Indeed this would be a good paper format, and it could make PRL attractive as a place to publish serious theory. But editorial policy seems to be against it, because letters are supposed to stand on their own, and referees typically don’t appreciate it[1]. This means that PRL-style “top quality” is largely denied to fields like argument-rich theoretical or mathematical physics, where the pudding is in the proof.
One effect of the high prestige of PRL is that it is flooded with papers. This makes it hard to find competent referees, so “broad interest” becomes an excuse for trying the paper on random referees, which has led to the characterization of PRL as the Physical Review Lottery. Of course, some learn how to play this lottery, like the colleague whom I heard boasting at a conference that he could turn anything into a PRL. I checked some of his papers afterwards and found that he was right. I do understand that the editorial office is doing its best, but with thousands of submissions per month, what can they do? It certainly means that bad reviews, i.e., those which only give an opinion on importance without any evidence that the paper was read, let alone understood, are not routinely discarded. This acts as a strong force of reversion to the mean. Or, as Nicolas Gisin once put it: “When two random referees agree on a paper, can it really be new?” Consequently, the trash level in PRL is lower than on arXiv, but not by an order of magnitude. This may change a bit with the new acceptance criteria adopted last year, which require a paper to open a new field of physics, or to present a method of pivotal future importance, or perhaps the signed commitment of three votes from a certain committee of the Swedish Academy. Even large teams of top experts find it hard to predict future scientific developments. So how good can referees possibly be at the crystal-ball reading of future importance that PRL asks them to perform? Their answers will mostly reflect current fashion, adding to the drive towards the mean. The new acceptance criteria will certainly increase the rejection rate, but that in itself does not mean better quality: it might just mean a lottery with worse odds. The new rules will probably reduce the trash level a bit, but mostly they will ensure that the remaining trash, and also the otherwise good papers, become more pretentious.
So where should we send our best results? Of course, if you have something that you can comfortably adapt to the journal profile, by all means send it to PRL or Nature. I will certainly continue to do this, if only for the benefit of my young co-authors. But don’t neglect writing full-length scientific papers[2], containing all the technical detail needed for other researchers to actually build on your results, and also to detect possible flaws. This is the kind of “peer review” that really matters to science, much more so than the idea that quality control for journals should be done by unpaid volunteers. In the old days a good rule for the choice of journal was to select one with good circulation, in which the particular debate you are contributing to has largely been conducted. That is, you should maximize the scrutiny by an audience that knows, and hopefully appreciates, what you are talking about. That rule still holds, but it must now be augmented by the demand to post everything on arXiv as soon as it is ready. There is no better circulation, and if you select and cross-list the right subject classes you get a good critical audience which is much better at recognizing quality than the vague statistical promise of “impact” conveyed by the JIF.
Obviously, journals are no longer needed for making your results publicly available. Publishing in this literal sense is what the arXiv does: instantaneously, worldwide, for free, and with a useful search interface. So-called “publishers” actually do the opposite[3]: they grab your copyrights for the sole purpose of restricting access and erecting paywalls. The only remaining function of a journal these days is to issue seals of approval. A good journal therefore is one whose approval carries weight. This is entirely defined by its editorial board and practices, often by a long-term managing editor. When I see on a publication list a paper from, say, the Journal of Statistical Physics (JSP), which was managed and shaped for decades by Joel Lebowitz, I am confident that this is a paper of some quality. The JIF is entirely irrelevant. But the specialization of JSP helps to build a profile not just in subject matter, but also in quality. Even with the good work done at the editorial office of PRL, they have little chance of achieving this. Indeed, when I see a PRL, I only know that someone was successful at the lottery, possibly on the grounds of merit, but quite possibly not. Moreover, when I see a high ratio of PRLs to full papers, I know that this person may be spending more time playing the lottery and turning out short announcements than doing science, fulfilling the promises announced, and working out a larger picture. If we want to hire a new professor, it is important that this person will be able to define his or her own research agenda in the future, and to speak to a variety of sub-communities, at levels from general audience to specialists. So variety in topics and also in journals is important. A single-topic applicant, even if currently highly cited and with nothing but high-JIF publications, is out. I recently saw an application in which the High Impact publications were listed separately from the rest. Whether the applicant was thereby showing what was important to him, or expressing his expectation that the committee would consist of JIF-counting robots, it did not help his case.
So who is actually promoting the silly identification of journal quality with the JIF? Apart from Thomson Reuters, the large journal companies (the Holtzbrinck Group, Elsevier, Wiley and a few others) are especially fond of it. They use it to justify huge differences in subscription prices, which incidentally show what their profit margins over production costs really are. High-JIF flagship journals also help to sell package deals, because supposedly you couldn’t possibly cancel one of those subscriptions. It is an extremely profitable business, in which those doing the essential and highly qualified work, i.e., authors, academic editors, and referees, are in any case paid by the taxpayer[4]. The taxpayer also has to foot the absurdly inflated subscription bills[5]. As if this were not bad enough, journals have found a new way to milk science budgets and authors, namely selling open access to some individual articles in a subscription-financed journal, of course without lowering subscription prices. This is the rip-off model of open access. In any case, “publishers” do very little to recommend themselves as partners in the development of the new scientific information infrastructure utilizing the possibilities of the internet, which is bound to emerge eventually[6].
Strong support for JIF-based assessments also comes from administrators, who are naturally fond of a criterion for hiring and firing that they can apply without collecting any expert opinions, and without having to know anything about content. Politicians from countries like China and Poland in particular, who wish to show that they are playing in the premier science league, apparently put all the weight on this criterion. But it is too cheap to blame administrators here – they may well be acting in good faith. All too often they can see that the scientists themselves apply these criteria. After all, enough scientists seem to fancy the JIF as an “objective” criterion, as somehow more “scientific” than judgement by humans who understand content. I have mostly heard this from young and productive people working at a place where the big chiefs are some older guys who hardly publish internationally. But could one not make this valid complaint without resorting to dubious criteria?
The JIF is just the most idiotic tip of the iceberg called Bibliometrics, the counting of citations for assessment purposes. Bibliometrics is generally a bad idea, and it fails miserably at identifying the best papers. It is a pseudoscience, even if Elsevier and Springer have each devoted a journal to it. The latest “improvement” from this quarter is altmetrics, which seeks to collect activity from social networks and news coverage. Of course, it may be nice to know how many people are tweeting about you, but the aim is, once again, assessment. The new judges of scientific quality are those who spend too much time clicking and tweeting, and the obvious way to come out on top is to make a silly claim that enrages as many people as possible.
If we do not put up some resistance, we will find ourselves in a kind of science in which the quest for knowledge is replaced by the quest for PRLs and Nature papers, for citations, and for Facebook likes. I think we should make a start by resisting JIF-based decisions wherever we can, and by looking beyond the high-impact section in every publication list.
[1] I have had varying referee reactions to proofs in the supplement. Most will ignore them (as they would anyhow mostly ignore anything labelled “proof”). Some demand that letter and supplement be integrated into a full-length paper (thus proposing a “downgrade” to PRA), and some demand that full proofs be given in the supplement (“since they apparently already exist”). I did not manage to get a clarification from the editors. So it seems editorial policy is also made at random by the referees, adding to the volatility of the process.
[2] To be fair, the PRL guidelines say under “Presentation” that “When appropriate, a Letter should be followed by a more extensive report in the Physical Review or elsewhere”. But the relative value of, say, PRL over PRA is made very clear to everyone by the downgrading process of referring good papers (which are maybe not of sufficiently “broad interest”) to PRA. I am waiting to see a standard button on the PRL website for each paper, pointing to the full version or else confirming that the authors had nothing more to say about the “new method of pivotal future importance” or the “new area of research”.
[3] In German, don’t call them “Herausgeber” (literally, those who give something out), call them “Zurückhalter” (those who hold it back).
[4] The two journals I have been talking about are actually unusual in that they employ professional in-house editors, so only 95% of the work is at the taxpayer’s expense.
[5] Many colleagues have made the point that this criticism of commercial publishers does not apply to the Physical Review, which is run by the physics community, more precisely by the American Physical Society. This is partly true. Their pricing is a bit more moderate. But taking copyright and erecting paywalls is part of their business model as well, and physicists from the rest of the world may not be entirely happy with supporting the APS through the profits from the journal operation.
[6] See, for example, this recently launched journal, which does not ship manuscripts around, but is an overlay to the arXiv. Their selection process is otherwise pretty standard. If you want to think about new ways, a good starting point is Michael Nielsen’s blog or his book “Reinventing Discovery”.