...or so says scientist Hidde Ploegh in a Nature News column.
The whole thing is brilliant and lays out the case admirably:
Rather than reviewing what is in front of them, referees often design and demand experiments for what would be better addressed in a follow-up paper. It is also commonplace for reviewers to suggest tests that, even if concluded successfully, do not materially affect conclusions. ... This has a serious and pernicious impact on the careers of young scientists, because it is not unusual for a year to pass before a paper is accepted into a high-profile journal. As a result, PhD degrees are delayed, postdocs may have to wait an entire year to compete for jobs and assistant professors can miss out on promotions. ... The extra months of experiments increase costs for labs, without any obvious advantage for science. Although journals profit handily when prospective authors offer the best science possible, most do not spend money to produce it.
The only glaring omission here is the failure to castigate this process for generating a LOT of data that might be of use to science but will never see the light of day, once the extra person-years' worth of work is shoehorned into a single additional figure.
The good Professor offers three suggested solutions:
First, they should insist that reviewers provide a rough estimate of the anticipated extra cost (in real currency) and effort associated with experiments they request. This is not unlike what all researchers are typically asked to provide in grant applications.
This is a nonstarter to me. It just begs the question of how much additional effort and cost is too much and how much is just right. Who is going to decide this part? Obviously the reviewers who are asking for the additional experiments already have some idea of the amount of time and expense...and they've made the demands anyway. Also, reviewers will shade their budget estimates in the direction of their demands: "Oh, this should only take a month for a postdoc to work out." Yeah, right. I'm not seeing where this will help.
Second, journals should get academic editors with expertise in the subject to take a hard look at whether the requests of reviewers will affect the authors' conclusions, and whether they can be implemented without undue delay.
This was my reaction to the column up to this point. The solution is good editing. I can't tell you how many times at real journals (i.e., those focused on the science, not the chase to be first, hawt and/or sensational) the editor has shut down reviewer requests for additional experiments to be added to our manuscripts. Sometimes explicitly, by saying in the editorial part of the review "The requests for additional experiments X, Y or Z by Reviewer #3 are beyond the scope", and sometimes implicitly: "I concur that points A, B and C in the reviews are most important to address". Either way, there is a clear signal that the Editor plans to accept the paper without the additional experiments. It should not surprise you that it is the practicing-scientist class of Editor that I have found to engage in this behavior. They take an active role in policing the "level" of the journal, not only in terms of keeping out the chaff but also in terms of keeping a lid on ever-escalating demands for what it takes to publish a paper.
There is another, more cultural facet to the academic/professional editor discussion as well. I had a recent experience, as it happens, in which an editor eviscerated a manuscript we had submitted by demanding we delete a good half of the stuff in there. While it smarted to find that she did not recognize the unique brilliance of all of our offerings, I can accept this. This editor is a senior scientist in the field, has been an Editor in Chief of journals for a very long time, and I am entirely uninsulted to find her inserting herself so emphatically in the process. I would have a considerably different reaction to the professional editor class, which does not come with similar scientific stature (despite what they, hilariously, seem to think). So I'm with Professor Ploegh that this is a good change: academic editors (i.e., practicing scientist peers) over professional editors. You will be entirely unsurprised to learn that Nature editor Maxine Clarke is not impressed by this suggestion: "I think your first and third suggestions, in particular, are good ones."
Back to Ploegh:
Third, reviewers should give a simple yes or no vote on the manuscript under scrutiny, barring fatal shortcomings in logic or execution. Once editors have decided that, in principle, the results are of interest to their publication and its readership (which is their editorial prerogative), passing a simple test of logical rigour and quality of data should be enough to get them through peer review.
He's making a point consistent with his earlier observation that peer reviewers should stick to reviewing the manuscript in front of them, rather than reviewing the manuscript they think they'd like to see in the future. I concur. I think I've actually said this before in an exchange with some professional editor or other, probably Noah Gray of NPG, who tried to weasel out by insisting that editors are simply responding to the field (the peers doing the reviewing). As I noted above, academic editors have no problem telling reviewers who demand excessive numbers of additional experiments to go pound sand. There is no reason the professional editor class cannot do the same. Simply have a house rule that demands for additional experiments are grounds for rejection of the manuscript. Not "revise and resubmit" trolling...rejection. "Try again later." With the clear understanding that the present paper has been rejected and any subsequent submission had better be substantially different. Because after all, that's what the reviewer demands are saying, right? That it must be substantially different to be acceptable...
Professor Ploegh refers to a vicious cycle:
Many reviewers are also, of course, authors, who will receive such unreasonable demands in their turn, so why does the practice persist? Perhaps there is a sense of 'what goes around comes around', and scientists relish the chance to inflict their experiences on others.
So make use of this. Publish the papers that do not receive demands for additional experiments and give a hard rejection to those for which the reviews ask for lots more stuff. Since GlamourScientists are the ones doing the reviewing, they'll snap into line eventually.
Professor Ploegh ends with a comment that is going to warm the cockles of the hearts of the younger scientists:
Having read some of the biographies of the founders of molecular biology, it is hard to escape the impression that, once, the mechanics of science were indeed thus. It is worth revisiting the experiment, I should think.
The nasty way to put this is: dude, you OldTymers just walked around picking fruit up off the ground, never mind picking the low-hanging stuff, and it was a freaking Nature or Science paper. We're up against some new astronomical standard in which a whole 5-year program of research is supposed to go into each GlamourPub.
The more sober realization is that science progressed just fine in the past, when Science and Nature pubs with one or two figures in a "paper" of highly limited scope became foundational parts of our subfields. Certainly for the subfields of my own interest, when I go back to look at the original paper for something that became absolutely canonical, it is a figure or two. A much more substantive paper always followed the first observation, but that was typically in a nonGlamour journal. A field journal. Now, of course, the followup papers are less frequently published by the original group*, and less frequently published at all**. That is a shame and a loss for science.
I can't believe the NIH is not concerned that their money is being wasted with this competitive cycle being played out with their extramurally funded investigators, aided and abetted by GlamourMags with a clear profit motive.
*because, of course, being GlamourLabs, the filling in of "details" is best left for "the little people***" and they are on to the next big splash.
**because why would some other group pursue an area that the GlamourLab has the lead on? They are just going to get scooped on the next paper (see vicious recursion with *).
***Yes, that is very nearly a direct quote of the Glamour-est PI of my acquaintance.