The perennial discussion arose on the Twitts yesterday, probably sparked by renewed ranting from dear old @mbeisen.
For the uninitiated, a brief review of the components that go into the pre-publication approval of a scientific manuscript. In general, authors select a journal and submit a manuscript that they feel is ready for publication*. At this point an Editor (usually there is one Editor in Chief and a handful or so of sub-Editors referred to as Associate Editors; AEs) gives it a look and decides whether to 1) accept it immediately, 2) reject it immediately or 3) send it to peer scientists for review. The first option is exceptionally rare. The second option depends on both the obvious "fit" of the manuscript for the journal in terms of topic (e.g., "Mathematical Proof of Newton's Fifth and Sixth Laws" submitted to "Journal of Transgender Sociology") and some more-rarified considerations not immediately obvious to the home viewer.
The third option is the real start of the pre-publication review process as we are discussing it. The AE (or sometimes the EIC) selects potential peer reviewers and requests their services. These days 2-3 reviewers are the most common target. Once the AE receives the reviews back from the initial round of review she or he then makes another set of decisions including 1) send it out to more reviewers, 2) reject it, 3) accept it for publication as-is or 4) send it back to the authors with requests for resubmission of a new version with either a) minor or b) major revisions.
The first option can occur because there is a point of disagreement amongst the initial 2-3 reviewers and additional input or tie-breaking is sought. It can also be because the selected reviewers aren't really addressing a key point in the manuscript. The third option hardly ever happens but it is possible. Rejection is an important outcome for this discussion, of course, because in general the process then repeats at a different journal. For the most part, the results of this initial review are not passed along** so it really is a re-start of the process.
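Purely as a caricature, the decision flow just described can be sketched as a toy state machine. Every probability below is invented for illustration; none reflects data from any real journal.

```python
import random

# Toy model of the AE's post-review decision flow described above.
# The probabilities are illustrative guesses, not measurements.

def editorial_decision(rng):
    """Return the AE's decision for one round of review."""
    r = rng.random()
    if r < 0.05:
        return "accept_as_is"    # option 3: hardly ever happens
    if r < 0.35:
        return "reject"          # option 2: restart at another journal
    if r < 0.45:
        return "more_reviewers"  # option 1: tie-breaking input sought
    return "revise"              # option 4: minor or major revisions

def rounds_until_resolution(rng, max_rounds=10):
    """Count review rounds until the manuscript is accepted or rejected."""
    for n in range(1, max_rounds + 1):
        decision = editorial_decision(rng)
        if decision in ("accept_as_is", "reject"):
            return n, decision
    return max_rounds, "revise"

rng = random.Random(42)
print(rounds_until_resolution(rng))
```

The point of the sketch is only that "revise" and "more reviewers" loop back into the process, while "accept" and "reject" terminate it at this journal.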
The fourth option, requests for revisions, is a key part. Sometimes the requests are for relatively minor re-writing of the manuscript: maybe a different analysis or presentation of the data, but mostly just re-writing the text. Other times, the requests are for additional experiments. This is a sticky one and goes back to my initial observation that the authors felt the study was already sufficiently broad/large/comprehensive to warrant*** publication. Often enough, these calls for more studies constitute a very expensive and time-consuming set of experiments that may even be broader than the originally submitted data!
The AE plays a key role at this point. S/he can pass along these demands with tacit endorsement, add even more requests for data, or intervene explicitly and tell the authors which of the reviewers' comments to ignore.
The authors, for the most part, receive the opportunity to revise with gratitude, react to the reviewers' demands with extreme anger and then get down to the task of revising. Eventually they resubmit the manuscript and the review process is re-engaged. If the original decision by the AE was for "minor" revisions, s/he may not even send it out for re-review but may make an accept/reject decision right away.
Okay, back to the point. There are those who claim that this process is broken, hinders science and needs to be summarily beheaded and replaced. Replaced, they say, by immediate publication of manuscripts the authors deem to be ready. Followed, the theme continues, by post-publication peer review that will somehow**** fulfill the quality control functions that we think pre-publication review (purportedly) serves at present.
I think it does serve this purpose, generally does a good job, and its drawbacks are relatively minor annoyances. I also think that the vast majority of the complaints levied against pre-publication peer review are in fact directed against particular practices and instantiations of peer review. They are not properly directed against peer review itself.
One major category of complaint is that reviewers demand more and more and more experiments. I agree. Entirely. For the most part I feel that authors do submit a credible paper's worth of data, that excessive reviewer demands often amount to one or two Aims that I include in a (funded) grant application, and that it is really easy for a reviewer to hold authors to a standard they do not hit themselves. But here's the thing. I have frequently found that AEs slap down excessive reviewer demands in their decision letter (I just got one in the past few months, as it happens) or accept our rebuttal that there is no way we are going to do all that. From the other side, it is not infrequently the case that when I have been the reviewer suggesting a few other experiments are necessary, I later find the article published without any such additions. So what is the difference?
I submit to you that the real problem here lies with Editors who take a stance of encouraging the demands for additional experiments as a matter of expectation. This is, I assert, more heavily associated with the professional class of Editor: people whose sole job is Editing for a journal. These tend to be the journals of high Impact Factor and reputation, aka, the Glamour Magazines. In contrast, the pedestrian real journals of science are more likely to have AEs and EICs who are primarily active, professorial-rank scientists performing the Editing duty as a service to the field. I tend to interact with these types of Editors more and I find them to be very reasonable when it comes to requests for additional experiments. The problem is not peer review but the way it is put into practice by some Editors. The fact that entire swaths of Editors across multiple journals in my fields of closest interest are entirely reasonable proves the point. It is not the essential nature of pre-pub peer review to demand excessive additional work from authors who have submitted what they think is a credible, publishable manuscript.
The second major objection that burbles to the top of these discussions is the perceived arbitrary nature of the AE's accept/reject decision making. Most frequently this is tied to the authors' expectation that a particular, usually high-ranking, journal MUST accept their paper or the whole system is clearly broken. This is ludicrous. This is the Glamour game of extreme competition for the artificially limited commodity at work. It is far less common down the feeding chain of Journal status and, as we shall see, has a simple solution. Again, this has absolutely nothing to do with the peer review process itself.
The third major objection is time. Obviously, once a set of authors has completed what they think is a publishable manuscript they would like it published as soon as possible, for many reasons including helping to advance science and helping to advance their careers. Perfectly understandable. Thing is, this time factor is tremendously variable across the various instantiations of peer review. Delays exist at 1) initial referral to an AE, 2) the AE's initial decision, 3) securing peer review reports/critiques and 4) the AE's post-review decision. All of these vary and change over time. For the approximately same set of journals that I have reviewed for, submitted to and read articles in over the past few decades, review time has shortened. Time to initial publication (if we count appearance in some format on the pre-publication list on the journal site and in the Medline entry) has shortened. Some journals appear to be competing on speed. I had one in recent months that was rippingly fast from initial submission to online (6 wks) to assignment in a print version; it was maybe 8 weeks all told. Within my sphere of experience, 2-3 months from initial submission to first decision is very common. In the vast majority of cases my expectation is a request for only minor revisions, so any resubmission delay is on us. Second decisions usually come within 2-3 weeks after a revised version has been resubmitted.
Delays are put into the process, assuredly, by the need for extensive revisions, but we addressed this above. It is not an inherent problem of peer review.
Delays are further introduced by rejection and the need to start over at a new journal. This bears some discussion. There are vertical and lateral differences in Journal prestige that most authors have to navigate. Some of the relative rankings are nearly universally agreed upon. Other rankings are hotly contested, depend on subfield considerations or depend on personal biases. Any given paper may only be appropriate for a subset of journals within the greater sphere of, e.g., Biology, or subheadings of Neuroscience or Physiology, which further complicates the matter.
For the most part authors seek to publish their work in a "higher" journal. The choice of where to start the submission process is a tricky one. If authors just start at the top every time, get rejected, go to the next tier, get rejected, etc., then they are themselves wasting time. If authors shoot for a lower-profile journal that is almost guaranteed to take their paper then they are potentially leaving prestige points on the table (which may be irrelevant to them, depending on the situation). Rejection at one journal ranking can result in authors 1) submitting down-market, 2) submitting up-market (suppose they did actually complete extensive new studies in response to review and still ended up rejected; this may motivate them to submit afresh at an even higher journal than originally selected) or 3) submitting to an approximately laterally equivalent journal. There do tend to be fewer options for 3 and more options for 1, I will note. But 3 is why passing along the reviews from an initial rejection is not very popular. Again, the authors tend to believe their paper was good enough for the first journal and that the rejection was inappropriate. So why would they go down in prestige instead of lateral for their next attempt? They wouldn't. They submit to a similarly prestigious journal and hope for a more favorable set of reviewers or AE.
Eventually the authors may decide to try a less-prestigious venue. Often their papers then get accepted. At a delay, yes, but the work is published.
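As a back-of-the-envelope illustration of how the resubmission cascade adds delay without preventing publication, the elapsed time can be tallied mechanically. The per-round durations below are taken from the rough timelines mentioned earlier in this post (2-3 months to first decision, a few weeks for a revision round); the outcome labels are mine.

```python
# Toy accounting of time-to-publication under repeated rejection.
# Durations are rough figures from the post; outcome names are invented.

MONTHS_TO_FIRST_DECISION = 2.5   # typical initial-submission-to-decision lag
MONTHS_FOR_REVISION_ROUND = 1.5  # author revision plus second decision

def months_to_publication(outcomes):
    """Total elapsed months given per-journal outcomes, in order.

    Each element is 'accept', 'accept_after_revision', or 'reject'.
    The first accept ends the process; each 'reject' restarts it
    at the next journal on the list.
    """
    total = 0.0
    for outcome in outcomes:
        total += MONTHS_TO_FIRST_DECISION
        if outcome == "accept":
            return total
        if outcome == "accept_after_revision":
            return total + MONTHS_FOR_REVISION_ROUND
    return total

# Rejected twice up-market, then accepted down-market with minor revisions:
print(months_to_publication(["reject", "reject", "accept_after_revision"]))
```

Even two restarts only roughly triple the delay; the paper still comes out, which is the point of the paragraph above.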
Does any of this mean that pre-publication peer review itself is broken? No. It means that the crediting system of academic science publication throws a kink at the desire to publish a completed manuscript as quickly as possible. This usually does not prevent publication*****, it merely slows it.
I'll end by throwing the critics a bone. Let us assume that there are a set of 2, 5, 10 or 20 journals that are essentially lateral peers at various points in the prestige rankings. If you get your manuscript rejected by one of them and accepted by another one at a given rank, does this not mean that "objectively" it deserved publication at the first place you tried? Well...yee-esss. And therefore does that not mean that the delays in re-submitting and re-reviewing and re-revising are an unjustifiable delay in the system? Yes. And is this not endemic to the greater peer review process you are describing, DM?
But this is really weaksauce as a justification for blowing up pre-publication peer review and removing its benefits. We'll have to leave those benefits for another discussion, even though I thought they were going to be the primary point of today's post.
*I presume. I suppose there are people that actually expect to significantly change the manuscript from its initial submission and would actually be dismayed if it gets in.
**There are attempts to streamline the process by passing along prior reviews, the idea being that there is a hierarchy of Journals. The higher ones might say, in essence, "Great paper but not good enough for us. Would be just fine for a crappier journal down the totem pole." Their perspective would then be, "well of course the Editor of Crappy Journal will just take our leavings and be glad of it!" There are several blog posts in this topic alone.
***There is no objective "end" to a scientific investigation. Period. The notion that any paper is a "complete story" is a lie. At best a convenient fiction that people tell themselves. So this is really just about different people drawing a line of satisfaction in different places.
****Really, don't look behind the curtain. The Great Oz-en will be angry.
*****There IS the whole subset of people who refuse to even attempt to publish "beneath themselves". There are also people who might refuse to attempt to publish a paper if it has been rendered less impressive by another paper being published. This is not the fault of peer review either.