Once again, it is not pre-publication peer review that is your problem

The perennial discussion arose on the Twitts yesterday, probably sparked by renewed ranting from dear old @mbeisen.

For the uninitiated, a brief review of the components that go into the pre-publication approval of a scientific manuscript. In general, authors select a journal and submit a manuscript that they feel is ready for publication*. At this point an Editor (usually there is one Editor in Chief and a handful or so of sub-Editors usually referred to as Associate Editors; AEs) gives it a look and decides whether to 1) accept it immediately, 2) reject it immediately or 3) send it to peer scientists for review. The first option is exceptionally rare. The second option depends on both the obvious "fit" of the manuscript for the journal in terms of topic (e.g., "Mathematical Proof of Newton's Fifth and Sixth Laws" submitted to "Journal of Transgender Sociology") and some more rarefied considerations not immediately obvious to the home viewer.

The third option is the real start of the pre-publication review process as we are discussing it. The AE (or sometimes the EIC) selects potential peer reviewers and requests their services. These days 2-3 reviewers are the most common target. Once the AE receives the reviews back from the initial round of review she or he then makes another set of decisions including 1) send it out to more reviewers, 2) reject it, 3) accept it for publication as-is or 4) send it back to the authors with requests for resubmission of a new version with either a) minor or b) major revisions.

The first option can occur because there is a point of disagreement amongst the initial 2-3 reviewers and additional input or tie-breaking is sought. It can also be because the selected reviewers aren't really addressing a key point in the manuscript. The third option hardly ever happens but it is possible. Rejection is an important outcome for this discussion, of course, because in general the process then repeats at a different journal. For the most part, the results of this initial review are not passed along** so it really is a re-start of the process.

The fourth option, requests for revisions, is a key part. Sometimes the requests are for relatively minor re-writing of the manuscript. Maybe a different analysis or presentation of the data but mostly just re-writing the text. Other times, the requests are for additional experiments. This is a sticky one and goes back up to my initial observation that the authors felt the study was already sufficiently broad/large/comprehensive to warrant*** publication. Often enough, these calls for more studies constitute a very expensive and time-consuming set of experiments that may even be broader than the originally submitted data!

The AE plays a key role at this point. S/he can pass along these demands with tacit endorsement, add even more requests for data, or intervene explicitly and tell the authors which of the reviewers' comments to ignore.

The authors, for the most part, receive the opportunity to revise with gratitude, react to the reviewers' demands with extreme anger and then get down to the task of revising. Eventually they resubmit the manuscript and the review process is re-engaged. If the original decision by the AE was for "minor" revisions, s/he may not even send it out for re-review but may make an accept/reject decision right away.

Okay, back to the point. There are those who claim that this process is broken, hinders science and needs to be summarily beheaded and replaced. Replaced, they say, by immediate publication of manuscripts the authors deem to be ready. Followed, the theme continues, by post-publication peer review that will somehow**** fulfill the quality control functions that we think pre-publication review (purportedly) serves at present.

I think it does serve this purpose, generally does a good job and any of the drawbacks are relatively minor annoyances. I also think that the vast majority of the complaints that are leveled against pre-publication peer review are directed, in fact, against particular practices and instantiations of peer review. They are not properly directed against peer review itself.

One major category of complaint is that reviewers' demands for more and more and more experiments are a problem. I agree. Entirely. For the most part I feel that authors do submit a credible paper's worth of data, excessive reviewer demands often amount to one or two Aims that I would include in a (funded) grant application, and it is really easy for a reviewer to hold authors to a standard they do not hit themselves. But here's the thing. I have frequently found it to be the case that AEs slap down excessive reviewer demands in their decision letter (I just got one in the past few months, as it happens) or accept our rebuttal that there is no way we are going to do all that. From the other side, it is not infrequently the case that when I have been the reviewer suggesting that a few other experiments are necessary, I later find the article published without any such additions. So what is the difference?

I submit to you that the real problem here lies with Editors who take a stance of encouraging the demands for additional experiments as a matter of expectation. This is, I assert, more heavily associated with the professional class of Editor. These are people whose sole job is Editing for a journal. These tend to be the journals of high Impact Factor and reputation, aka, the Glamour Magazines. In contrast, the pedestrian real journals of science are more likely to have AEs and EICs who are primarily actively working, professorial-rank scientists performing the Editing duty as a service to the field. I tend to interact with these types of Editors more and I find them to be very reasonable when it comes to requests for additional experiments. The problem is not peer review but the way it is put into practice by some Editors. The fact that entire swaths of Editors across multiple journals in my fields of closest interest are entirely reasonable proves the point. It is not the essential nature of pre-pub peer review to demand excessive additional work from authors who have submitted what they think is a credible, publishable manuscript.

The second major objection that burbles to the top of these discussions is the perceived arbitrary nature of the AE's accept/reject decision making. Most frequently this is tied to the authors' expectation that a particular, usually high-ranking, journal MUST accept their paper or the whole system is clearly broken. This is ludicrous. This is the Glamour game of extreme competition for an artificially limited commodity at work. It is far less common down the feeding chain of Journal status and, as we shall see, has a simple solution. Again, this has absolutely nothing to do with the peer review process itself.

The third major objection is time. Obviously once a set of authors has completed what they think is a publishable manuscript, they would like it published as soon as possible, for many reasons including helping to advance science and helping to advance their careers. Perfectly understandable. Thing is, this time factor is tremendously variable across the various instantiations of peer review. Delays exist at the 1) initial referral to an AE, 2) AE initial decision, 3) securing of peer review reports/critiques and 4) AE post-review decision stages. All of these vary and change over time. For the approximately same set of journals that I have reviewed for, submitted to and read articles in over the past few decades, review time has shortened. Time to initial publication (if we count appearance in some format on the pre-publication list on the journal site and in the Medline entry) has shortened. Some journals appear to be competing on speed. I had one in recent months that was rippingly fast from initial submission to online publication (6 wks) to assignment to a print version. It was maybe 8 weeks all told. Within my sphere of experience 2-3 months from initial submission to first decision is very common. Making only minor revisions is probably my expected value- vast majority of cases. So resubmission delay is on us. Second decisions usually come within 2-3 weeks after a revised version has been resubmitted.

Delays are put into the process, assuredly, by the need for extensive revisions, but we addressed this above. It is not an inherent problem of peer review.

Delays are further introduced by rejection and the need to start over at a new journal. This bears some discussion. There are vertical and lateral differences in Journal prestige that most authors have to navigate. Some of the relative rankings are nearly universally agreed upon. Other rankings are hotly contested, or depend on subfield considerations or personal biases. Any given paper may only be appropriate for a subset of journals within the greater sphere of, e.g., Biology or subheadings of Neuroscience or Physiology, which further complicates the matter.

For the most part authors seek to publish their work in a "higher" journal. The choice of where to start the submission process is a tricky one. If authors just start at the top every time, get rejected, go to the next tier, get rejected...etc., then they are themselves wasting time. If authors shoot for a lower-profile journal that is almost guaranteed to take their paper, then they are potentially leaving prestige points on the table (which may be irrelevant to them, depending on situation). Rejection at one journal ranking can result in authors 1) submitting down-market, 2) submitting up-market (suppose they did actually complete extensive new studies in response to review and still ended up rejected; this may motivate them to submit it afresh at an even higher journal than originally selected) or 3) submitting to an approximately laterally equivalent journal. There do tend to be fewer options for 3 and more options for 1, I will note. But 3 is why passing along the reviews from an initial rejection experience is not very popular. Again, the authors tend to believe their paper was good enough for the first journal and that rejection was inappropriate. So why would they go down in prestige instead of lateral for their next attempt? They wouldn't. They submit to a similarly prestigious journal and hope for a more favorable set of reviewers or AE.

Eventually the authors may decide to try a less-prestigious venue. Often their papers then get accepted. At a delay, yes, but the work is published.

Does any of this mean that pre-publication peer review itself is broken? No. It means that the crediting system of academic science publication throws a kink at the desire to publish a completed manuscript as quickly as possible. This usually does not prevent publication*****, it merely slows it.

I'll end by throwing the critics a bone. Let us assume that there is a set of 2, 5, 10 or 20 journals that are essentially lateral peers at various points in the prestige rankings. If you get your manuscript rejected by one of them and accepted by another one at a given rank, does this not mean that "objectively" it deserved publication at the first place you tried? Well...yee-esss. And therefore does that not mean that the delays in re-submitting and re-reviewing and re-revising are an unjustifiable delay in the system? Yes. And is this not endemic to the greater peer review process you are describing, DM?

Perhaps.

But this is really weaksauce as the justification for blowing up pre-publication peer review and removing its benefits. We'll have to leave those benefits for another discussion, even though I thought it was going to be the primary point of today's post.

__
*I presume. I suppose there are people who actually expect to significantly change the manuscript from its initial submission and would actually be dismayed if it got in as submitted.

**There are attempts to streamline the process by passing along prior reviews, the idea being that there is a hierarchy of Journals. The higher ones might say, in essence, "Great paper but not good enough for us. Would be just fine for a crappier journal down the totem pole." Their perspective would then be, "Well, of course the Editor of Crappy Journal will just take our leavings and be glad of it!" There are several blog posts in this topic alone.

***There is no objective "end" to a scientific investigation. Period. The notion that any paper is a "complete story" is a lie. At best a convenient fiction that people tell themselves. So this is really just about different people drawing a line of satisfaction in different places.

****Really, don't look behind the curtain. The Great Oz-en will be angry.

*****There IS the whole subset of people who refuse to even attempt to publish "beneath themselves". There are also people who might refuse to attempt to publish a paper if it has been rendered less impressive by another paper being published. This is not the fault of peer review either.


  • One of the things I've learned being an AE is that the review process isn't just about accepting or rejecting a manuscript but actually improving it from submission to publication. It's really satisfying to see a mediocre manuscript turn into a good one as the authors include additional needed data analyses and provide better introduction and methods sections.

    I'm not so worried that a world without pre-publication review would turn into a cesspool of manuscripts that are actually giving wrong information as much as I fear a world of poorly written papers with insufficient information as to how to replicate the results.

  • drugmonkey says:

    Yeah, like I said, the *merits* of pre-pub review were supposed to be the point of today and yet I got sidetracked...

  • Comradde PhysioProffe says:

    "Making only minor revisions is probably my expected value- vast majority of cases."

    That's because you always aim low at your favorite sub-dump journals. The most reasonable editors with regard to navigating reviewer requests for additional experiments have been at what you refer to as "glamour magazines", and the most rigid and unreasonable have been at a very well-known flagship society journal with only active academic scientist editors.

  • Science Grunt says:

    Why aren't the pro-post-publication-peer-review people submitting their stuff to biorxiv (http://biorxiv.org/about-biorxiv)? It's fast, there isn't peer review (I think it's just a credential check, if it's like arxiv), and a good number of GlamMags accept manuscripts that have already been posted as pre-prints.

    The setup for experimenting with post-publication peer review is already operational: isn't biorxiv + pubpeer all you need? If it's because of GlamMag humping behavior, you can just deposit your paper at biorxiv and get in the review pipeline. That way, there isn't any delay in getting the data to the world at all, and all the time spent/wasted going down the journal hierarchy is only a voluntary cost for the researcher.

    Also, GlamMag humping doesn't take that long. When they reject immediately, you know within a week, and when they send it out for review, you find out how much extra work is needed within a month, based on my experience.

    I dislike the current system because it gives power to professional editors over active researchers in shaping the field, and it's why I like society journals waaaay more than C/N/S-type publications. And I would love to see a bit of that rearranged. But the lead times in publication are very low in my ranking of grievances with publication systems.

  • Chemstructbio says:

    The relationship between the AE and the reviewers is fascinating to me (as a young gun). I recently had a review where the reviewers asked for X, Y and Z experiments but the AE indicated I only had to discuss them. I did what the AE asked for, which resulted in a soliloquy from the reviewer about why they were right ... and on ... and on ... BUT (insert some language that the AE likely contributed, because this was a publishable unit) and BAM ... manuscript accepted.

  • Bio Data Sci says:

    I agree that the preprint provides a happy medium. It gets your work out there for others to see sooner, but you're still forced to go through a review process before it's officially accepted by the community.

    I also think there would be value in having a 3rd party coordinate reviews so that you are not starting from scratch when you have to submit to a different journal. It would also prevent gaming of the system by authors.

  • Josh VW says:

    "If you get your manuscript rejected by one of them and accepted by another one at a given rank, does this not mean that "objectively" it deserved publication at the first place you tried. Well...yee-esss." - Isn't this confirmation bias? The second journal/reviews are as likely to have made a mistake in accepting it as the first journal was in rejecting it. (forgive me if this was implied by the "Yee-esss"...)

  • mytchondria says:

    I'm an Ass'c Editor for a BIG FUCKKEN DEAL journal you should be bedazzled by (in spite of our plummeting numbers). I can't remember the last time I asked for more experiments. I will ask for better pix often (because take better fuckken pix, people), for folks to do the right fukken statistics or whatnot, but if I don't like what you've submitted, I'm going to straight up reject your arse. Done.

  • drugmonkey says:

    "Better pix"? Ayfk? The notion that data needs to be "prettier" for publication is the root of much fakery. I hope I misunderstand you.

  • Busy says:

    " The second journal/reviews are as likely to have made a mistake in accepting it as the first journal was in rejecting it."

    You are stating that the error is symmetric on what basis?

    As we all know the rates of false positives and false negatives can be arbitrarily different, depending on the specific test being used.

  • Idiot postdoc says:

    In my own experience (likely limited relative to the others commenting here), the AEs at glamour mags expected, in the words of the last editor assigned to my paper, that "we not only do everything the reviewers ask for, but we also go beyond their expectations and provide even more compelling data."

    In the instances where I've argued against doing the experiments the reviewers requested, I mean, sorry, DEMANDED, it always ended in rejection by the AE, and a phone conversation to get them to reconsider the paper if we do perform those experiments that we just tried to argue against. It could be said though that our arguments for not doing the additional experiments were not strong enough, and I'd actually agree with this, but shit, when the experiments add another 6 months to the timeline, that hurts.

    The AEs, again only in my limited experience, literally acted as a vessel of communication between the reviewers and us authors, and did not provide any specific discretion or decisions about whether or not to do the additional work required, so perhaps they need to have a little more skin in the game, but given that on any particular day they may have 20-30 papers on their desk that they need to deal with (I don't know, probably more?), I just felt happy that we made it past their initial decision. They're smart people, so if they decided to send it out for review, then I'm happy that others found the paper at least as similarly interesting/exciting for their glamour mag as I did when I wrote it.

    I think the only thing I'd like to see to improve the whole process might be double-blind peer review (it's already been instituted at some journals, no?). I think this could really level the field, because let's be honest, if the paper is coming from a Nobel lab, the reviewers will be more than likely to gush all over its genius, when the same paper coming from a junior lab would be more likely to get shitted on.

  • Dave says:

    I can't remember the last time I saw a basic paper come back without a request for additional experiments, honestly. I have also seen enough rejections lately after authors have tried to argue against doing additional work, that I don't take any chances and just do the experiments if the journal is worth it. I have never had an editor explicitly say don't do this, or don't do that. Their comments are either very vague and open to interpretation, or they just request that you address everything. I wish editors would be more opaque in their communications sometimes.......

  • Dave says:

    ^meant less opaque, obviously.

  • My experience with Cell and the second tier down of Cell Press journals is that editors are very willing to have explicit discussions about what reviewer-suggested experiments will and won't be required, and to come to an explicit agreement before embarking on further experiments. This has included occasions when the editor has gone back to the reviewers to take their temperature, again before embarking on further experiments.

  • drugmonkey says:

    As an addendum, of *course* the way a PI places the demands for additional experiments on the "reasonable" dimension will vary. When you run a postdoc factory and are larded up with grants, another 12 mo of person hours is no big deal.

  • Kevin. says:

    There are some kinds of scientists who, having been frustrated by the demands of reviewers for more experiments, now submit a semi-complete story and let the reviewers finish it. They see what the reviewer wants, do that, and done, everyone's satisfied. Unfortunately, these are the types of papers (and scientists) with incomplete, shallow stories--the cream off the top types, if you get my meaning. Not the riffraff.

    One approach I have been pleased by is doing experiments you think the reviewer could ask for while the paper is being reviewed. Then, when the review comes back asking for something else, you just argue that away and give them what you did. Then it's clear some work was done, even if it's not precisely what they wanted.

    But my experience is definitely that new experiments do not always need to result in new Figures, particularly if they were negative/uninformative and can be addressed as data not shown. Supplemental data seems like it's become a wasteland of reviewer-demanded bullshit.

  • Idiot postdoc says:

    @Kevin

    Planning ahead for what I will provide reviewers has worked very well for me as well; even if it's not what they ask for, throwing a carrot in there really makes the heads nod. I think I've now come to conclude that reviewers also evaluate the quantity of work, in addition to their obvious judgement of a paper's quality, and expect that a paper is not ready for publication until a minimum of 5-6 supplemental figures of BS have been filled. Wasteland indeed.

  • thorazine says:

    My experience at Cell and Cell Press has been the same as CPP's above - extremely reasonable editors, willing to tell me what they think is important among large lists of reviewer-demanded experiments. I've got less experience with the weeklies or their spawn so I have no idea what happens there.

  • Grumble says:

    Extremely reasonable editors at Cell Press journals? You have GOT to be kidding. I've had papers rejected by the editors without review for the most asinine of reasons. Admittedly, once I get past that and the paper actually goes out for review, the process hasn't been so bad. But being "reasonable" also means doing the difficult job of initial screening well, and IMHO none of the glams do it anything close to well.

  • Comradde PhysioProffe says:

    "I've had papers rejected by the editors without review for the most asinine of reasons."

    Try to pay attention. That's not what we're talking about.

  • Grumble says:

    Changing the subject does not imply an absence of attention.

    Glammy editors can be very good at filtering reviewers' demands for additional experiments (although most are not). That requires a different form of reasonableness than filtering for the best papers to begin with, and I'd argue that none of them are any good at the latter. So the assertion that these editors are reasonable demands modification, even if it isn't exactly what Your Royal Highness has on your narrow agenda for today.

  • rxnm says:

    My experience with glam/professional editor journals is that they are much more willing to mediate between reviewers and authors. At society/AE journals I always get vague "make the reviewers happy" interactions with eds. Mind, the professional editors don't *offer* to do this, but it always seems to be on the table if you push. I've never had an editor who was a working scientist willing to ignore a noisy negative review, but I've had that twice with professional eds.

    Part of this is that glam editors are picking the papers they want in the journal from day 1 for whatever aesthetic / glam biz consideration they have in mind...if they send it for review they are on some level advocating for you. I don't think AEs really care one way or another. I know I don't.

    In my role as an AE or a reviewer, I just think everything competently done should be published, I don't care where. I think I have a Plos One attitude on this--publish anything that isn't obviously garbage, let people sort out what's good later. However, having seen some of the serious half-witted piles of shit submitted to Plos One, I think there needs to be a pre-pub screening level.
