Manuscript acceptance based on perceived capability of the laboratory

Dave asked:

I think about it primarily in the form of career stage representation, as always. I like to get reviewed by people who understand what it means to me to request multiple additional experiments, for example.

and I responded:

Are you implying that differential (perceived/assumed) capability of the laboratory to complete the additional experiments should affect paper review comments and/or acceptance at a particular journal?

I'm elevating this to a post because I think it deserves robust discussion.

I think that the assessment of whether a paper is 1) of good quality and 2) of sufficient impact/importance/pizzazz/interest/etc for the journal at hand should depend on what is in the manuscript. Acceptance should depend on the work presented, for the most part. Obviously this is where things get tricky, because there is a critical difference here: between additional experiments that are necessary to support the conclusions as presented, and additional experiments that a reviewer simply wants to see.

This is Justice Potter Stewart territory, of course. What is necessary to support the conclusions, and where lies the threshold for "I just wanna know this other stuff"? Some people have a hard time disentangling their desire to see a whole 'nother study* from their evaluation of the work at hand. I do recognize there can be legitimate disagreement around the margins but... c'mon. We know it when we see it**.

There is a further, more tactical problem with trying to determine what is or is not possible/easy/quick/cheap/reasonable/etc for one lab versus another lab. In short, your assumptions are inevitably going to be wrong. A lot. How do you know what financial pressures are on a given lab? How do you know, by extension, what career pressures are on various participants on that paper? Why do you, as an external peer reviewer, get to navigate those issues?

Again, what bearing does your assessment of the capability of the laboratory have on the data?

*As it happens, my lab just enjoyed a review of this nature in which the criticism was basically "I am not interested in your [several] assays, I want to see what [primary manipulation] does in my favorite assays" without any clear rationale for why our chosen approaches did not, in fact, support the main goal of the paper which was to assess the primary manipulation.

**One possible framework to consider. There are data on how many publications result from a typical NIH R01 or equivalent. The mean is somewhere around 6 papers. Interquartile range is something like 3-11. If we submit a manuscript and get a request to add an amount of work commensurate with an entire Specific Aim that I have proposed, this would appear to conflict with expectations for overall grant productivity.

26 responses so far

  • chemstructbio says:

    Your ** (grant productivity) is spot on. In my (limited) experience, having 1-2 high impact "complete" (mechanism!!111!) papers does not give an advantage over having several/many smaller papers (each lacking 100% mechanistic detail), in the eyes of the grant reviewers.

  • jmz4 says:

    I think a general standard is if you can apply your criticism ad infinitum, it's not a valid criticism against publication.

    E.g. I show mutant X has a phenotype, phenotype is because of gene Y's deficiency, which results in gene Z being misregulated. Gene Z supplementation rescues the phenotype of mutant X. Reviewer asks, how is Z doing what it is doing? Well the answer is basically to repeat the same series of experiments with a new mutant in gene Z, at which point you will come back to the same point again.

    To me, that makes it clearly a separable unit (a "story"), and I hate when reviewers harp on adding another level of mechanistic data to a story that stands alone just fine, and in reality, will probably just raise more questions.
    It also discourages elucidation of novel mechanism, because the lazy way to reply to these criticisms is just to try to pin your mode of action to an obvious, overstudied target, which, for some reason, seems to satisfy these criticisms adequately.

  • Emaderton3 says:

    I feel like as long as you meet the reviewers halfway, and at least address in the text the concerns you cannot address experimentally, that usually suffices. Although you do need to do some things experimentally if possible. For example, for the paper I published last month (IF > 8), a reviewer suggested it would be interesting to look at protein X in my pathway. While interesting, it was tangential and would not have added to my story. So I politely declined, but added a sentence with a reference saying how protein X may be of interest to what we found. For other comments, I did a bunch of new experiments which really supported the results and conclusions I already presented.

    After resubmission, the review that came back suggested minor revisions and gave me two weeks. However, the reviewer was asking for a set of experiments that required specialized equipment and weeks, if not months, of data analysis, and they would have supported only a minor conclusion that was not the focus of the work. So I talked to the editor, who agreed with me and said I should just address it in the rebuttal. I did have some data, not included in the paper, that spoke to the reviewer's point, so I added it.

    At the end of the day, I would hope that a reviewer would suggest things to make a person's story more credible, but I would also hope that a reviewer and editor would be realistic in their expectations. I imagine this varies across journals, as expectations can be lower or higher.

  • drugmonkey says:

    Saying we should meet them halfway implies what? That you send up a story in the first place that is woefully deficient?

    ...circling back to another discussion, does this mean you are vehemently against pre-prints, because everyone sends up an initial offering that is clearly only half-baked?

  • Imager says:

    What about:

    additional = you want to publish this in f-ing CNS, so you better suffer and get working on it, mister.

    additional = I didn't get a CNS paper and I'll make sure you don't get one either - do these 75 experiments and create 3 transgenic mice to prove your point.

    additional = as an expert reviewer I work on the same stuff and hate to be scooped, so let's see how I can delay you with maximum impact... here's what I need to see (another 3 years of work).

    Seems to me that these three are more common than additional = reasonable request to iron out a weak point we hadn't noticed before.

  • aspiring riffraff says:

    Am I the only one who has had positive experiences with peer review? My most recent paper was greatly improved by the comments and suggestions of the reviewers, and everything they suggested was reasonable and doable in the time frame given. I appreciated their efforts. Maybe it has to do with selecting the right venue for the type of story I have?

    And I do always suggest male and female reviewers for my papers. There are plenty out there to choose from. And since I'm familiar with my field, I usually know if the Asian names correspond to men or women, or can google to check. And please people, keep recommending women for things. Turning down requests isn't all that hard. At the very least, you point out to editors repeatedly that there's some lady out there that knows something about the topic at hand.

  • Odyssey says:

    No, you're not the only one. Peer review has improved many of my papers.

  • Draino says:

    We published two good papers last year. The first two for my lab. Both had long, difficult reviews at mid-tier journals where the reviewers asked for much work and rewriting. We worked hard. We went way beyond meeting them halfway. Yet both papers got rejected because the reviewers could not be satisfied. I felt like we had done so much work, with the "help" of difficult reviewers, that we submitted to better journals where the work was ultimately accepted and even featured on the cover and written up in news-and-views type headlines. So, like a raunchy drill sergeant, the mid-tier reviewers made us work through boot camp to ultimately achieve publication in higher profile journals. It's a twisted system but it helped us overall.

  • Grumpy says:

    Physicist here: I have never been asked to do more experiments to prove a point. And I've never requested it either.

    Occasionally a critical point will be unconvincing, and I do state that and recommend rejection in its current form. But asking for specific experiments seems like Reviewer Activism to me; I reserve those requests for my trainees and collaborators.

    So, question: is this a symptom of a culture of condescension and paternalism in journal review in your field? Or are people just apathetic in mine?

    Or is it that in more complex systems such as yours there are just many ways to fuck up a critical point?

  • Dr Becca says:

    No, capability of the lab should have nothing to do with it. If the paper needs more experiments in order to justify the conclusions, then the authors either need to do the experiments or change their conclusions. If the paper needs more experiments in order to be flashier or more comprehensive because those are the standards of the journal, then the authors need to do the experiments or submit to a different journal.

  • drugmonkey says:

    And should a high flying lab get higher demands because it is perceived to be easy for them?

  • Dr Becca says:

    "And should a high flying lab get higher demands because it is perceived to be easy for them?"

    What? No.

  • drugmonkey says:

    But c'mon, they could knock it out in a month. Easy peasy.

  • MoBio says:

    @Dr. Becca:

    Seems like the topic fits under one of these.

  • Dave says:

    In my defense, I did admit that I hadn't really thought this through. On reflection, I was referring to experimental requests that I consider ridiculous, not essential experiments, but I understand that's also problematic as opinions on what is and isn't essential for a paper vary widely.

  • Dr Becca says:

    In my experience, any requests for extra experiments that are not justifiable (i.e. "it would be nice to see X", "but what about Y", etc.) can be successfully argued away with a thoughtful and thoroughly cited comment in your response letter.

  • Emaderton3 says:

    I would imagine that not every story is "perfect," particularly for young PIs like me who need to balance getting publications with the quality of the work and the "esteem" of the journal it gets published in. What I meant, simply, was that I try to do my best by doing some experiments and then justifying not doing others with text and citations, just as Dr Becca has mentioned, in order to appease the reviewers. I know that doesn't necessarily answer your original question of whether they should expect more from a more seasoned lab, but I was just trying to provide another example to complement the one you gave.

  • JustAGrad says:

    I'm curious if others here see papers coming out of BSD labs that are unconvincing, where similar work would never be accepted from less important (SSD?) labs.

  • Newbie PI says:

    I submit a paper when it's about 85% complete and the remaining experiments are easy and likely to work. This gives the reviewers some very obvious things to ask for - things that we will usually have completed by the time the reviews come back. Using this strategy on the papers my lab has published so far has apparently distracted reviewers enough that they haven't asked for anything crazy. On the other hand, I recently saw reviews for a collaborative paper that I'm on with a BSD lab, and I was shocked at how harsh the reviews were. I really don't think they would have asked for so much if it was just me and my student alone on the paper.

  • Michael says:

    Tangentially related to this discussion: when reviewing a paper, I look at the "present addresses" of the authors who did the actual work. Oftentimes manuscripts bounce from one glam journal to the next, so by the time they get submitted to a solid journal, the first authors have already moved elsewhere, meaning that asking for additional experiments (particularly if they involve sophisticated techniques) might put the lab in a tough spot. To be sure, if I believe critical experiments are missing, I will raise the point regardless. But if there are experiments I would like to see (but feel are not absolutely essential), and I know with some certainty that the lab can't pull them off in a reasonable amount of time, I might be inclined not to bring them up. In a nutshell, I admit that it's hard for me to review papers entirely on their own merits, without taking some context into account.

  • L Kiswa says:

    When reviewing manuscripts, I try my best to ignore any knowledge I may have of the authors' capabilities. Agree with dr becca -- if the conclusions demand further data, I ask for more data, or the conclusions to be tempered, as appropriate.

  • Alfred Wallace says:

    "I'm curious if others here see papers coming out of BSD labs that are unconvincing and that similar work would never be accepted from less important (SSD?) labs."

    Absolutely. Even a lot of the more convincing BSD work would not go into glam if it came from a less important lab IMO.
    That's why it would be very entertaining to have a system in place where the editors are oblivious to the names of the authors (of course the implementation of such a system would not work).

  • jmz4 says:

    ^No, but it would be interesting to deformat a bunch of PDFs and see if people were any good at assigning the papers to their impact score tier.

  • Dave says:

    EMBO has a double blind option now, but makes it clear that the authors are responsible for making their manuscript anonymous. Much, much easier said than done.

  • Drugmonkey says:

    jmz4 - the 60 pages of Supplemental Material wouldn't be a giveaway?

  • jmz4 says:

    @DM, no one reads that stuff anyway, right?

    But I do feel like the impact factor range for bloated supplementary sections is relatively broad (maybe IF 5-20 are indistinguishable) and getting worse.
