It is time to eliminate "major revisions" as a manuscript decision

Jun 22 2016 · Peer Review, Science Publication

There should be only three categories of review outcome.

Accept, Reject and Minor Revisions.

Part of the Editorial decision making will have to be judging whether the experiments demanded by the reviewers are reasonable as "minor" or not. I suggest leaning toward accepting only the most minimal demands for additional experimentation as "minor revisions", and otherwise choosing to reject.

And no more of this back and forth with Editors about what additional work might make it acceptable for the journal as a new submission either.

We are handing over too much power to direct and control the science to other people. It rightfully belongs within your lab and within your circle of key peers.

If J Neuro could take a stand against Supplemental Materials, they and other journals can take a stand on this.

I estimate that the greatest advantage will be the sharp decline in reviewers demanding extra work just because they can.

The second advantage will be that Editors themselves will have to select from what is submitted to them, instead of trying to create new papers by holding acceptances at bay until the authors throw down another year of person-work.

55 responses so far

  • odyssey says:

    Endorse.

  • Hal says:

    Reject - there is a fatal flaw in the connection between the hypothesis and the experimental design that cannot be patched with one or ten more experiments.
    Major Revision - more experiments / simulations are needed to make a sufficient connection - up to the lab whether they want to do them or not.
    Minor Revision - changes in wording, citations, or details of analysis and presentation.
    Accept - very rare perfect paper. Only happens when you are writing an invited article for your buddy's special issue.

  • David says:

    I agree with Hal. Re: major revision, you can like where a paper is going, but sometimes the paper doesn't get there. The authors can complete it and publish here or they can move on. As a reviewer, I don't care which option is chosen, but I also don't think it's my call to make.

    Two recent ones for me (as an AE):
    Reject - did you read the journal scope, because your paper is out of it
    Accept - love the paper, please fix your three typos. [non-invited, non-special issue, super lucky for the journal]

  • SM says:

    There are a couple of journals in my field that have moved to the Reject/Minor/Accept options. It has changed nothing for authors and only helps the journals. Now almost all papers are "Rejected with opportunity for resubmission", which really means Major Revision. When you resubmit, the manuscript becomes a "New" submission, but still runs through the system as a Major Revision. What this does is allow the journal to say 1) it is more selective because they "reject" more papers, 2) that they receive more new submissions than they really do and 3) that their time from submission to acceptance is shorter. Nothing will change until reviewers change their expectations. #Sameoldcrap

  • Dave says:

    ^Oui

    I just got a 'reject with resubmission as new paper' decision from a solid biochem journal. There were requests for more experiments that were not unreasonable, but expensive (very) and time consuming (obviously). The editorial decision email was a bit weird as they strongly encouraged us to pursue the resubmission approach. This leads me to believe that this is just a language thing and it essentially is an old fashioned 'major revision'.

    We're doing the fucking experiments (we always do!) and we'll see what happens. As usual, the experiments, if they work well, will substantially improve the paper such that another journal might be realistic. But then you have to start over and run the risk of more delays. Can't afford that with grants going in.

    I've seen that this new approach is a way for journals to game the system in terms of time from 'initial' submission to acceptance metrics. That's bollox.

  • qaz says:

    Have you never had a reviewer help you make a better paper? Either you are an amazing, brilliant scientist or the reviewers in your field suck.

    If you don't want to do the major revision, send the paper elsewhere. You don't have to do the revision if you don't want to. Why would you remove the chance to revise the paper for those of us who still believe in the process of peer review? Go ahead and take all Major revisions as Rejects if you want. But I will continue to be grateful for reviews that improve my work.

  • Selerax says:

    Unless you mean "Reject with prejudice" (as in, "don't bother resubmitting"), then your "Reject" just means "Revise and Resubmit" - or in other words, "Major Revision".

    If you do mean "Reject with prejudice", that might be a bit extreme.

  • Michael says:

    what qaz said. Since you mention J Neuro, I'll add that most of my submissions there required major revisions (or were soft rejections, with the possibility to resubmit with more data). In all cases, the end result was a better paper and the cost/time getting there was probably less than resubmitting it elsewhere and starting from square one.

  • qaz says:

    @selerax - It is also important to have a "real reject" to communicate to authors that there is a fundamental flaw in the paper which you (as a reviewer) do not think is fixable.

    I think that the four levels work very well (Accept = typos only; minor revision = stuff to fix, but the editor is capable of judging whether the authors have responded well enough; major revision = stuff to fix and I (as a reviewer) cannot judge the correctness of the claims without seeing their responses; reject = should not be published (*)). * There is the issue of whether a reject from one jnl implies it should not be published at all vs. just not good enough for this jnl, but that's a different issue from the one DM seems to be poking us with.

  • Ass(ociate) Prof says:

    Indeed. This is one way to go about it and would probably make a difference in the review process.

    My personal approach as a reviewer is to try very, very hard not to ask for additional experiments. If there is something straightforward that will really nail the point down, I might suggest it, but really the approach is to review the manuscript in hand and report to the editor about what I read. I recently received feedback at a meeting from one editor thanking me for my reviews and commenting on their quality.

    I agree that sometimes additional experiments can improve a paper. Absolutely. My thought though is that doubling the work for a 5% gain in quality might not be the best use of funds. It's gotten to be a bit dispiriting as of late to read articles in my field where the data is solid, sufficient and novel in the first four figures, then an additional 15 experiments are done for figures 5-7 and S1-S62, and no further clarification or biological insight is gained.

  • Jonathan Badger says:

    @qaz.
    Exactly -- just like there ought to be a formal distinction between "retraction, because fraud" and "retraction, because we don't believe in our old results and don't want people to keep citing it as support for similar conclusions"

  • SidVic says:

    I agree!

    By the time I submit a paper I have thought about the work extensively. The reviewer might have spent 2 hrs with it tops, maybe with drink in hand. Honestly I have never been caught out on a mistake in experimental design (although some reviewers have caught weaknesses of which I was already aware). I have never, not once, performed an additional experiment at the behest of a reviewer. Of course this has bitten me a time or two, but as the author I reserve that judgement for when the work is finished.

    Some reviewers seem to be under the impression that they haven't done a good job unless they suggest some more experiments.

  • dsks says:

    Hmmm... I think I'd rather keep the Major Revision option over the Reject and Resubmit thang. With the former you have a pretty high likelihood of publication if you do the requested legwork, because it's generally rare for the same reviewers to nail you after you've given them everything they asked for. With the latter, you are guaranteed further in depth review with a fair chance of having a new set of reviewers to boot. And you can bet your ar$e they'll ask for more experiments. It's practically a psychological certainty.

  • Dave says:

    That's true, but it depends on if they actually treat it as a new submission or just send it back to one or two of the original reviewers. At some point, it makes no sense for a journal to start from zero all over again.

  • Luminiferous Aether says:

    ^^ If the system runs a "new" submission as a major revision, they should at the very least send it out to one previous reviewer who asked for more experiments. This is also where the AE should step in and use their discretion to say: these authors addressed most major concerns from the prior review, so the manuscript can be accepted for publication now.

  • jmz4 says:

    One problem is that "revision" implies an error or omission, when, in reality, most of the frustrating requests from reviewers for further experiments are simply about broadening the scope of the work, or extending the findings, or duplicating findings with a different technique. These may make the work more convincing or interesting, but they are usually not vital to the central message of the work.

  • drugmonkey says:

    I mean Reject as a real rejection and not a "major revisions but we want to reset the submission clock". Absolutely not the same reviewers.

    The big point is to break reviewers and AEs of the habit. If it comes back majorly revised any new reviewers should not be told it is a revision. And re-tread reviewers are to be avoided if at all possible. Once reviewers and AEs understand that demands for endless amounts of additional work is in fact a rejection, things will change. From all sides, I bet.

  • drugmonkey says:

    Have you never had a reviewer help you make a better paper?

    Yes, but never once because of the types of demands for major new experimentation of the type under discussion today. We have occasionally added a bit more of our ongoing story, especially if it is under way by the time the reviews come back.

    Can you explain what major stuff you so habitually leave out that makes a better paper and is not better suited for publishing another paper in the developing story instead?

  • Draino says:

    What about journals that allow major revisions but put a strict time limit on it? They say you have 90 days or it's a new submission. I think that's reasonable.

  • qaz says:

    DM - You are throwing the baby out with the bathwater. Yes, fine, the editor should exercise judgment about whether a reviewer is asking for too much. But to say that we shouldn't have "major revisions" really hurts the process of peer review. I would argue, for example, it should be a very valid criticism to say "you say X is true, but your data leave open that X is not true and rather (much more boring statement) Y could be true instead. Either remove the claim that X is true (*) or do the following additional work (**) to prove Y is not true." This is the most important part of peer review. To get rid of it is to remove the whole purpose of peer review.

    * people may not want to remove the statement that X is true because then they aren't cool enough for the GlamJournal they want to be in. That's their problem. If they can't defend X being true, then they shouldn't be allowed to say it.

    ** the additional work should be NECESSARY to defend X being true against alternative Y. But that's the editor's call. And if the author doesn't want to do it, they can always take their manuscript and go home (or to another journal).

    I have to say that my experience with peer review (from the author side) has been generally very positive - when they ask for more work, they are usually right, the work is usually worth doing, and when the work is too hard to do, it is often true that removing the extraneous claims makes for a better paper.

    The claim that reviewers shouldn't ask for extra work that is unnecessary is very valid (and that's the editor's job to determine it) but that doesn't invalidate the "major revision" decision. Major revision means "I want to see it again before I can judge whether you answered my concerns correctly." That is a perfectly valid statement from a peer reviewer.

  • jmz4 says:

    "and that's the editor's job to determine it"
    A responsibility, judging by the comments, that they have abdicated.
    I think requiring more structure to a review would be helpful. Remember, no one is really getting trained to write cogent, helpful reviews, so some more guidance would probably be useful. A not insignificant portion of my colleagues (postdocs) see reviewing as a bloodsport and an opportunity to impress their boss. It encourages relentless attacking and nitpicking, in ways that are often poorly thought out.

    I think the elife model helps assuage some of these concerns. One, the reviews are not blind to the other reviewers, which encourages people to leave their nastier impulses at the door and put some thought into their reviews. Two, the study section style discussion of the reviews probably filters out the more inane requests.

  • qaz says:

    "and that's the editors job to determine it" - I don't expect an editor to write an intro to the reviews that says "you need to do points 1, 2, and 4, but point 3 is too much work for you". (In large part because the editor doesn't know how much work point 3 is for you.) I do expect that an editor will be open to a phone call from the authors asking whether they need to do the experiment suggested in point 3 or just an answer of why the authors don't think point 3 is necessary would be enough. (Some journals and some editors will even allow an author to write a question that gets sent to the reviewer to ask "if we could show Z, would that accommodate your concern?". This would, of course, have to go through the editor to prevent reviewer de-blinding.) I also expect that reviewers will respond positively when the authors come back saying "major experiment suggested in point 3 is not necessary to preclude alternate possibility Y, because this minor additional analysis shows that our experiment already controlled for possibility Y", assuming, of course, that the minor additional analysis does correctly control for possibility Y. In the end, it is the editors job to adjudicate the authors' response. (They may call in an additional reviewer, but in the end, the editor is making the publish/not decision.)

    PS. If postdocs are not getting trained in reviewing and thus doing reviewing wrong, that's a problem with the training that those postdocs are getting, not with the process of peer-review!

  • odyssey says:

    I don't expect an editor to write an intro to the reviews that says "you need to do points 1, 2, and 4, but point 3 is too much work for you".

    I've done this and will continue to do so when warranted. It's not unreasonable to expect the handling editor to play an active role in the process. I view it as their job. Certainly you shouldn't write that kind of thing unless you understand what's being asked of the authors, but in my experience that's the less common occurrence.

    IMO it's the unwillingness of editorial board members to take an active role that has led to the rise in unreasonable reviewer demands.

  • Grumble says:

    DM:
    "Once authors understand that demands for endless amounts of additional work is in fact a rejection, things will change."

    Fixed that for ya.

  • jmz4 says:

    " If postdocs are not getting trained in reviewing and thus doing reviewing wrong, that's a problem with the training that those postdocs are getting, not with the process of peer-review!"
    It is "training" in the loosest sense: an opportunity to figure it out yourself. Which is why I'd argue that more structure from the journals would be helpful. Something where, to use your example, hypothesis Y has to be clearly identified. It should basically be a form that has a spot for "hole in argument" and "plausible experiments to fill it". I think a major source of spurious requests could be headed off by just making people justify them a bit more.

  • Laurent says:

    I often experience reviews labelled as "major revisions" which are actually minor revisions once you read the actual reviewers' comments. Seems like there's been a shift toward overly cautious framing of reviews by editors over the last decades (I also got a "minor revisions" which was merely rewording one sentence and fixing two typos in the whole manuscript).

    The idea of reducing review outcomes to three cases has at least one interesting prospect for authors that I've not seen discussed here yet (but maybe I've read the thread too quickly): we can make the decision to submit elsewhere more quickly, and that will always be better than waiting another six months before getting a rejection because the "major" changes ultimately didn't satisfy the reviewer (I assume, too often reviewer #2).
    Resubmissions are likely to be sent to the same reviewers, so changing the outlet also means getting a different perspective from other peers. Most often that is actually good, especially since higher-IF journals tend to make poor reviewer choices (in my personal experience, the only times I got poor comments on statistics, from reviewers who seemed to lack the basics, were at these journals --I'm not speaking of glamour-level outlets, only higher community journals, and I wonder whether statistics are actually the real issue at stake).

    I'm obviously from a different field than most here (ecology; we are almost never asked to add experiments, because we already have multiyear datasets). Most journals are basically equivalent to each other, save IF numbers beyond the dot. Saving the time spent waiting for a final reject is definitely of interest, because the second round rarely produces comments that improve the manuscript, contrary to the first round.

  • David says:

    @ Laurent
    "major revisions which are actually minor revisions"

    Do journals need to do a better job of distinguishing these two? I recently saw a re-submission where the comments from my first review were nearly identical to another reviewer's, but I called it minor and the other reviewer called it major. The AE picked major, although I'm not sure it made a difference in the end. The paper was quickly revised and resubmitted. Journals I've reviewed for seem to give a one sentence (or fragment) description of what major (and minor) revision means.

    @DM
    I'm curious why you think it is imperative to have new reviewers for a major revision submission. I would think that if you addressed the reviewers' concerns, then you'd have a higher likelihood of acceptance with the original reviewers than new ones. Is that a bad assumption, or is it something else (new reviewers give new comments which improve the quality, or maybe the author isn't fully addressing the original concerns)?

  • drugmonkey says:

    That's the whole point. To break the system of this stupid cycle of demanding lots of new work. Makes reviewers and editors focus on current manuscript, not pipe dream of manuscripts Future.

  • Laurent says:

    @David
    "Do journals need to do a better job of distinguishing these two?"

    Dunno. With a single category you wouldn't have to delineate with precision anymore. My own perception is that if fixing the text can be done within a workday or two, then it certainly fits in the "minor revisions" category, but editors clearly seem to have another definition.

    As for likelihood of acceptance after revision, I don't know which way it goes. Sometimes first impressions cannot be overcome even after working out a reviewer's own prior concerns. In my experience, renewing the reviewers has always turned out much better than keeping the same ones. Maybe I've been cursed with bad initial reviews though... 🙂

  • qaz says:

    @Laurent - historically, "minor revisions" meant that it did not go back to reviewers and the editor would judge whether the response was adequate, "major revisions" meant that it did go back to reviewers.

    As has been well established in both grant and paper reviewing processes, a good response to a review is going to give you a better chance at getting published/funded than being faced with ever-changing goal posts from different reviewers.

    I still fail to see the problem here. I guess DM doesn't like people telling him what to do and seems to want reviewers to simply give him an up or down vote, but I prefer a chance to respond to a reviewer's comments. Sometimes our response is "that's beyond the scope of our claims"; sometimes our response is "here's a different solution than the one you recommended"; sometimes our response is to drop the claims; sometimes our response is "here's this massive amount of work you correctly demanded"; and sometimes our response is "damnit, the reviewer is right and we can't get there from here." If the reviewer wants a lot of work, they need to justify it. In any case, it's all about actually writing a good response to the reviewers.

  • anon says:

    I think DM's suggestion would work, since in my field (observational astronomy) asking for new experiments, which would be new observations in our case, is unheard of (although I am only a postdoc and haven't written or reviewed tons, so take that with a grain of salt). New observations means re-competing for telescope time (which would be similar to submitting a new mini-grant) that you may not get... and then your sources may be only visible at certain times of year etc. Then there is the funding/time pressure to actually do the observations... Asking for different analyses of existing data is okay, but not asking for *completely new* data/observations (i.e. experiments). I could imagine this might differ for theorists who do computational "experiments."

    So when I've reviewed, the categories are basically:
    Accept - typos only.
    Revise and resubmit, but I as reviewer don't need to see it again - minor technical problems with concrete, easy suggestions on how to improve them that the editor can assess, and/or suggestions that aren't strictly necessary but would improve things.
    Revise and resubmit, and I *do* need to see it again - the problems are significant enough, or the path to correcting things isn't immediately obvious, so someone with technical expertise needs to look at it again.
    Reject because it doesn't fit the journal, but the science is good.
    Reject because something is fundamentally flawed and the paper requires a complete re-write or re-analysis.
    Strict accept has never happened to me (reviewing or submitting), but I can imagine cases where there is a large collaboration and many stages of formal internal collaboration review that would make a manuscript pretty much perfect. The two revise-and-resubmit categories happen almost all the time, but as the author you have no idea which of the two sub-categories you're in (which is nice, because it generally means the authors will treat all of your suggestions seriously). Full rejections for flaws rarely happen that I know of (but I only hang out with good scientists ;-P and the full rejections I have seen have been cases of reviewer prejudice), and full rejections for misplacement happen frequently for the glamour mags.

  • Draino says:

    Example of editors not doing their job:

    Major revisions requested. We do the major revisions and send back in 90 days. All reviewers satisfied except one who wants us to change the labels on panel 3 in Figure 4. No problem, takes about 30 minutes, we send back to the editor. The editor, instead of making an actual decision, decides it should go back to the reviewer to see if he likes the new labels. Reviewer takes 2 MONTHS to reply back saying it's fine. Why couldn't the editor have decided on her own, or just contact the reviewer directly instead of pumping it through the slow as molasses system? I think that journal employs a bunch of grandmas and secretaries instead of professional editors.

  • David says:

    @Draino,
    "I think that journal employs a bunch of grandmas and secretaries instead of professional editors."

    The journals I've been affiliated with (as an author, reviewer, and AE) do not have professional editors. The AEs and the EiC are researchers who volunteer. The EiC may be paid, but as an AE, I don't even get a free copy of the journal. As a result, journal work gets pushed down the priority list. I know it sucks for authors, but my day job comes before my volunteer work. I think most of us agree it's not a good system.

  • MorganPhD says:

    We recently got a 235 day mouse time course experiment as a suggested experiment from a reviewer because "it would be interesting to see what would happen". The Editor said we should do it for the paper to be considered and, of course, we did because it's a SCN-type paper. The longest time point in the paper originally was 45 days...

  • JL says:

    @anon, observational astronomy is not the only sub field where there are major complications for doing more experiments. The difference is more that you have a nicer way of arguing that "it's not my fault". New experiments, even computational ones, often incur substantial costs, require people changing their plans and postponing moving with their girlfriend, etc.

    Adding experiments is a mess. I almost never ask for them as a reviewer. I review the paper in front of me. If the data does not support the claims, then take out the claims. If this dilutes the paper too much, then I recommend rejection and explain why. If the authors want to figure out the experiments needed to support their claim, that's their problem. If I have the time, and the authors didn't annoy me by trying to hide things and make bold unsupported claims, I will give them suggestions. My main goal is to help the editor decide if the paper is scientifically sound and the claims supported.

  • drugmonkey says:

    This is the type of utter ridiculousness my comments are addressing.

    [eta: this referred to the 235 day mouse expt]

  • JL says:

    DM, that's a comment you could put in every thread.

  • Grumpy says:

    I agree with DM here. Requesting specific experiments be done is condescending and beyond the role of a reviewer IMO. Also generally like the idea of having at least one new reviewer in round 2.

    Another interesting question is why this request-for-experiments thing seems, in my experience, highly concentrated in biomedical fields and relatively rare in experimental physics. My guess is that it is related to slightly less IF-hunting in physics due to the wide use of preprint servers. A less generous interpretation would be that biomedical experimentalists are more likely to make unsupported claims...

  • I agree with DM--things have gotten out of hand with referee requests, and editors just pass them on without consideration. Now that it is embedded in the reviewing culture, younger scientists think that reviewers are supposed to ask for more data, which perpetuates the problem: new reviewers wanting to be "thorough" ask for more and more, and things keep getting worse. Shaking things up in a big way may be the only way to end it.

    I suspect, though, that the slow minor revisions category would become the current major revisions, since there isn't really much difference between the two anyway. The editors are the ones who need to make it clear that authors don't need to do another year of experiments to get their paper accepted if the request is not to fix a fatal flaw in the manuscript. And we definitely need to get rid of reject but resubmit. That is just a transparent ploy to keep publication times low while tuning the manuscript in a direction the editor likes.

    The job of the reviewer is to decide if the experiments were done competently, the data was analyzed properly, and the conclusions are supported by the data presented. If there are any issues with unsupported claims, it is for the authors to figure out how they want to fix it: more experiments, putting in data they didn't include, citing literature they thought was more widely known, or whatever. The reviewer is supposed to review the manuscript, not help direct the research.

    As I said in the discussion on Odyssey's blog, in my field, it used to be rare for reviewers to ask for additional experiments, and when experiments were suggested, they were usually additional control experiments. Now, I almost always get asked for more data, and many of the requests are for scope extending experiments often irrelevant to the manuscript as written. We need to get rid of that practice.

  • Jonathan Badger says:

    @draino
    "Why couldn't the editor have decided on her own, or just contact the reviewer directly instead of pumping it through the slow as molasses system? "

    Okay for the first, but not going through proper channels for the second is definitely *not* okay -- not to mention that it wouldn't really speed anything up anyway. The review process is slow because reviewers don't respond quickly, which is something authors don't seem to get -- 99% of the time, when an author asks "what is the status of my manuscript?" the answer is "out for review".

  • qaz says:

    All of you who have had reviewers ask for extra experiments, have you ever tried saying "that's beyond the scope of our statements" or emailing/calling the editor and arguing that those experiments would not change the conclusions of the paper?

    It really sounds to me like people are expecting editors to be proactive. In my experience, editors are usually too busy to be proactive, but they are very capable of adjudicating a dispute between authors and reviewers.

    You shouldn't have to do the reviewer's suggested experiments, but you should have to answer the reviewer's concerns (presumably the experiments are driven by actual concerns). [One important thing that drives a lot of this (I think) is that it has now become a "valid concern" to say "this is a perfectly fine experiment, but is not cool enough for this journal". If that's the concern driving the suggested experiments, then I recommend trying to address the concern of coolness directly.]

  • Laurent says:

    @Jonathan Badger
    "The review process is slow because reviewers don't respond quickly. Which is something authors don't seem to get "

    Recently I've started giving priority to reviewing over my own daily tasks. Doing so often allows me to send the review within two work days (a few more days if the manuscript presents stronger flaws). If it's harvest time or field season, I simply decline the review. (Fortunately, I'm not asked to review too often.)

    Colleagues say that's crazy - either I'm losing precious time, or they could not do the same for some other curious reason (the argument is really about prioritizing, actually). I answer that the day everybody does that, we'll all get reviewed in one week, save a few exceptional situations. Everybody wishes this were true, but nobody's making the simple decision.

  • JL says:

    @ Laurent,
    We get to pick when to accept new reviews, but not revisions. I usually take longer on revisions simply because they happen at a time of the authors' choosing, not mine. People often want to submit a manuscript right before a conference, probably so that the review time is counting while they are busy traveling. Surprise! Reviewers are your colleagues and likely going to the same conference. A major journal in my field does not even bother asking if we agree to review a revision. You just get an email with the link and a two-week deadline.

    I have started tracking my review times, and I noticed two things: i) I actually take longer to do the reviews than I thought; ii) I return reviews in about 50% of the time it takes for me to receive reviews back. I am already contributing to making the system faster, while donating my time.

  • Laurent says:

    @JL
    You're right, I've missed the point mostly because manuscripts I review almost never fall into the revise category (the initial point of the post).

    That said, my latest came that way. I agreed to review a revision early one morning. I checked the response to the (three) reviewers' comments, and every point had been addressed correctly or argued against reasonably and decently. I reread the manuscript once. I was a little bothered that my own comments had been interpreted in the most limited way, but realised the manuscript could have a go despite this. I wrote a single-line review (basically, accept). The whole process did not take more than fifteen minutes (I'm not saying it should be that fast, just that it happened this way).

    If concerns have been addressed correctly, evaluating the revision should not take as much time as the first round. But then again, I'm in a field where you're never asked for more experiments to affect the story line. Sometimes you're asked whether you have more data (and tests) to provide stronger evidence to make the case (most often a consequence of salami slicing), but it's usually never something that translates into altering the manuscript that much.

  • bacillus says:

    Reviewing is an art, not a science. There is no formula to follow beyond expertise, instinct, and a "do unto others" mindset. For the record, I am in the habit of requesting additional animal experiments from authors, but only when they are critically required, can be done in less than 3 months, and are within their domain to accomplish.

    Publish long enough and you're bound to experience the Sergio Leone triumvirate. However, some things I've seen with increasing frequency are: i) authors who just take their papers to other journals complete with all of the original flaws; ii) authors who answer all comments that do not need additional experiments, but try to bluff their way out of doing any actual additional lab work (more often than not they turn into i); iii) editors who completely ignore any of the suggestions in my review.

    It really is getting to the stage where I wonder why I bother to review anymore, since everything is capable of being published given enough guile and/or energy.

  • JL says:

    @ Laurent, most reviews take me more than 15 minutes. I hope reviewers of my papers also take more than 15 minutes.

  • Emaderton3 says:

    @ qaz

    I often use "the reviewer brings up a great point, but it is beyond the scope of this work because . . ." I have successfully used this in rebuttals and/or when first talking to editors prior to resubmission. I always feel you have to put forward a good-faith effort if you believe certain suggestions will make your paper better, but I also feel you can hold the reviewers accountable if their demands are not realistic.

  • drugmonkey says:

    editors who completely ignore any of the suggestions in my review

    If this happens to you habitually, this is pretty clear evidence that your demands for additional experimentation are not justified. You should use this as feedback to calibrate your reviewing behavior.

  • drugmonkey says:

    Surprise! reviewers are your colleagues and likely going to the same conference.

    also see: start of semester teaching, grant deadlines, summer or winter holidays.....

  • drugmonkey says:

    have you ever tried saying "that's beyond the scope of our statements"

    absolutely. and even a mere "beyond the scope of this study".

    It works a lot of the time. Most of the time, actually. From that, one concludes that reviewers are poorly calibrated and/or jackholishly asking for more work just because they can, not because there is really any need.

  • jmz4 says:

    "have you ever tried saying "that's beyond the scope of our statements""
    -A variant of this that my PI whipped out to great effect is to indicate that the requested data exist or the experiments are underway, and that they form the basis of another story in the lab. Reviewers seem less inclined to ask that the experiments be included if they think doing so would jeopardize another person's first-author paper by cannibalizing work in the lab. It was also true, and a perfectly valid response.

    Obviously this only works for truly extraneous experiments of the variety that start with "It would be interesting to test..."

  • ecologist says:

    Here's a thought relative to this problem. When you lead and/or participate in "discussions" of papers in grad seminars or research group meetings, what counts as valid critical commentary, and what kinds of criticisms get criticized themselves? In my experience, the idea of "critical" discussion often devolves into students dragging out everything they can possibly imagine that might, perhaps, possibly be wrong with a paper, without being required to argue exactly how that critique actually affects the claims and conclusions of the paper in question. They are seldom asked to defend the relevance of their critique.

    It's easy to see a link between how we train students and how they eventually behave as reviewers.

  • A Salty Scientist says:

    @ecologist That's an excellent point. For our journal club discussions, we really need to arm our trainees with the ability to understand which weaknesses are trivial vs. the moderate-to-major weaknesses that actually affect the claims.

    As for reviewer demands for extra experiments, getting rid of "major revisions" will not stop them. We'll just get reviews demanding extra experiments alongside a rejection, and then rinse and repeat at another journal. The only way out is a culture change, where editors tell authors to ignore the calls for more experiments, and reviewers just stop with this shit.

  • Laurent says:

    @ JL,
    "most reviews take me more than 15min. I hope reviewers of my papers also take more than 15min."

    Yes, usually they do. This paper was already a good first shot, the three reviewers' comments were rather easy to implement, and since I'm very curious, I had already read them through carefully following the first decision (revise).

    The authors wrote a very precise response to reviewers, either clearly indicating the changes prompted by the comments in the manuscript, or giving very clear reasons why they would not consider developing some of the issues raised further (in this case, another manuscript specifically dealing with the question at stake).

    So this was just about learning about the changes, and making sure the new version flowed well, which it did. I only had to consider whether the things left unchanged critically reduced the interest of reading the paper, and they did not (even if I'd have personally preferred a less dry approach in the intro and discussion).

    Don't forget this was already the second round, not the first. This is indeed the advantage of keeping the same reviewers between the first try and the revised version: it spares the time of discovering the study anew.

    This was also the only time in a decade that it occurred so quickly; usually revisions take me more time to evaluate.

  • Mahmoud says:

    In my humble opinion, when reviewers at high-IF journals decide that major revisions are required, they have really invested their time in making my paper more robust and readable. That is what happened in my case: their comments were almost always reasonable and helpful. I have not yet encountered a reviewer who requested major revisions just for the sake of making my life hard, and I hope I never meet such a reviewer. As well, one should never forget that those "free tips" could increase the odds of acceptance at another, comparable journal a little further down the list.
