# "...would be beyond the scope of this paper"

A now somewhat older post of drdrA's over at Blue Lab Coats covers a recent manuscript rejection received by the laboratory. In discussing the reviewer criticisms of the manuscript, the post alludes in several places to reviewers asking for a substantial amount of additional work to be conducted. I picked up on this in a brief snark; the critical issue, however, was better expressed by commenter BugDoc:

I'm really concerned about what appears to be a growing trend for reviewers to ask for years worth of revisions, which could often be an additional paper. We will sometimes pull out the old standby "...beyond the scope of this paper", but I'm curious to know if there are other rebuttal strategies with which to deflect reviews aimed at having you compress the work of an entire career into one paper.

I concur with the first sentiment, although I'd probably substitute "really, really, really annoyed" for "really concerned" if I were in a venue in which I was inhibited from expressing myself in physioproffian terms.

As PhysioProf already noted in his piece responding to this post of drdrA's, one cannot get too caught up in what the reviewers have to say about your paper. Even if the Editor initially goes with a "rejection" decision, said Editor is still often open to a substantive plea for reconsideration. In any case, as Orac noted:

You don't always have to do what the reviewers demand when you resubmit. There have been more than one occasion when I've seen reviewer requests that were clearly unreasonable and, in my response, I politely said why we weren't going to do what the reviewers asked. If you can justify why you didn't do what the reviewers ask with a reasonable explanation, you can often get your paper published without doing a whole bunch of extra experiments.

Exactly. Your ultimate audience is the Editor, not the reviewers, and I would hope most trainees learn this in the course of their first one or two first-author submissions. So just because a reviewer recommends a large number of new experiments that would greatly improve the paper, the authors do not necessarily have to comply in order to get the paper accepted by the editor. Nevertheless, it is indeed the case that reviewers seem quite fond of asking authors to add a substantial amount of new data to a manuscript that the authors clearly thought described a body of work worthy of publication. In many cases this is an amount of additional work that the authors find not just irritating but unreasonably demanding and out of step with career and funding realities for PI and trainees alike. This latter concern pushes YHN's buttons.
First, a confession. As a reviewer of papers, I've been known to ask for more data myself; to intimate that a paper wasn't up to publication quality in my view simply because of the limited scope of the data presented. Herein lies the sharpest point of the discussion between reviewers and authors, is it not? Science is an ongoing process of incremental discovery, and no finite set of experimental results ever really exhausts a question. It is therefore quite natural that different individuals would come to different conclusions about how much progress along a line of investigation represents "a paper". (Discussion of the so-called "Least Publishable Unit" is an old tradition in academia, of course, but I was a bit surprised in writing an older post to find that Wikipedia has an entry for the LPU.) So despite the fact that I probably lean toward a least-publishable-unit approach myself (from a certain perspective, of course), I do have some threshold for a publication-worthy amount of data.
Of course, we all couch this to ourselves in different terms, but something in the nature of "a complete story" runs through most of us. Which is utterly laughable as a workable standard. We very quickly run into questions of arbitrary taste, tradition and levels-of-analysis. And, of course, the worst possible reason for demanding more data: "Well, I include five times more data in each of my publications, so everyone else should have to as well". What is "traditional" in your given subfield or for a given journal is perhaps about the best we can do. Trouble is, what if that changes over time?
As I expressed in a comment to drdrA's post, I believe that there are some ways in which demands for increasing the amount, scope or type of data can be legitimate. And this has nothing whatsoever to do with the reviewers of a related paper wanting an R01's worth of data for acceptance. Nothing at all. I swear.

• Coturnix says:

If you are out of money, out of the lab, out of the career trajectory, yet have old data that others should see, shouldn't they get published somewhere?

• Becca says:

totally off topic (not to say your post wasn't interesting and informative as usual, DM!)...
CAGE MATCH! Sunday, Sunday, SUNDAY!
See WetEar Prof Vs. Geezer Prof in a fight to the finish! No submissions allowed, KO only!
Next on "Who Gets Funded?"
(it's probably time for me to go home and get some sleep)

• whimple says:

It's a good topic, and "beyond the scope of this paper" is a perfectly reasonable thing to say.
On the flipside, when I'm reviewing papers, I tend *not* to ask for additional data. If I feel crucial controls are lacking, that's one thing, but just asking for *more* is not cool. Instead, if I feel *more* is needed, rather than *better*, I recommend rejection and publication in a less prestigious journal.
Conversely, when I'm publishing papers, I tend not to comply with requests for *more*. I try to explain why what I have is a good story as it stands to the editor (as DM astutely points out, the editor's opinion is the ONLY one that counts). If that doesn't work, I take my paper and walk. Publishing now somewhere (anywhere) is so much better than chasing the fool's gold that is trying to please a capricious reviewer.

• Sigmund says:

The last paper I sent in resulted in comments where the reviewers took absolutely contradictory stances - one asking for more work done on one part of the topic and the other asking for less data on this point. The editor told us we would have to comply with ALL the points of the reviewers if we wanted the paper accepted.
We actually did the necessary experiments and wrote a new version of the manuscript, along with a letter pointing out that we could not really both extend and delete the particular contentious point. The reviewers then accepted the manuscript, but the editor himself came up with a completely new point that none of the reviewers had raised in their comments, and we had to start a new series of experiments that took about three months to complete, with the entire process taking over a year from submission to acceptance.
And this was from a fairly modest impact factor journal.

• PhysioProf says:

The last paper I sent in resulted in comments where the reviewers took absolutely contradictory stances - one asking for more work done on one part of the topic and the other asking for less data on this point. The editor told us we would have to comply with ALL the points of the reviewers if we wanted the paper accepted.

There is a high-impact journal in my field that pulls this shit all the fucking time. "This is a solid study, but in order to merit publication we would need to see more experiments that get at mechanism." I have, on occasion, spent over a year on a multiple-round back-and-forth with these fuckers doing more and more and more experiments, only to end up rejected at the end.

So the molecular nutters start insisting on gene array this, no, no that's old hat, Chippie-Chip that, whoops, what's all this Solexa sequencing? blah. blah. New tools are available, they do kewl stuff, people scramble to use them (application is secondary of course) and next thing you know, you can't publish very highly without them!

My own research is heavily leveraged off of novel tool development within my own lab, so I mostly avoid this, and do not go chasing after the latest, greatest techniques that--oh, by the way--coincidentally seem to involve buying expensive equipment or paying exorbitant licensing fees to use.

• BugDoc says:

Very very helpful perspective, DM. My graduate advisor's approach was to do everything you possibly can to address reviewers' comments regardless of whether you think it's a good idea or not, so I've obviously internalized that philosophy. However, I'm now rethinking that approach, based on the reviewer phenomenon described in the post. When I review manuscripts, I ask for the experiments or controls that are reasonably needed to support the conclusions that the authors themselves made, rather than asking the authors to extend their study to make claims that I think they should make.
Having said that, regardless of how unreasonable reviewer requests might be....many of the editors I deal with are colleagues or acquaintances (or people that I would like to develop a working acquaintance with), and thus I thought I should try to avoid getting a reputation as the type that tries to get out of doing any additional work. Obviously there's a balance to be struck, and I'm getting the sense from the comments here and at the previous "Rejection" post that I've probably been too conservative with rebuttals, and that others have been pretty successful setting limits on what they are willing to do for revision.

• neurolover says:

Beautifully insightful post, DrugMonkey. I think part of the problem, though, is that the power labs are doing more work. Perhaps we're seeing the slow death of small science? I've seen what you describe, jokingly in your knockout example, and the CNS manuscripts that come out of them are arguably better, no? And, of course, impossible for a new investigator to accomplish, requiring as they do an army of high level researchers who can do the different approaches well.
I've heard the same thing from reviewers of grants, even those who really want to help the young investigators along -- when they're looking at the proposals, there's just no competition between what the new guy is proposing compared to the established guy. Applying the same standard to everyone, though, means that the new guy is never going to get past being a new guy.
Young investigator grants are a help, but they don't let you attract the personnel to do the research army work, even if you have some money.
This is where I do worry that the system is broken -- that the PI centered lab is dying off, and that we haven't figured out how the big PI lab works in the long run yet. Maybe we should be looking for some clues from physics, where I think we might be heading.


• DrugMonkey is an NIH-funded researcher who blogs about careerism in science. And occasionally about the science of drug use.
