A now somewhat older post of drdrA's over at Blue Lab Coats covers a recent manuscript rejection received by the laboratory. In discussing the reviewer criticisms of the manuscript, the post alludes in several places to reviewers asking for a substantial amount of additional work to be conducted. I picked up on this in a brief snark; however, the critical issue was better expressed by commenter BugDoc:
I'm really concerned about what appears to be a growing trend for reviewers to ask for years worth of revisions, which could often be an additional paper. We will sometimes pull out the old standby "...beyond the scope of this paper", but I'm curious to know if there are other rebuttal strategies with which to deflect reviews aimed at having you compress the work of an entire career into one paper.
I concur with the first sentiment, although I'd probably substitute "really, really, really annoyed" for "really concerned" if I were in a venue in which I was inhibited from expressing myself in physioproffian terms.
As PhysioProf already noted in his piece responding to this post of drdrA's, one cannot get too caught up in what the reviewers have to say about your paper. Even if the Editor initially goes with a "rejection" decision, said Editor is still often open to a substantive plea for reconsideration. In any case, as Orac noted:
You don't always have to do what the reviewers demand when you resubmit. There have been more than one occasion when I've seen reviewer requests that were clearly unreasonable and, in my response, I politely said why we weren't going to do what the reviewers asked. If you can justify why you didn't do what the reviewers ask with a reasonable explanation, you can often get your paper published without doing a whole bunch of extra experiments.
Exactly. Your ultimate audience is the Editor, not the reviewers, and I would hope most trainees learn this in the course of their first one or two first-author submissions. So just because a reviewer recommends a large number of new experiments that would greatly improve the paper, the authors do not necessarily have to comply in order to get the paper accepted by the editor. Nevertheless, it is indeed the case that reviewers seem quite fond of asking authors to add a substantial amount of new data to a manuscript that the authors clearly thought described a body of work worthy of publication. In many cases this is an amount of additional work that the authors find not just irritating but unreasonably demanding and out of step with career and funding realities for PI and trainees alike. This latter concern pushes YHN's buttons.
First, a confession. As a reviewer of papers, I've been known to ask for more data myself. To intimate that a paper wasn't up to publication quality in my view simply because of the limited scope of the data presented. Herein lies the sharpest point of the discussion between reviewers and authors, does it not? Because science is an ongoing process of incremental discovery, no finite set of experimental results ever really exhausts a question. It is therefore quite natural that different individuals would come to different conclusions about what amount of progress along a line of investigation represents "a paper". (Discussion over the so-called "Least Publishable Unit" is an old tradition in academia, of course, but I was a bit surprised in writing an older post that Wikipedia has an entry for the LPU.) So despite the fact that I probably lean toward a least-publishable-unit approach myself (from a certain perspective, of course), I do have some threshold for a publication-worthy amount of data.
Of course, we all couch this to ourselves in terms of different things, but something in the nature of "a complete story" runs through most of us. Which is utterly laughable as a workable standard. We very quickly run into questions of arbitrary taste, tradition and levels-of-analysis. And, of course, the worst possible reason for demanding more data: "Well, I include five times more data in each of my publications, so everyone else should have to as well." What is "traditional" in your given subfield or for a given journal is perhaps about the best we can do. Trouble is, what if that changes over time?
As I expressed in a comment to drdrA's post, I believe that there are some ways in which demands for increasing the amount of data, or the scope or types of assays included, are a nasty mechanism by which the powerful in science perpetuate their own position. Note that I do not say that this is even an explicit process, and obviously it does not apply to each and every person who gets a paper published in a GlamourMag. To put it as neutrally as possible, publication in the top general science magazines is an arms race in which the constant one-upsmanship of breadth of data, inclusion of the latest and greatest techniques and sheer depth of the edifice of prior work on which the end product must be based tends to raise the bar for what is required for a paper to be considered sufficiently "complete".
It used to be the case that if you knocked out a gene in a mouse and had one phenotype, you were good to go. Now? Well, you'd better have the conditional knockout isolated to particular cell populations, generate at least three or four systems' worth of whole-organism phenotyping, silence, rescue, do a bunch of increasingly irrelevant so-called "mechanism" experiments in in vitro systems and gene-array everything under the sun. OK, I perhaps exaggerate. But not much. And the point is that there are structural and technical "requirements" for a paper to be acceptable to a top-flight journal which have very little to do with the real quality and significance of the work.
The thing for those of us in subfields which have been less affected in this way to consider is whether it can happen here. It assuredly could. In behavioral pharmacology, for example, would we look to an FDA-approval-like standard and say "Well, that result in your rodents is very nice and all, but we really need some backup in a second and larger species like dog, swine or nonhuman primate"? Ridiculous, right? But really, what is the difference? So the molecular nutters start insisting on gene array this, no, no, that's old hat, Chippie-Chip that, whoops, what's all this Solexa sequencing? blah, blah. New tools become available, they do kewl stuff, people scramble to use them (application is secondary, of course) and the next thing you know, you can't publish very highly without them! So for these aforementioned behavioral pharm studies in now two species (including one of the expensive ones), well, now you'd better have PET occupancy data too. I mean, c'mon, it makes sense, doesn't it? There is very little argument that these sorts of additions would make the science that much stronger, more impressive, more general and all that good stuff. We could go there; we just haven't. How about your comfy little subfield, DearReader?
This is where it strikes me that certain intentional, or even unintentional, processes which demand that more and more and more go into a paper tend to constrain ever more tightly who can publish in a certain journal. Constraining on the basis of laboratory resources rather than brilliant insight or clever experimental design. And this is not necessarily a good thing.
One of the more frustrating aspects of the debate over "scope" is the push and pull between paper review/acceptance and grant review/funding. For most people the two are linked in fairly direct ways. You can only do the science you can afford in terms of grant dollars paying for supplies and personnel and institutional support in terms of access to big-ticket equipment or other resources. In order to get the grant dollars from the NIH, you have to publish papers and the nature of those papers counts. The higher the Impact Factor of the journals you publish in, the easier the grant money is to pry loose (on a field-normalized basis). If your papers are considered more substantial, deeper, broader or whatever, the easier the grant money is to pry loose. The hotter and more exciting your work...well, you get the picture.
Lest this seem like an us/them screed, let us draw back and recognize that on every level there is some of this at work. Inevitably, our conception of what is a complete paper or a good paper is shaped by what we are doing ourselves. After all, people should have to go to the same trouble as we go to, at a minimum, to deserve that paper, should they not? Or be at least as good as us to deserve a grant, no? It is only fair.
You see where this leads, though, don't you? I mean, even at my modest level, I am capable of pulling off a breadth of work that I couldn't have done when I was just starting my independent career. Should I hold New Investigators to my current standard? Is that really fair? Or flip it around: should we hold much more senior people to an even higher standard just because they are so much more capable? WetEar Prof, you get a three-experiment paper, but Geezer Prof, you'd better have a dozen in there!
The fact that the last handful of papers I reviewed suffered severely from a lack of extensive content has nothing to do with this. Neither does the really sweet experience I've been going through, with the grant reviewers wanting to know where some pubs are and the reviewers of a related paper wanting an R01's worth of data for acceptance. Nothing at all. I swear.