I think about it primarily in the form of career stage representation, as always. I like to get reviewed by people who understand what it means to me to request multiple additional experiments, for example.
and I responded:
Are you implying that differential (perceived/assumed) capability of the laboratory to complete the additional experiments should affect paper review comments and/or acceptance at a particular journal?
I'm elevating this to a post because I think it deserves robust discussion.
I think that the assessment of whether a paper is 1) of good quality and 2) of sufficient impact/importance/pizzazz/interest/etc for the journal at hand should depend on what is in the manuscript. Acceptance should depend on the work presented, for the most part. Obviously this is where things get tricky, because there is a critical difference here:
@drugmonkeyblog depends: additional = needed to support conclusions, or additional = I wish you were interested in my thing.
— Michael Hendricks (@MHendr1cks) February 25, 2016
This is the Justice Potter Stewart territory, of course. What is necessary to support and where lies the threshold for "I just wanna know this other stuff"? Some people have a hard time disentangling their desire to see a whole 'nother study* from their evaluation of the work at hand. I do recognize there can be legitimate disagreement around the margin but....c'mon. We know it when we see it**.
There is a further, more tactical problem with trying to determine what is or is not possible/easy/quick/cheap/reasonable/etc for one lab versus another lab. In short, your assumptions are inevitably going to be wrong. A lot. How do you know what financial pressures are on a given lab? How do you know, by extension, what career pressures are on various participants on that paper? Why do you, as an external peer reviewer, get to navigate those issues?
Again, what bearing does your assessment of the capability of the laboratory have on the data?
*As it happens, my lab just enjoyed a review of this nature, in which the criticism was basically "I am not interested in your [several] assays, I want to see what [primary manipulation] does in my favorite assays" without any clear rationale for why our chosen approaches did not, in fact, support the main goal of the paper, which was to assess the primary manipulation.
**One possible framework to consider. There are data on how many publications result from a typical NIH R01 or equivalent. The mean is somewhere around 6 papers; the interquartile range is something like 3-11. If we submit a manuscript and get a request to add an amount of work commensurate with an entire Specific Aim that I have proposed, this would appear to conflict with expectations for overall grant productivity.