Thought of the day

Dec 05 2014 Published by under Replication, ReplicationCrisis, Science Publication

One thing that always cracks me up about manuscript review is the pose struck* by some reviewers that we cannot possibly interpret data or studies that are not perfect.

There is a certain type of reviewer who takes the stance* that we cannot in any way compare treatment conditions if anything about the study violates some sort of perfect, Experimental Design 101 framing, even if there is no reason whatsoever to suspect a contaminating variable. Even, and this is more hilarious, when there are reasons in the data themselves to think that there is no effect of some nuisance variable.

I'm just always thinking....

The very essence of real science is comparing data across different studies, papers, paradigms, laboratories, etc and trying to come up with a coherent picture of what might be a fairly invariant truth about the system under investigation.

If the studies that you wish to compare are in the same paper, sure, you'd prefer to see less in the way of nuisance variation than you expect when making cross-paper comparisons. I get that. But still....some people.

Note: this in some way relates to the alleged "replication crisis" of science.
__
*Having nothing to go on but their willingness to act like the manuscript is entirely uninterpretable and therefore unpublishable, I have to assume that some of them actually mean it. Otherwise they would just say "it would be better if...". Right?

8 responses so far

  • pinus says:

    oh, did I send you some of my recent bs reviews?

  • drugmonkey says:

    oh, it isn't just me?

  • jmz4 says:

    Yes! Especially given the amount of time addressing those concerns can take. Sometimes I feel like the reviewers think very little of the people that read these journals. Since you know, they don't trust them to be able to interpret anything for themselves.

  • Juan Lopez says:

    One of my previous bosses had this thing when reviewing papers: if he could think of any better way to do the experiment or test the hypotheses, the paper was to be rejected. No matter that those ideas were not practical or needed five times the effort or cost too much. A paper could eventually be accepted if they acknowledged that it was severely limited because what really needed to be done was what he was doing. I know more than a few people who left the field because of him. Sad.

There are many more McKnights waiting for the microphone. Glad they don't have it.

  • drugmonkey says:

    I just don't understand how those people missed the way that science advances.

  • jmz4 says:

    "I just don't understand how those people missed the way that science advances."

    -By being curious in very little outside of their immediate field of interests/accomplishments. I used to hate the history of science parts of grad classes, now I really wish I had gotten more of it.

  • E rook says:

This happens to me a lot. In grant apps too. The data in one figure used two doses of something, but another figure used only one dose (twice the effort & cost to get essentially the same result), therefore reject. Or the preliminary data used this method to test/publish about gene X, but you're studying gene Y..... it is unclear what you are trying to communicate in this figure, which is unhelpfully titled, "proof of principle to use this method."

  • The Other Dave says:

    Young reviewers. Some people feel like being hypercritical makes them smarter. Or that they have been entrusted with the great responsibility of guarding the gates to the scientific literature, and need to take it uber-seriously.

After a while, we relax and just decide whether or not it's something that would be useful in print. It's not up to the reviewer to decide whether it's appropriate for the journal. That's the editor's job. Editors who ask reviewers to decide that are bad, lazy editors.
