Archive for the 'Peer Review' category

Are your journals permitting only one "major revision" round?

Skeptic noted the following on a prior post:

First time submitted to JN. Submitted revision with additional experiments. The editor sent the paper to a new reviewer and he/she asks additional experiments. In the editor's word, "he has to reject the paper because this was the revision."

This echoes something I heard only recently from a peer: a journal editor said a manuscript was being rejected due to* a policy of not permitting multiple rounds of revision after a "major revisions" decision.

The implications are curious. No journal editor has ever told me, when asking me to review a manuscript, that this is their policy.

I will, now and again, give a second recommendation for Major Revisions if I feel like the authors are not really taking my points to heart after the first round. I may even switch from Minor Revisions to Major Revisions in such a case.

Obviously, since I didn't select the "Reject" option in those cases, I was not writing my review with the understanding that my recommendation was in fact a "Reject" rather than a "Major Revisions".

I am bothered by this. Journals are probably adopting these policies because they can, i.e., they get far more submissions than they can print. So one way to triage the avalanche is to assume that manuscripts requiring more than one round of fighting over revisions can be readily discarded. But this ignores the intent of the peer reviewer to a large extent.

Well, now that I know this about two journals for which I review, I will adjust my behavior accordingly. I will understand that a recommendation of "Major Revisions" on the revised version of the manuscript will be interpreted by the Editor as "Reject" and I will supply the recommendation that I intend.

Is anyone else hearing these policies from journals in their fields?
__
*Having been around the block a time or two, I hypothesize that, whether stated or not, those priority ratings that peer reviewers are asked to supply have something to do with these decisions as well. The authors generally see only the comments and may have no idea that the "favorable" reviewer who didn't find much fault with the manuscript gave them a big old "booooooring" on the priority rating.


Is the J Neuro policy banning Supplemental Materials backfiring?

As you will recall, I was very happy when the Journal of Neuroscience decided to ban the inclusion of any Supplemental Materials in articles considered for publication. That move took place back in 2010.

Dr. Becca, however, made the following observation on a recent post:

I'm done submitting to J Neuro. The combination of endless experiment requests due to unlimited space and no supp info,

I find that to be a fascinating comment. It suggests that perhaps the J Neuro policy has been ineffectual, or even has backfired.

To be honest, I can't recall noticing anything in a J Neuro article I've read in the past few years that reminded me of this policy shift, one way or the other.

How about you, Dear Reader? Noticed any changes that appear to be related to this banning of Supplemental Materials?

For that matter, has the banning of Supplemental Materials altered your perception of the science that is published in that journal?


George Carlin theory of peer review

Dec 03 2014 Published under Conduct of Science, Grant Review, Peer Review

Everyone who is more approving or lenient than you are is an incompetent moron.

Everyone who is harsher or less enthusiastic than you are is a total jackhole.


Once again, it is not pre-publication peer review that is your problem

The perennial discussion arose on the Twitts yesterday, probably sparked by renewed ranting from dear old @mbeisen.

For the uninitiated, a brief review of the components that go into the pre-publication approval of a scientific manuscript. In general, authors select a journal and submit a manuscript that they feel is ready for publication*. At this point an Editor (usually one Editor in Chief and a handful or so of sub-Editors, typically referred to as Associate Editors, or AEs) gives it a look and decides whether to 1) accept it immediately, 2) reject it immediately or 3) send it to peer scientists for review. The first option is exceptionally rare. The second option depends on both the obvious "fit" of the manuscript for the journal in terms of topic (e.g., "Mathematical Proof of Newton's Fifth and Sixth Laws" submitted to the "Journal of Transgender Sociology") and some more-rarified considerations not immediately obvious to the home viewer.
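Purely as illustration, that triage step boils down to something like the following sketch. The names and the two-factor logic are my hypothetical simplification, not any journal's actual workflow:

```python
# A hypothetical sketch of the editorial triage described above.
# Real editorial judgment is far less mechanical than this.
from enum import Enum, auto

class Triage(Enum):
    ACCEPT_IMMEDIATELY = auto()  # option 1: exceptionally rare
    REJECT_IMMEDIATELY = auto()  # option 2: desk rejection
    SEND_FOR_REVIEW = auto()     # option 3: out to peer scientists

def editor_first_look(fits_topic: bool, clears_unstated_bar: bool) -> Triage:
    # Desk rejection covers both the obvious "fit" question and the
    # more-rarified considerations not obvious to the home viewer.
    if not (fits_topic and clears_unstated_bar):
        return Triage.REJECT_IMMEDIATELY
    return Triage.SEND_FOR_REVIEW  # immediate acceptance is rare enough to omit
```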


A dump journal is not an indiscriminate garbage heap

Sep 02 2014 Published under Peer Review

I use the phrase "credible application" a lot when I talk about grant submission.

It holds true for manuscript submission as well.

I suspect some people may think that what they perceive as a "dump journal" of last resort will not require the most polished of manuscripts. This isn't true in my experience. Dump journal papers may be limited in scope but it is a mistake to think the journal will take nonsensical crap which has been prepared without much care.

If nothing else, in your snobbery, consider that the reviewers are going to be more likely to think that Acta Bunnica Hoppica Scandinavica Part C is a respectable journal and less likely to think it is a dumping ground. So if you send up something that has been only cursorily prepared, this is going to be an insult to them personally.

This can be the difference between "Major Revisions, resend for review" and "Minor Revisions". This can be the difference between "Reject" and something less final.


Soundtrack for getting your Reviewer #3 on


A can't-miss inquiry to Editor following the initial review of your paper

Jul 23 2014 Published under Careerism, Conduct of Science, Peer Review

Dear Editor Whitehare,

Do you really expect us to complete the additional experiments that Reviewer #3 insisted were necessary? You DO realize that if we did those experiments the paper would be upgraded enough that we sure as hell would be submitting it upstream of your raggedy ass publication, right?

Collegially,
The Authors


Sex differences in K99/R00 awardees from my favorite ICs

Jul 21 2014 Published under Grantsmanship, NIH, NIH Careerism, NIH funding, Peer Review

Datahound has some very interesting analyses up regarding NIH-wide sex differences in the success of the K99/R00 program.

Of the 218 men with K99 awards, 201 (or 92%) went on to activate the R00 portion. Of the 142 women, 127 (or 89%) went on to the R00 phase. The difference in these percentages is not statistically significant.

Of the 201 men with R00 awards, 114 (57%) have gone on to receive at least 1 R01 award to date. In contrast, of the 127 women with R00 awards, only 53 (42%) have received an R01 award. This difference is jarring and is statistically significant (P value=0.009).

Yowza.
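Those numbers are easy to sanity-check, by the way. A minimal sketch, assuming Fisher's exact test on the 2x2 tables (Datahound doesn't specify which test was used):

```python
# Sanity check of the quoted proportions; Datahound's actual test is not
# specified, so Fisher's exact test on the 2x2 tables is assumed here.
from scipy.stats import fisher_exact

# K99 -> R00 activation: 201/218 men vs 127/142 women
_, p_activate = fisher_exact([[201, 218 - 201], [127, 142 - 127]])
print(f"K99->R00 activation: p = {p_activate:.2f}")  # well above 0.05

# R00 -> at least one R01: 114/201 men vs 53/127 women
_, p_r01 = fisher_exact([[114, 201 - 114], [53, 127 - 53]])
print(f"R00->R01 conversion: p = {p_r01:.3f}")  # ~0.01, in line with the quoted 0.009
```

Either way you run it, the R00-to-R01 gap is the one that stands out.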

So, per my usual, I'm very interested in what the ICs closest to my lab's heart have been up to with this program. Looking at K99 awardees from 07 to 09, I find women PIs constitute 3/3, 1/3 and 2/4 of the annual cohorts in one Institute and 1/7, 2/6 and 5/14 in the other. One of these is doing better than the other, and I will just note that this was before the arrival of a Director who has been very vocal about sex discrimination in science and academia.

In terms of the conversion to R01 funding that is the subject of Datahound's post, the smaller Institute has decent outcomes* for K99 awardees from 07 (R01, R21, nothing), 08 (R01, R01, R01) and 09 (P20 component, U component, nothing, nothing).

In the other Institute, the single woman from 07 did not appear to convert to the R00 phase but, Google suggests, made Assistant Professor rank anyway. No additional NIH funding. The rest of the 07 class contains four with R01s and two with nothing. In 08, the women PIs are split (one R01, one nothing), similar to the men (two R01s, two with nothing). In 09 the women PIs have two with R01s, one with an R03 and two with nothing.

So from this qualitative look, nothing is out of step with Datahound's NIH-wide stats. There are 14/37 women PIs; this 38% is similar to the NIH-wide 39% Datahound quoted, although there may be a difference between these two ICs (30% vs 60%) that could stand some inquiry. One of the 37 K99 awardees failed to convert from the K99 to the R00 (but seems to be faculty anyway). Grant conversion past the R00 is looking to be roughly half, or a bit better.
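For the record, here is the arithmetic behind those percentages, a trivial sketch using the cohort counts listed above:

```python
# The cohort counts above, tallied (women, total) per K99 cohort year.
institute_one = [(3, 3), (1, 3), (2, 4)]     # '07, '08, '09
institute_two = [(1, 7), (2, 6), (5, 14)]

def fraction_women(cohorts):
    women = sum(w for w, _ in cohorts)
    total = sum(n for _, n in cohorts)
    return f"{women}/{total} = {100 * women / total:.0f}%"

print(fraction_women(institute_one))                  # 6/10 = 60%
print(fraction_women(institute_two))                  # 8/27 = 30%
print(fraction_women(institute_one + institute_two))  # 14/37 = 38%
```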

I didn't tally the men for the 2009 cohort in the larger Institute, but otherwise the sex differences in terms of getting/not getting additional funding beyond the R00 seem pretty similar.

I do hope Datahound's stats open some eyes at the NIH, however. Sure, there are reasons one might use to excuse away a sex difference in the rates of landing additional research funding past the R00. But I am reminded of a graph Sally Rockey posted regarding the success rate on R01-equivalent awards. It showed that men and women PIs had nearly identical success rates on new (Type 1) proposals, but that women had slightly lower success on Renewal (Type 2) applications. That pattern maps onto the rates of conversion to R00 and the acquisition of additional funding, if you squint a bit.

Are women overall less productive once they've landed some initial funding? Are they viewed negatively on the continuation of a project but not on the initiation of it? Are women too humble about what they have accomplished?
__
*I'm counting components of P or U mechanisms but not pilot awards.


Scientific peer review is not broken, but your Glamour humping ways are

I have recently had a publishing experience that is not atypical for me. Submitted a manuscript, got a set of comments back in about four weeks. The comments were informed and pointed a finger at some weak points in the paper, but did not go all nonlinear about what else the reviewers would like to see, whinge about mechanism or talk about how I could really increase the "likely impact". The AE gave a decision of minor revisions. We made them and resubmitted. The AE accepted the paper.

Boom.

The manuscript had previously been rejected somewhere else, and we'd revised it according to those prior comments as best we could. I assume that made the subsequent submission go more smoothly, but it is not impossible that we would simply have received major revisions for the original version.

Either way, the process went as I think it should.

This brings me around to the folks who think that peer review of manuscripts is irretrievably broken and needs to be replaced with something NEW!!!!11!!!.

Try working in the normal scientific world for a while. Say, four years. Submit to regular journals edited by actual working peer scientists. ONLY. Submit to journals of pedestrian and/or unimpressive Impact Factor (that would be the 2-4 range from my frame of reference). Submit interesting stories, whether or not they are "complete" or "demonstrate mechanism" or any of that bullshit. Then submit the next part of the continuing story you are working on. Repeat.

Oh, and make sure to submit to journals that don't require any page charge. Don't worry, they exist.

Give your trainees plenty of opportunity to be first author. Give them lots of experience writing and allow them to put their own thoughts into the paper... after all, there will be many more papers to go around.

See how the process works. Evaluate the quality of review. Decide whether your science has been helped or hindered by doing this.

Then revisit your prior complaints about how peer review is broken.

And figure out just how many of those complaints have more to do with your own Glamour humping ways than with anything about the structure of Editor-managed peer review of scientific manuscripts.

__
Also see Post-publication peer review and preprint fans


Quality of grant review

Jun 13 2014 Published under Grant Review, NIH funding, Peer Review

Where are all the outraged complaints about the quality of grant peer review and Errors Of Fact for grants that were scored within the payline?

I mean, if the problem is with bad review it should plague the top scoring applications as much as the rest of the distribution. Right?

