Are your journals permitting only one "major revision" round?

(by drugmonkey) Jan 21 2015

Skeptic noted the following on a prior post:

First time submitted to JN. Submitted revision with additional experiments. The editor sent the paper to a new reviewer and he/she asked for additional experiments. In the editor's words, "he has to reject the paper because this was the revision."

This echoes something I only recently heard from a peer: namely, that a journal editor said a manuscript was being rejected because* it is journal policy not to permit multiple rounds of revision after a "major revisions" decision.

The implications are curious. I have never been told by a journal editor that this is their policy when I have been asked to review a manuscript.

I will, now and again, give a second recommendation for Major Revisions if I feel like the authors are not really taking my points to heart after the first round. I may even switch from Minor Revisions to Major Revisions in such a case.

Obviously, since I didn't select the "Reject" option in these cases, I did not write my review intending my recommendation to function as a "Reject" rather than a "Major Revisions".

I am bothered by this. It seems that journals are probably adopting these policies because they can, i.e., they get far more submissions than they can print. So one way to triage the avalanche is to assume that manuscripts requiring more than one round of fighting over revisions can be readily discarded. But this ignores the intent of the peer reviewer to a large extent.

Well, now that I know this about two journals for which I review, I will adjust my behavior accordingly. I will understand that a recommendation of "Major Revisions" on the revised version of the manuscript will be interpreted by the Editor as "Reject" and I will supply the recommendation that I intend.

Is anyone else hearing these policies from journals in their fields?
__
*having been around the block a time or two, I hypothesize that, whether stated or not, those priority ratings that peer reviewers are asked to supply have something to do with these decisions as well. The authors generally only see the comments and may have no idea that that "favorable" reviewer who didn't find much fault with the manuscript gave them a big old "booooooring" on the priority rating.

47 responses so far

Excellent observation on only funding the absolutely most amazing science

(by drugmonkey) Jan 21 2015

Over at Rock Talk, by a Joel MacAuslan:

It isn’t about whether to fund only the “best” science: I really DON’T want only Isaac Newtons and Louis Pasteurs to be competitive, and to be able to spend their careers on this research. That’s because I don’t want to wait 200 years for all that “great” science to trickle through society. Fund lots and lots of very good science, and cure heart disease in 40 years, instead!

5 responses so far

Is the J Neuro policy banning Supplemental Materials backfiring?

(by drugmonkey) Jan 20 2015

As you will recall, I was very happy when the Journal of Neuroscience decided to ban the inclusion of any Supplemental Materials in articles considered for publication. That move took place back in 2010.

Dr. Becca, however, made the following observation on a recent post:

I'm done submitting to J Neuro. The combination of endless experiment requests due to unlimited space and no supp info,

I find that to be a fascinating comment. It suggests that perhaps the J Neuro policy has been ineffectual, or even has backfired.

To be honest, I can't recall that I have noticed anything in a J Neuro article that I've read in the past few years that reminded me of this policy shift one way or the other.

How about you, Dear Reader? Noticed any changes that appear to be related to this banning of Supplemental Materials?

For that matter, has the banning of Supplemental Materials altered your perception of the science that is published in that journal?

44 responses so far

NIGMS will now consider PIs' "substantial unrestricted research support"

(by drugmonkey) Jan 14 2015

According to the policy described on this webpage, NIGMS will now restrict award of its grants when the applicant PI has substantial other research support. The policy is effective for new grants submitted on or after January 2, 2015.

The clear statement of purpose:

Investigators with substantial, long-term, unrestricted research support may generally hold no more than one NIGMS research grant.

The detail:

For the purposes of these guidelines, investigators with substantial, long-term, unrestricted support (“unrestricted investigators”) would have at least $400,000 in unrestricted support (direct costs excluding the principal investigator’s salary and direct support of widely shared institutional resources, such as NMR facilities) extending at least 2 years from the time of funding the NIGMS grant. As in all cases, if NIGMS funding of a grant to an investigator with substantial, long-term, unrestricted support would result in total direct costs from all sources exceeding $750,000, National Advisory General Medical Sciences Council approval would be required.

This $400,000 limit, extending over two years, would appear to mean $200,000 per year in direct costs? So basically the equivalent of a single additional R01's worth of direct cost funding?
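For concreteness, here is a minimal sketch of how I read the two thresholds in the quoted guidance. It is my own illustration, not anything NIGMS publishes; the function name and the example numbers are made up.

```python
# A sketch of the two thresholds in the quoted NIGMS guidance, as I read it.
# Not an official implementation; names and example figures are mine.

def nigms_flags(unrestricted_direct, years_remaining, total_direct_all_sources):
    """Return (is an 'unrestricted investigator', needs Council approval)."""
    # "Unrestricted investigator": at least $400K in unrestricted direct costs
    # (excluding PI salary and shared institutional resources) extending at
    # least 2 years from the time of funding the NIGMS grant.
    unrestricted_investigator = (unrestricted_direct >= 400_000
                                 and years_remaining >= 2)
    # Council approval needed if total direct costs from all sources
    # would exceed $750K with the NIGMS grant included.
    needs_council = total_direct_all_sources > 750_000
    return unrestricted_investigator, needs_council

# $400K over two years is $200K/yr -- roughly one additional R01's worth.
print(400_000 / 2)                          # 200000.0
print(nigms_flags(400_000, 2, 600_000))     # (True, False)
print(nigms_flags(400_000, 2, 800_000))     # (True, True)
```

The point of the toy example is simply that the dollar bar sits at about one extra R01's worth of direct costs.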

I guess they are serious about the notion that two grants is fine but three-R01-level funding means you are a greedy commons-spoiling so-and-so.

51 responses so far

Chapeau

(by drugmonkey) Jan 14 2015

Occasionally you notice one of your colleagues pulling off something you hope for within your own group.

When your manuscript gets rejected from one journal, you would typically submit it to an approximately equal* journal next, hoping to get a more favorable mix of AE and reviewers.

If you've worked up more data that could conceivably fit with the rejected set, maybe you would submit upward, trying a journal with a better reputation.

What is slightly less usual is taking the same manuscript, essentially unrevised, and submitting it to a journal of better reputation or JIF or what have you.

Getting that self-same manuscript accepted, essentially unchanged, is a big win.

Chapeau, my friends.

__
*You do this, right?

13 responses so far

"Well, I don't know if I believe anyone is 100% a dick..."

(by drugmonkey) Jan 13 2015

I rescued a comment from the spam filter which addressed an older post on Scott Kern. As a reminder, he's the researcher who published a long commentary wondering why kids these days weren't devoting insane hours to the lab anymore. He intimated that if you weren't spending your every working minute trying to cure childhood cancers, you were a bit of a heel.

I disagreed.

But, in the spirit of Rhomann Dey, I think it is important that you review the comment offered up by Jessa:

I spent years training with Scott in his pancreatic cancer lab at Hopkins. He was an incredible mentor and had a natural ability to see things from odd angles that the rest of us overlooked.

Sure, sure. I'm willing to believe a dude comes across in a random, one-off frustrated rant as a bit more of a jerk than he really is to those who know him.

But, more than any other feature of my time with Scott, one thing stood above all. He is a fantastic Dad. He was OUT of there at 5:30pm. Granted his day started at 4am (I remember that he always joked that he woke up at 4am, once the coffee from 3am kicked in). He was at the dinner table every night, he made it a priority.

With such a great example and mentoring from such a swell guy, one might wonder why this person is no longer in science.

I have since left science in favor of being a stay at home mom. One thing I noticed while I was training is that the women mentors in the field were not home tucking in their babies. I distinctly remember standing on the dark sidewalk looking up at a bright lab window and seeing a woman faculty member in the lab. I thought "what are her little girls doing right now?" I thought--I can't be like that.

Interesting. Look, I'm not going to question the way people choose to organize their lives and if 100% every night at home tucking in the tykes was the priority for this commenter, so be it. But she knows jack squatte from looking in a window at one faculty member. Maybe this person had a sharing arrangement with her spouse and on the next night would be home doing the tucking. Maybe this was a rare crunch week before a grant was due, a paper re-submit was coming together or she had a high profile talk to prepare for. Maybe tenure was fast approaching. Point being that many modern two-professional (yes even two-academic) parenting couples make a more balanced approach work. A more shared approach. Where both parties do some of the dinner making, some of the getting the kids out the door to school, some of the soccer practices and, yes, some of the reading of Goodnight Moon, and other classics.

This is a convenient time to review my observation from the original post on St. K3rn.

This sums up all that is wrong with these jerks (Kern is not alone in this "kids these days should spend more time in the lab" nonsense). Their obsessive vocational approach to science was made possible in many cases by a spouse who picked up the pieces for them at home. In sadly too many more cases, Obsessive Vocational Scientist Man operated at the expense of children who had a Dad who was never around, couldn't make the weekend soccer game, was constantly out of town on business and had to hide out in his study when he did manage to stay at home for a few hours.

The younger generations have chosen a different path. Deal, old grumpy dude. Deal.

Out of the house by 4am? And he managed to make it "at the dinner table" at 5:30?

Sorry but this evidence rather supports my presumption that Saint Kern has a stay-at-home spouse, or at least a spouse that picks up the vast majority of the workaday duties.

And his blathering about obsessive vocational behavior is rooted in the fact that he's bailing on so much of ACTUAL life. Screw that.

p.s., Male scientists want to be involved dads, but few are

Sarah Damaske, Elaine Howard Ecklund, Anne E. Lincoln and Virginia J. White. Male Scientists’ Competing Devotions to Work and Family: Changing Norms in a Male-Dominated Profession. Work and Occupations, 2014. doi:10.1177/0730888414539171

22 responses so far

Day in the life

(by drugmonkey) Jan 09 2015

soooooo much analysis of data points today. got kind of fired up by something that occurred on the way in so I ended up pulling data threads. and another. and another.

By the time I looked up it was time to go get the kids.

FTW but man are my eyes tired.

3 responses so far

More on NIGMS's call for "shared responsibility"

(by drugmonkey) Jan 07 2015

The post from NIGMS Director Lorsch on "shared responsibility" (my blog post) has been racking up the comments, which you should go read.

As a spoiler, it is mostly a lot of the usual, i.e., Do it to Julia!

But two of the comments are fantastic. This one from anonymous really nails it down to the floor.

More efficient? My NOA for my R01 came in a few weeks ago for this year, and as usual, it has been cut. I will get ~$181,000 this year. Let’s break down the costs of running a typical (my) lab to illustrate that which is not being considered. I have a fairly normal sized animal colony for my field, because in immunology, nothing gets published well without knockouts and such. That’s $75,000 a year in cage per diem costs. Let’s cover 20% of my salary (with fringe, at 28.5%), one student, and one postdoc (2.20 FTE total). Total salary costs are then $119,800. See, I haven’t done a single experiment and my R01 is gone. How MORE efficient could I possibly be? Even if we cut the animals in half, I have only about $20,000 for the entire year for my reagents. Oh no, you need a single ELISA kit? That’s $800. That doesn’t include plates? Hell, that’s another $300. You need magnetic beads to sort cells, that’s $800 for ONE vial of beads. Wait, that doesn’t include the separation tubes? Another $700 for a pack. You need FACS sort time? That’s $100 an hour. Oh no, it takes 4 hours to sort cells for a single experiment? Another $400. It’s easy to spend $1500 on a single experiment given the extreme costs of technology and reagents, especially when using mice. Then, after 4 years of work, you submit your study (packed into a single manuscript) for publication and the reviewers complain that you didn’t ALSO use these 4 other knockout mice, and that the study just isn’t complete enough for their beloved journal. And you (the NIH) want me to be MORE efficient? I can’t do much of anything as it is.

Anyone running an academic research laboratory should laugh (or vomit) at the mere suggestion that most are not already stretching every penny to its breaking point and beyond.
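To make the quoted arithmetic concrete, here is a back-of-the-envelope tally using only the totals the commenter gives. Nothing here is an official budget, and the rounding is theirs.

```python
# Back-of-the-envelope tally of the commenter's numbers, as quoted above.

award = 181_000        # this year's cut R01 direct costs
animals = 75_000       # annual cage per diem for the colony
salaries = 119_800     # 20% PI (w/ 28.5% fringe) + 1 student + 1 postdoc (2.20 FTE)

print(award - animals - salaries)       # -13800: gone before a single experiment
print(award - animals / 2 - salaries)   # 23700.0: "even if we cut the animals in half"
```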

This is why Lorsch's concentration on the number of grants a PI holds is so phenomenally out of touch. Most of us play in the full-modular space. Even people with multiple grants, one of which managed to get funded at a substantial upgrade from full-mod, are going to have others at the modular limit. And even the above-modular grants often get cut extra, compared with the reductions imposed on the modular-limit awards.

The full-modular limit has not been adjusted for inflation, and its purchasing power is substantially eroded compared with a mere 15 years ago, when the modular budgeting approach was introduced.


[This graph depicts the erosion of purchasing power of the $250K/yr full-modular award (red) and the amount necessary to maintain purchasing power (black). The inflation adjustment used was the BRDPI.]
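If you want to reproduce the shape of that graph yourself, a rough sketch follows. The roughly 3% per year rate is an assumed average for illustration; the actual BRDPI varies year to year, so treat the outputs as illustrative only.

```python
# Rough reconstruction of the purchasing-power graph described above.
# ASSUMPTION: a flat ~3%/yr BRDPI-style inflation rate, for illustration only.

BRDPI_RATE = 0.03
FULL_MODULAR = 250_000   # $/yr cap, unchanged since the modular system began

for years in (0, 5, 10, 15):
    real_value = FULL_MODULAR / (1 + BRDPI_RATE) ** years   # red line: what the flat cap buys
    needed_cap = FULL_MODULAR * (1 + BRDPI_RATE) ** years   # black line: cap needed to keep pace
    print(f"year {years}: buys ~${real_value:,.0f}, cap would need to be ~${needed_cap:,.0f}")
```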

Commenter Pinko Punko also has a great observation for Director Lorsch.

The greatest and most massive inefficiency in the system is the high probability of a funding gap for NIGMS (and all other Institute) PIs. Given that gaps in funding almost always necessitate laying off staff, and prevent long-term retention of expertise, the great inefficiency here is that expertise cannot possibly be “on demand”. I know that you are also aware that given inflation, the NIH budget never actually doubled. There has likely been a PI bubble, but it is massively deflating with a huge cost.

The lowest quantum for funding units in labs is 1. Paylines are so low, it seems the only way to attempt to prevent a gap in funding is to have an overlap at some point, because going to zero is a massive hit when research needs to grind to a halt. It is difficult to imagine that there is a large number of excessively funded labs.

While I try to put a positive spin on the Datahound analysis showing the probability of a PI becoming re-funded after losing her last NIH award, the fact is that 60% of PIs do not return to funding. A prior Datahound post showed that something between 14% and 30% of PIs are approximately continuously funded (extrapolating generously here from only 8 years of data). Within these two charts there is a HUGE story about the inefficiency of trying to maintain that funding for the people who will, in the career-long run, fall into that "continuously funded" category.

This brings me to the Discussion point of the day. Lorsch's blog post is obsessed with efficiency, which he asserts comes with modestly sized research operations, indexed approximately by the number of grant awards. Three R01s is his stated threshold for excessive grant support, even though he cites data showing that $700K per year in direct costs is the most productive* amount of funding, i.e., three grants at a minimum.

I have a tale for you, Dear Reader. The greatly abridged version, anyway.

Once upon a time the Program Staff of ICx decided that they were interested in studies on Topic Y and so they funded some grants. Two were R01s obtained without revision. They sailed on for their full duration of funding. To my eye, there was not one single paper that resulted that was specific to the goals of Topic Y, and damn little published at all. Interestingly, there were other projects also funded on Topic Y. One of them required a total of five grant applications and was awarded a starter grant, followed by R01-level funding. This latter project actually produced papers directly relevant to Topic Y.

Which was efficient, Director Lorsch?
How could this process have been made more efficient?

Could we predict PI #3 was the one that was going to come through with the goods? Maybe we should have loaded her up with the cash and screw the other two? Could we really argue that funding all three on a shoestring was more efficient? What if the reason the first two failed is that they just didn't have enough cash at the start to make a good effort on what was, obviously, a far from easy problem to attack?

Would it be efficient to take this scenario and give PI #3 a bunch of "people-not-projects" largesse at this point in time because she's proved able to move the scientific ball on this? Do we look at the overall picture and say "in for a penny, in for a pound"? Or do we fail to learn a damn thing and let the productive PI keep fighting against the funding cycles, the triage line and what not to keep the program going under our current approaches?

It may sound like I am leaning in one direction on this but really, I'm not. I don't know what the answer is. The distribution of success/failure across these three PIs could have been entirely different. As it happens, all three are pretty dang decent scientists. The degree to which they looked like they could kick butt on Topic Y at the point of funding their respective projects definitely didn't support the supremacy of PI#3 in the end analysis. But noobs can fail too. Sometimes spectacularly. Sometimes, as may have been the case in this little tale, people can fail because they simply haven't grown their lab operations large enough, fast enough to sustain a real program, particularly when one of the projects is difficult.

I assume, as usual, that this narrow little anecdote is worth relating because these are typical scenarios. Maybe not hugely common but not all that rare either. Common enough that a Director of an IC should be well aware of them.

When you have an unhealthy interest in the grant game, as do I, you notice this stuff. You can see it play out in RePORTER and PubMed. You can see it play out as you try to review competing-continuation proposals on study section. You see it play out in your sub-fields of interest and with your closer colleagues.

It makes you shake your head in dismay when someone makes assertions that they know how to make the NIH-funded research enterprise more efficient.

UPDATE: I realized that I should really say that the third project required at least five applications, since I'm going by the amended status of the *funded* awards. It is unknown if there were unfunded apps submitted. It is also unknown if either of the first two PIs tried to renew the awards and couldn't get a fundable score. I think I should also note that the third project was awarded funding in a context that triggers at least three of the major "this is the real problem" complaints being bandied about in Lorsch's comments section. The project that produced nothing at all, relevant or not, was awarded in a context that I think would align with these complainants' "good" prescription. FWIW.

__
*there are huge problems even with this assessment of productivity but we'll let that slide for now.

66 responses so far

Your Grant In Review: Thought of the Day

(by drugmonkey) Jan 07 2015

I've said it repeatedly on this blog and it is true, true, true, people.

In NIH grant review, the worm turns very rapidly.

The pool of individual PIs who are appropriate to apply for, and review, NIH grants in a narrow subfield is a lot smaller than most people seem to think. Or maybe this is just my field.

My guiding belief is that the reviewer of a given grant is going to have one of her own grants reviewed by the PI of the proposal she just reviewed, in very short order. Or maybe it takes half a decade, or even more. But it will happen.

And PIs do not take kindly to jackholish reviews of their proposals.

As we all know, in this day and age it takes very little in the way of reviewer behavior to totally torpedo a grant's chances. You don't even have to be obvious about it*.

This is why I try as hard as I possibly can to ground my grant reviewing in concrete reasons for criticism.

Because I want the reviewers of my proposals to do the same. And it is the right thing to do.

We have a system of grant review that is precariously balanced on a knife's edge and could slide off into Mutually Assured Destruction cycles of retaliation** at any time. And I am sure it happens in some study sections and amongst some reviewers.

Mutual Professional Respect is better. It is supported one review at a time by engaging our firmest professionalism to override the biases that we cannot help but have.

 


__

*This is very likely the second hardest decision I have to make about registering a Conflict of Interest when reviewing grants. I have reviewed a lot of grants from PIs who have been on the study section panels reviewing my grants. I am pretty confident this is the case for just about anyone who has served a full appointed term on a study section, and probably anyone who has reviewed full loads on more than about three panels as an ad hoc. This in and of itself cannot be a reason to recuse yourself or they would never get anything reviewed. And as my Readers know, I am very firm in my belief that it is a fool's errand to try to game out which reviewers were on your proposals and which ones were...critical.

**And, gods above, pre-emptive counter-striking.

 

2 responses so far

I really need to start one of these citation cartels....

(by drugmonkey) Jan 06 2015

From Odyssey:

It's become apparent to me that there is a group of reviewers who all display the same phenotype when it comes to their reviews. They all i) are quick to agree to review manuscripts in our common sub-sub-field, ii) submit their reviews on time, and iii) will recommend acceptance or minor revisions for all manuscripts. All.

 

On time? Suspicious that.

Did I mention that this bloc of reviewers are all strongly linked to one particular well-known member of our sub-sub-field? Former trainees, co-authors etc.

 

siiigh.

 

 

 

No responses yet
