What do you know, the NIH has not solved the revision-queuing, traffic-holding-pattern problem with grant review.

Nov 14 2014 | Published under Fixing the NIH, NIH, NIH Careerism

Way back in 2008 I expressed my dissatisfaction with the revision-cycle holding pattern that delayed the funding of NIH grants.

Poking through my pile of assignments I find that I have three R01 applications at the A2 stage (the second and "final" amendment of a brand new proposal). Looking over the list of application numbers for the entire panel this round, I see that we have about 15% of our applications on the A2 revision.

Oi. What a waste of everyone's time. I anticipate many reviewers will be incorporating the usual smackdown-of-Program language. "This more than adequately revised application...."

I am not a fan of the NIH grant revision process, as readers will have noticed. Naturally my distaste is tied to the current era of tight budgets and expanding numbers of applications, but I think the principles generalize. My main problem is that review panels use the revision process as a way of triaging their workload. This has nothing to do with selecting the most meritorious applications for award and everything to do with making a difficult process easier.

[Figure: ReviewBiasGraph1]
The bias for revised applications is supported by funding data, round-after-round outcomes in my section, as well as supporting anecdotes from my colleagues who review. ... What you will quickly notice is that only about 10% of applications reviewed in normal CSR sections get funded without being revised. ... If you care to step back Fiscal Year by Fiscal Year in the CRISP [RePORTER replaced this- DM] search, you will notice the relative proportions of grants being funded at the unrevised (-01), A1 and A2 stages have trended toward more revising in concert with the budget flattening. I provide an example for a single study section here ... What you will notice if you review a series of closely related study sections is that the relative "preference" for giving high scores to -01, A1 and A2 applications varies somewhat between sections. This analysis is perhaps unsurprising, but we should be very clear that this does not reflect some change in the merit or value of revising applications; this is putting good applications in a holding pattern.

In the meantime, we've seen the NIH first limit revisions to one (the A1 version) for a few years to try to get grants funded sooner, counting from the date of first submission. In other words, to try to get more grants funded un-Amended, colloquially at the A0 stage. After an initial trumpeting of their "success" the NIH went to silent running on this topic during a sustained drumbeat of complaints from applicants who, apparently, were math-challenged and imagined that bringing back the A2 would somehow improve their chances. Then last year the NIH backed down and permitted applicants to keep submitting the same research proposal over and over, although after the A1 the clock had to be reset, with the proposal redefined as a "new" (A0-status) proposal.

I have asserted all along that this is a shell game. When we were only permitted to submit one amended version, allegedly the same topic could not come back for review in "new" guise. But guess what? It took almost zero imagination to re-configure the Aims and the proposal such that the same approximate research project could be re-submitted for consideration. That's sure as hell what I did, and never ever got one turned back for similarity to a prior A1 application. The return to endless re-submission just allowed the unimaginative in on the game is all.

[Figure: Type 1 grants funded at the A0, A1 and A2 stages, 2000-2013 (via Datahound)]
This brings me around to a recent post over at Datahound. He's updated the NIH-wide stats for A0, A1 and (historically) A2 grants, expressed as the proportion of all funded grants across recent years. As you can see, the single study section I collected the data for before both exaggerated and preceded the NIH-wide trends. It was a section that was (apparently) particularly bad about not funding proposals on the first submission. This may have given me a very severe bias... as you may recall, this particular study section was one that I submitted to most frequently in my formative years as a new PI.

It was clearly, however, the proverbial canary in the coalmine.

The new Datahound analysis shows another key thing, which is that the traffic-holding, wait-your-turn behavior re-emerged in the wake of the A2 ban, as I had assumed it would. The triumphant data depictions from the NIH up through the 2010 Fiscal Year didn't last, and of course those data were generated when substantial numbers of A2s were still in the system. The graph also shows that there was a very peculiar worsening from 2012-2013 whereby the A0 apps were further disadvantaged, once again, relative to A1 apps, which returns us right back to the trends of 2003-2007. Obviously the 2012-2013 interval was precisely when the final A2s had cleared the system. It will be interesting to see if this trend continues even in the face of the endless-resubmission, A2-as-A0 era.

So it looks very much as though even major changes in permissible applicant behavior with respect to revising grants do very little. The tendency of study sections to put grants into a holding pattern and insist on revisions to what are very excellent original proposals has not been broken.

I return to my 2008 proposal for a way to address this problem:


So this brings me back to my usual proposal, of which I am increasingly fond. The ICs should set a "desired" funding target consistent with their historical performance, say 24% of applications, for each Council round. When they do not have enough budget to cover this many applications in a given round, they should roll the applications that missed the cut into the next round. Then, starting the next Council round, they should apportion some fraction of their grant pickups to the applications from the prior rounds that were sufficiently meritorious from a historical perspective. Perhaps half roll-over and half from the current round of submissions. That way, there would still be some room for really outstanding -01 apps to shoulder their way into funding.
The great part is that essentially nothing would change. The A2 app that is funded is not going to result in scientific conduct that differs in any substantial way from the science that would have resulted from the A1 / 15%ile app being funded. New apps will not be any more disadvantaged by sharing the funding pie with prior rounds than they currently are by the revision-status bias at the point of study section review... and a great deal of time and effort would be saved.
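
To make the queue mechanics concrete, here is a minimal sketch of the rollover bookkeeping. Every number in it is made up for illustration: a roughly stable pool of 1,000 applications per round, a 24% historical target, and a budget that fluctuates around that target rather than sitting permanently below it.

    # Toy sketch of the rollover proposal; all parameters are hypothetical.
    APPS = 1000                   # assume a ~stable applicant population
    TARGET = int(0.24 * APPS)     # apps per round scoring inside the 24% target
    budgets = [200, 280] * 3      # awards the budget actually covers, per round

    queue = 0                     # target-worthy apps rolled over, still unfunded
    for rnd, budget in enumerate(budgets, start=1):
        rollover_awards = min(queue, budget // 2)  # up to half the pickups fund the queue
        new_awards = budget - rollover_awards      # the rest fund current-round apps
        queue += max(TARGET - new_awards, 0) - rollover_awards
        print(f"round {rnd}: {rollover_awards} rollover + {new_awards} new funded, "
              f"{queue} waiting")

Under these assumptions the queue clears within a round or two of each shortfall. If the budget sat persistently below the target, the queue would grow every round, which is the scenario some of the commenters below press on.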

11 responses so far

  • qaz says:

    I fail to see how your suggestion solves the problem. The problem is that study sections want proposals to change to meet their criteria. This is an inherent problem with any review process. It is not that an A0 app that gets a 25% will be funded at the 25% in the next round, but rather that the revision brings the proposal into alignment with the study section and improves the score, thus bringing it up into the 10% cutoff (sometimes!).

  • drugmonkey says:

    The problem is that study sections want proposals to change to meet their criteria.

    That is one driver perhaps but I will suggest to you that my plan has program taking that bullshit* out of the equation by just funding the just-missed apps (that would have funded historically).

    *people who think that the review kibbitzing has any detectable *effect* on the resulting science, never mind a clearly beneficial one, are delusional. IMO. and in the aggregate, of course.

  • Philapodia says:

    "my plan has program taking that bullshit* out of the equation by just funding the just-missed app"

    How does this not just set up another queue?

  • drugmonkey says:

    Because of the ~stable population of researchers issue.

  • qaz says:

    So, is your suggestion to make three levels of scores:

    0% to X% of an A0 score = funded in this round
    X% to X+Y% of an A0 score = funded in next round (automatically, not rereviewed)
    greater than X+Y% of an A0 score = try again next time?
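
    A minimal sketch of that three-tier rule, with hypothetical cutoffs standing in for X and X+Y:

        # Hypothetical cutoffs for illustration only: X = 12, X + Y = 20.
        FUND_NOW = 12    # 0% to X%: funded this round
        FUND_NEXT = 20   # X% to X+Y%: funded next round, no re-review

        def triage(percentile: float) -> str:
            """Map an A0 application's percentile to one of three outcomes."""
            if percentile <= FUND_NOW:
                return "funded this round"
            if percentile <= FUND_NEXT:
                return "funded next round (automatic, not re-reviewed)"
            return "try again next time"

        for p in (8.0, 16.0, 35.0):
            print(f"{p}th percentile -> {triage(p)}")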

  • Drugmonkey says:

    Roughly speaking, sure.

  • qaz says:

    Interesting. It's acknowledging the holding pattern, but reducing the uncertainty in the revision. It would probably be a lot easier to get bridge funding from other sources if you're in the funded-next-round range than currently (since currently it is only "may get funded next round"). I like it as it reduces uncertainty.

    Of course, given how unwilling NIH is to promise dollars that aren't there now, I suspect this would be a hard sell. (Who knows if we'll have a government at all in the next cycle, let alone if they'll fund NIH?)

  • drugmonkey says:

    There's no promising involved. This is policy handled at the ad hoc level, like paylines and the lack thereof.

  • Comradde PhysioProffe says:

    Maybe I'm missing something here, but all this is gonna do is take the usual payline and delay paying some of the grants that would have made that payline to the next cycle, right?

  • Grumble says:

    Yeah, I don't get it either.

    Let's say the NIBH (National Institute of Bunny Hopping) receives 1000 applications per cycle. Their goal is to fund 25% of them.

    In Cycle 1, they have enough money to fund 15% of applications. The remaining 100 applications get rolled over to the next cycle for "automatic funding".

    In Cycle 2, they again have enough money to fund only 15% of applications. Now they fund the 100 applications from Cycle 1. That leaves enough money left for only 50 applications from Cycle 2. So they roll over 200 applications to Cycle 3.

    In Cycle 3, they again have enough money to fund only 15% of applications. So they don't even have enough money to fund the 200 applications that rolled over from Cycle 2.

    And the whole system collapses.

    What am I missing? Maybe you mean that just a select few of the unfunded apps get picked for funding on the next cycle. But really, how is that different from the current system, in which borderline-scored grants that program is interested in can get picked up for up to a year or so after the first cycle, as money allows?

    Or maybe you mean that the goal the NIBH sets would be more realistic with respect to available funds: i.e., in the example above, their goal should be 16 or 17% rather than 25%. That way only a handful of grants get rolled over. But again, that isn't much different from the current system. And if only a handful of grants are affected, why bother making structural changes to the system?

  • Pinko Punko says:

    the only thing that sounds different is perhaps not pretending that someone needs to come back two cycles later (at earliest) with cosmetic changes to a grant that should have been good enough. But what DM proposes really isn't workable. Given lack of resources, there really is nothing else that will substantively improve the situation, except trying to be realistic about reviews, minimizing bullshit, recognizing StockCritiques as BS, and trying to minimize demoralizing stuff, like pretending ticky stuff means anything in improving/not improving an application.
