Way back in 2008 I expressed my dissatisfaction with the revision-cycle holding pattern that delayed the funding of NIH grants.
Poking through my pile of assignments I find that I have three R01 applications at the A2 stage (the second and "final" amendment of a brand new proposal). Looking over the list of application numbers for the entire panel this round, I see that we have about 15% of our applications on the A2 revision.
Oi. What a waste of everyone's time. I anticipate many reviewers will be incorporating the usual smackdown-of-Program language. "This more than adequately revised application...."
I am not a fan of the NIH grant revision process, as readers will have noticed. Naturally my distaste is tied to the current era of tight budgets and expanding numbers of applications, but I think the principles generalize. My main problem is that review panels use the revision process as a way of triaging their workload. This has nothing to do with selecting the most meritorious applications for award and everything to do with making a difficult process easier.
The bias for revised applications is supported by funding data, round-after-round outcomes in my section, as well as supporting anecdotes from my colleagues who review. ... What you will quickly notice is that only about 10% of applications reviewed in normal CSR sections get funded without being revised. ... If you care to step back Fiscal Year by Fiscal Year in the CRISP [RePORTER replaced this- DM] search, you will notice the relative proportions of grants being funded at the unrevised (-01), A1 and A2 stages have trended toward more revising in concert with the budget flattening. I provide an example for a single study section here ... What you will notice if you review a series of closely related study sections is that the relative "preference" for giving high scores to -01, A1 and A2 applications varies somewhat between sections. This analysis is perhaps unsurprising, but we should be very clear that this does not reflect some change in the merit or value of revising applications; this is putting good applications in a holding pattern.
In the meantime, we've seen the NIH first limit revisions to 1 (the A1 version) for a few years to try to get grants funded sooner, counting from the date of first submission. In other words, to try to get more grants funded un-Amended, colloquially at the -A0 stage. After an initial trumpeting of their "success" the NIH went to silent running on this topic during a sustained drumbeat of complaints from applicants who, apparently, were math-challenged and imagined that bringing back the A2 would somehow improve their chances. Then last year the NIH backed down and permitted applicants to keep submitting the same research proposal over and over, although after A1 the clock had to be reset to define the proposal as a "new" or A0 status proposal.
I have asserted all along that this is a shell game. When we were only permitted to submit one amended version, allegedly the same topic could not come back for review in "new" guise. But guess what? It took almost zero imagination to re-configure the Aims and the proposal such that the same approximate research project could be re-submitted for consideration. That's sure as hell what I did, and never ever got one turned back for similarity to a prior A1 application. The return to endless re-submission just allowed the unimaginative in on the game is all.
This brings me around to a recent post over at Datahound. He's updated the NIH-wide stats for A0, A1 and (historically) A2 grants expressed as the proportion of all funded grants across recent years. As you can see, the single study section I collected the data for before both exaggerated and preceded the NIH-wide trends. It was a section that was (apparently) particularly bad about not funding proposals on the first submission. This may have given me a very severe bias... as you may recall, this particular study section was one that I submitted to most frequently in my formative years as a new PI.
It was clearly, however, the proverbial canary in the coalmine.
The new Datahound analysis shows another key thing: the traffic-holding, wait-your-turn behavior re-emerged in the wake of the A2 ban, as I had assumed it would. The triumphant data depictions from the NIH up through the 2010 Fiscal Year didn't last, and of course those data were generated when substantial numbers of A2s were still in the system. The graph also shows that there was a very peculiar worsening from 2012-2013 whereby the A0 apps were further disadvantaged, once again, relative to A1 apps, which returns us right back to the trends of 2003-2007. Obviously the 2012-2013 interval was precisely when the final A2s had cleared the system. It will be interesting to see if this trend continues even in the face of the endless-resubmission-of-A2-as-A0 era.
So it looks very much as though even major changes in permissible applicant behavior with respect to revising grants do very little. The tendency of study sections to put grants into a holding pattern and insist on revisions to excellent original proposals has not been broken.
I return to my 2008 proposal for a way to address this problem:
So this brings me back to my usual proposal, of which I am increasingly fond. The ICs should set a "desired" funding target consistent with their historical performance, say 24% of applications, for each Council round. When they do not have enough budget to cover this many applications in a given round, they should roll the applications that missed the cut into the next round. Then starting the next Council round they should apportion some fraction of their grant pickups to the applications from the prior rounds that were sufficiently meritorious from a historical perspective. Perhaps half roll-over and half from the current round of submissions. That way, there would still be some room for really outstanding -01 apps to shoulder their way into funding.
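To make the mechanics of that rollover concrete, here is a minimal simulation sketch. All the numbers are hypothetical (a 24% "desired" target, an 18% budget-covered rate, 100 applications per round, random merit scores), and the half-and-half split between the rollover queue and the current round is just the scheme described above, not any actual IC policy.

```python
import random

random.seed(1)

TARGET_RATE = 0.24   # "desired" funding target per round (hypothetical)
BUDGET_RATE = 0.18   # fraction the budget actually covers (hypothetical)
APPS_PER_ROUND = 100

rollover = []  # meritorious apps that missed the cut in earlier rounds

for round_num in range(1, 5):
    # random merit scores; lower = better, as with percentile ranks
    new_apps = sorted(random.random() for _ in range(APPS_PER_ROUND))

    n_awards = int(BUDGET_RATE * APPS_PER_ROUND)   # what the budget covers
    n_target = int(TARGET_RATE * APPS_PER_ROUND)   # what "should" be funded

    # split awards: half to the rollover queue, half to the current round
    n_from_rollover = min(len(rollover), n_awards // 2)
    n_from_current = n_awards - n_from_rollover

    rollover = rollover[n_from_rollover:]          # fund the queue's best
    # apps within the desired target but beyond the budget roll forward
    rollover.extend(new_apps[n_from_current:n_target])
    rollover.sort()

    print(f"round {round_num}: funded {n_awards} "
          f"({n_from_rollover} rollover, {n_from_current} new), "
          f"queue now {len(rollover)}")
```

Under a persistent gap between the target and the budget, the queue grows each round; the sketch just shows where the proposal's pressure ends up, not that it resolves the shortfall.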
The great part is that essentially nothing would change. The A2 app that is funded is not going to result in scientific conduct that differs in any substantial way from the science that would have resulted from the A1 / 15%ile app being funded. New apps will not be any more disadvantaged by sharing the funding pie with prior rounds than they currently are facing revision-status bias at the point of study section review. And a great deal of time and effort would be saved.