More on the new NIH policy on grant application revisions

Oct 15 2008 | Published under Grant Review

In my brief original note on the new NIH policy limiting applicants to a single amendment (-01A1) of their grant applications, I missed commenting on the accompanying press release. The release overviews some of the rationale for this step, starting with the problem

"Over the past several years, the number of applications submitted each year to NIH has doubled and the number of investigators applying for grants has increased by over 75 percent, increasing stress on the system, especially when confronted with stagnating budgets. This has led to scientists spending more time rewriting their applications and undue delays in the funding of outstanding projects," Elias A. Zerhouni, M.D., NIH Director said.

and laying out the goal.

This new policy will help ensure earlier funding of high-quality applications and improve efficiencies in the peer review system.

Oh it will, will it?


The press release discusses the background that brought us to this point:

NIH analysis indicates that an increasing number of meritorious applicants that were ultimately funded had to resubmit their applications multiple times which increased burden on applicants and reviewers alike. ... data reveals a reduction in the number of awards made to original applications. An increasing number of projects were funded only after one or more resubmissions. This trend has been increasing over recent years.

Right-o. A prior blog post laid out my take on this dismal situation. I think my update at the end of that post is relevant:

UPDATE: PhysioProf supplies the long-term trends for all funded grants by revision status. Interesting to see how it developed over time. It reminds me of when the limitation to a maximum of two revisions of a given application was put into place in Oct 1996. One major argument was the relatively small number of applications that got funded as A2s and the further rarity of applications getting funded on additional revision. Assessing current trends by that logic should mean a return of A3 and A4 revisions, shouldn't it?

[Figure: new R01 grants by revision status]
As I mentioned before, the currently familiar limit to only two revised versions of grants was adopted in late 1996. This came at the end of a fairly obvious downward trend in the number and proportion of new grants funded without revision to the application. Looking at the new R01s listed in CRISP for the FY of 1995, I find 2591 new awards. There were 1391 (54%) funded unrevised, 749 (29%) as A1s, 314 (12%) as A2s, 112 (4.3%) as A3s, 19 (0.73%) as A4s, 5 (0.19%) as A5s and 1 (0.04%) as an A6. So some 94.7% of funded grants got there by the A2 revision stage in FY 1995. Thus, it made sense to cap off the number of permitted revisions because hardly anything was getting funded beyond A2.
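The FY 1995 breakdown above is easy to check with a few lines of arithmetic. The sketch below recomputes the percentages from the raw award counts quoted in the paragraph (the counts are my own CRISP tally, not an official NIH table):

```python
# Recomputing the FY 1995 CRISP breakdown quoted above.
# "A0" denotes an award funded on the unrevised application.
counts = {"A0": 1391, "A1": 749, "A2": 314, "A3": 112, "A4": 19, "A5": 5, "A6": 1}
total = sum(counts.values())  # 2591 new R01 awards

for stage, n in counts.items():
    print(f"{stage}: {n} ({100 * n / total:.2f}%)")

# Cumulative share funded by the A2 stage -- the basis for the 1996 cap.
through_a2 = counts["A0"] + counts["A1"] + counts["A2"]
print(f"Funded by A2: {100 * through_a2 / total:.1f}%")  # ~94.7%
```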
The above figure also appears to show a beneficial effect of the A2-max policy on the number of grants funded at the unrevised stage. The proportion may have increased, but it is clear that the number of grants funded after the A1 and A2 revisions continued to increase as well. Was the supposedly beneficial effect a result of the policy change? Or of the contemporaneous NIH budget-doubling interval?

Figure 1. NIH Appropriations (Adjusted for Inflation in Biomedical Research) from 1965 through 2007, the President's Request for 2008, and Projected Historical Trends through 2010.

All values have been adjusted according to the Biomedical Research and Development Price Index on the basis of a standard set of relevant goods and services (with 1998 as the base year).* The trend line indicates average real annual growth between fiscal years 1971 and 1998 (3.34%), with projected growth (dashed line) at the same rate. The red square indicates the president's proposed NIH budget for fiscal year 2008, also adjusted for inflation in biomedical research.
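The dashed projection in Figure 1 is just constant compounding at the historical real growth rate. A minimal sketch, assuming a hypothetical 1998 base value (`B1998` below is illustrative, not a figure from the chart):

```python
# Constant-growth projection of the kind shown in Figure 1.
# RATE is the figure's reported 1971-1998 average real growth;
# B1998 is a HYPOTHETICAL base value (BRDPI-adjusted billions).
RATE = 0.0334
B1998 = 13.0

def projected(year, base=B1998, base_year=1998, rate=RATE):
    """Compound the base value forward at the trend growth rate."""
    return base * (1 + rate) ** (year - base_year)

# Twelve years of 3.34% real growth multiplies the base by ~1.48.
print(round(projected(2010), 2))
```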

The NIH has a bunch of data slides here, including:
- the proportion of R01s funded at A0-A2 revision stages as a function of the priority score for the A0 review;
- the proportion of A0-A2 awarded R01s from 1998-2007.
So I'm still at a loss. I don't yet see where anything in the available data predicts that original, unrevised grant applications are going to have better success. Fit a trendline to the number of A0, A1 and A2 grants funded in the first figure, above. Look at the budget numbers in the second figure. The aberration in the early 2000s is associated with the budget doubling, not the 1996 change in amendment policy! In striking contrast to the 1996 change in policy, we have no data and can merely speculate about what the fate of A3 or A4 revisions would be at the present time, had there been no change back in 1996. My guess is that A3s would be a substantial part of the funded pool.
So what is going to happen after January 2009? Well, from a bean counter perspective, the number of funded A0s is going to increase. That's because people who don't get funded at the A1 stage are going to turn around and resubmit the project, in thinly veiled guise, as their next "-01" application, which will outcompete genuinely new -01 and -01A1 applications, on average. So the numbers may look better, but the fundamental issues of delays to funding and wasted applicant and reviewer time will remain unchanged.
The reason is that this policy does nothing about the tendency of reviewers to focus on grantsmanship issues as an easy triage mechanism, instead of taking the "fish or cut bait" hard look at the genuinely new application the first time. The primary stage of review is the main driver here. The ameliorative measures should have accounted for the source of the problem and tried to address it more directly. The single amendment limit doesn't do this.
One of the ways the original goal could be accomplished would be through reviewer education and instruction. Put the data figures in front of all reviewers and say "Bad dog! Stop deifying revision status and grantsmanship. Focus on the underlying science. What will really be accomplished through the review process: changes in the proposal only, or actual changes in the resulting science?" As we know, however, the CSR plays some funny games when it comes to providing reviewer instruction and guidance, so this sort of thing isn't going to happen...
The policy that I think would have been much better is one in which the NIH/CSR issues percentile ranks by revision status. That way ICs could simply apply a heavy bias for -01 applications over revisions to maintain whatever their target proportions of A0, A1 and A2 applications might be.
__
* NIH Office of Budget. Biomedical research and development price index (BRDPI). Bethesda, MD: National Institutes of Health, February 5, 2007. (Accessed August 16, 2007, at http://officeofbudget.od.nih.gov/UI/GDP_FromGenBudget.htm.)

12 responses so far

  • neurolover says:

    Drugmonkey -- you're focusing on trying to get reviewers to admit that something is basically good on the A0 (rather than asking for incremental tweaks so that everything reads better). That's good, but I think the other thing that needs to be done is for reviewers to be a lot more clear about work that is basically bad: to state clearly that this A0 shouldn't come back without major revisions, that aims should be completely dropped, that they just don't perceive the individual to be qualified to do the work -- with clear suggestions of what would have to be changed to support them. I think that for mediocre grants (triaged/50%) the reviewers are cursory in their evaluation, because they don't want to spend time trying to figure out how to fix things. So, they send the grant back, with a poor score, but no real information about how to fix it. People send those back again, hoping to win the lottery.

  • DrugMonkey says:

    with clear suggestions of what would have to be changed to support them
    I absolutely and totally disagree with this notion.
    the reviewers are cursory in their evaluation, because they don't want to spend time trying to figure out how to fix things. So, they send the grant back, with a poor score, but no real information about how to fix it. People send those back again, hoping to win the lottery.
    It is not that reviewers are too lazy to fix a grant. It is not their job to fix a grant, nor should it be.
    One of the easiest traps to fall into as a reviewer is trying to fix someone's grant via the review process. There is almost always a kernel of interesting stuff to the reviewer (in my experience). So one's mind inevitably starts down the "What would be really cool is if they..." or "If it were me I would do..." paths.
    The trouble is that there are usually a whole host of ways the grant could be made better. Not one single unique fix that everyone would agree with. So why should the reviewer be making the choices? S/he should not. That is the job of the person proposing the project.
    Suppose some grant comes in proposing to examine badger digging, squirrel flying, bunny hopping, mole burrowing and chipmunk chattering...all in the first Aim. That grant is going to get nailed for over ambition, a lack of clear focus, etc. As a reviewer, sure, I happen to like bunny hopping and can generate all the reasons for why it is important, the relevance, what manipulations need to be done, what new theoretical implications are most important to pursue, what new techniques need to be applied. Etc. But that is irrelevant. What is important is what is most important to the proposing PI! It is his/her job to tell the reviewer what is the most exciting stuff to pursue...

  • Lorax says:

    I concur with DM; the reviewer should not "fix" grants. It seems that many reviewers have this mindset, though. Case in point: I had a grant in which, between the -01 and A2 versions, most of one aim was completed and published. Being honest, I revised that aim and focused on pursuing what we had learned. I thought this would be a good thing ("Look at our progress!") and noted it in my response to the reviewers. In the A2 critique, reviewer #1 stated that "Aim 2 contains all new experiments and approaches that previous reviewers did not have a chance to improve." The grant was not funded (it was nominated for a select award) and the new submission on this project was funded at the 4.5 percentile.
    So, reviewers, please don't "fix" my damn grants! Hell, I won't fix yours either.

  • neurolover says:

    Hmh -- I see your point about it not being the reviewer's job to fix the grant. But if reviewers send back grants that they don't like very much with vague statements like "this is an interesting experiment," they'll get the grant back again, possibly with changes that do nothing to fix it in the reviewer's eyes, and everyone (the writer, the reviewer, and everyone else) wastes a lot of time. Perhaps the problem is that, unlike paper reviews at the high profile journals (which grant reviews resemble, in the number of submissions they turn away), grants don't get rejected on the first submission. Right now everyone gets two chances, and changing that to one doesn't seem like it really changes anything. Perhaps we need something like "reject", "reject with possibility of resubmission", and "revise and resubmit", instead of every grant being returned with "revise and resubmit" (and with no real information on what needs to be revised for successful resubmission).

  • becca says:

    When triage was first described to me, I was told you get no feedback. Is that true? It seems to me like it would be kind, and involve only 15 seconds of reviewer time, if you at least got something like "grantsmanship so poor scientific aims are unintelligible" or "aims hopelessly too broad" or "structurally fatally flawed/reliant upon the first aim working a certain way" or something.
    Of course, this is from the perspective of someone with no genuine R01 experience- I don't know if it would be useful to other sorts.
    Something else I was wondering- how thin can your veil be? How different does a proposal have to be to count as new?

  • When triage was first described to me, I was told you get no feedback. Is that true?

    No, it is completely false. You get detailed written reviews prepared by the assigned reviewers, same as if you were not triaged. The only difference between triaged and non-triaged grants is that those that are triaged do not get discussed by the convened study section review panel and do not get assigned priority scores.

  • neurolover says:

    You get the summary statements even if a grant is triaged. How detailed they are depends on the reviewer, because they are given less attention by the study section/program officer. It's my opinion that triaged grants don't usually get as detailed summary statements (but, I think that really does vary based on the reviewer).
    And, I've never seen anything so clear as "grantsmanship so poor scientific aims are unintelligible" -- usually people say something closer to "descriptions and motivations for portions of the aims were not completely clear." What I see a lot more of is "damning with faint praise."

  • neurolover says:

    Oh, and what kind of information can you get from your program officer if your grant is triaged? My assumption is not very much, 'cause it was never discussed, and just scored by the reviewers. Anyone know the answer?

  • Becca says:

    Thanks Tongzhi!
    (in defense of my mentors- I was probably not so much misinformed as misremembering/confused/mixing up NIH vs. other grants)
    Now that you've described it, I know I've been told how the process works! Still, I'm not surprised by what neurolover says, that triaged grants sometimes do come back without much useful feedback.

  • DrugMonkey says:

    And, I've never seen anything so clear as "grantsmanship so poor scientific aims are unintelligible" -- usually people say something closer to "descriptions and motivations for portions of the aims were not completely clear." What I see a lot more of is "damning with faint praise."
    Still, I'm not surprised by what neurolover says, that triaged grants sometimes do come back without much useful feedback.
    Some of this is per comment #2 above: it is not the reviewer's job to fix the grant. Some of it is because there is an expectation to write nicely, to be constructive and evaluative instead of critical. This explains a lot of shorthand and stock-critique writing.
    I'll again offer this post, which is my attempt to communicate that it behooves the applicant to learn to interpret Summarystatementish and Critiquese.
    Some colleagues were recently arguing that showing someone your grant and/or summary statement is as intimate as showing your underwear....well, hate to break it down but if you wanna be a successful grant writing PI you gotta get yourself to nekkid-hottub level at the least, nekkid-sauna level is better and full-on stripper if you can. The more you can get advice and input on what your summary statement is really saying, the better.
    what kind of information can you get from your program officer if your grant is triaged?
    Depends. There is the (very) rare case in which a grant is discussed and the panel then decides to still triage the proposal. There the PO would have the usual info.
    Otherwise, the PO will potentially have some general information about that study section and that round available depending on how alert s/he is and how experienced s/he is. In theory, a PO who has long experience with that particular study section could give you some hints about how things generally go in that specific section. As in "they really obsess over innovation" or "If you don't have anything in there about public health relevance I can guarantee you at least five of the standing members are going to savage you" or "this panel really, really loves them some revisions so just suck it up and revise".
    The unknown to me is whether or not the POs are told who the assigned reviewers are. Obviously if it is discussed they know who the reviewers are. I should note that at least on my section, a given reviewer is told who the other reviewers are on his/her assigned proposals but not for the rest of the proposals. So unless the reviewer puts his/her name on the initial draft critique (some do, dunno why) other reviewers do not know who reviewed a triaged proposal. One might think this is extended to the POs but I don't really know for sure...

  • Becca says:

    ...you gotta get yourself to nekkid-hottub level at the least, nekkid-sauna level is better and full-on stripper if you can.
    Seems to me this would be a good situation for "I'll show you mine if you show me yours"
    😉

  • BikeMonkey says:

    http://www.nature.com/news/2008/081015/full/455841a.html

    The NIH estimates that the move will reduce the number of applications by up to 5,000 -- welcome news as it struggles to evaluate about 55,000 applications this year.
    ...
    Toni Scarpa, director of the NIH Center for Scientific Review, says the new policy will remove delays to funding the most worthy projects, and calls it "a moral imperative".
