Musing on NIGMS' grant performance data

Jun 06 2011 · Filed under Grant Review, Grantsmanship, NIH funding

Many of you have noted the posts over at the NIGMS Feedback Loop blog that discuss the number of publications arising from their awarded grants. The last one was a geekalicious doozy.

Ever since the first one I've been anticipating the use of this information in grant submissions.

I figure it is most likely to arise as a response to prior critique about productivity, especially for competing continuation proposals that were dinged for modest productivity.

But heck, maybe this should be a preemptive strike on the first submission of the competing continuation?

And of course I have also been eagerly awaiting the deployment of such information during the grant review process. I have been on panels a time or two in which soft statements about "fantastic productivity" or "surprisingly limited productivity" have been matters of discussion. Sometimes contentiously. This is a place where some normative performance data could come in handy.


  • Namnezia says:

I don't know if I'm missing something here in Berg's analysis, but the correlation between number of pubs and percentile score, although statistically significant, is hardly compelling; the correlation may be driven by a few points. Plus there are some very productive labs that got a very low score. I'm also surprised that there are so many grants funded with percentile scores of around 30.

  • drugmonkey says:

Something on the order of 1% with percentile scores above 30? Maybe 2-3% in the 25-30 range? And this was for FY 2006, remember. Doesn't seem all that surprising to me at all. Trends are here; it looks like a hard cutoff around the 21-22%ile and then a bunch of low-probability pickups.

    Does it really seem so strange that this would be the degree of necessary "correction" to the peer review process based on their programmatic priorities? I think not.

"Hardly compelling" is, in fact, what I find to be so compelling about these data analyses that NIGMS has offered up. It speaks volumes about the lack of predictability of "outcome" no matter how you define it, which provides some post hoc verification of the constant drumbeat from some of us who have reviewed grants regarding the lack of objective "quality" differences within the range of applications that gets funded.
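The point being argued above, that a correlation can be statistically significant yet too weak to predict any individual grant's outcome, is easy to see with synthetic numbers. The following is a minimal sketch on made-up data (NOT the NIGMS dataset; the slope, noise level, and sample size are all invented for illustration), using a permutation test so nothing beyond NumPy is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (NOT NIGMS data): publication counts vs. percentile
# scores with a deliberately weak underlying relationship buried in noise.
n = 800
score = rng.uniform(1, 30, n)                    # hypothetical percentile scores
pubs = 10 - 0.1 * score + rng.normal(0, 5, n)    # faint negative trend + noise

r = np.corrcoef(score, pubs)[0, 1]

# Permutation test: how often does shuffled (relationship-free) data
# produce a correlation at least as strong as the observed one?
perm = np.array([
    abs(np.corrcoef(score, rng.permutation(pubs))[0, 1])
    for _ in range(2000)
])
p = (perm >= abs(r)).mean()

print(f"r = {r:.3f}, p = {p:.4f}")
```

With a sample this large the p-value comes out tiny even though r is modest, so the correlation "exists" statistically while explaining only a few percent of the variance in outcomes, which is the sense in which weak predictability is the real finding.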

  • iGrrrl says:

    I ran focus groups of reviewers after the new review format came in in 2009 and after the first round of reviews of the 12-page format in 2010. Given that the order of NIH review criteria changed, a lot of people were nervous that putting Investigator in second place after Significance meant that the old-boys' network would become even more entrenched. I'd be curious about your experience over the last two years, but these reviewers--population and clinical sciences through very basic science--put those fears to rest. They uniformly said that on average, the investigator was discussed less now than in the past. Unless productivity was strikingly high or low, they said that the Investigator criterion didn't take up much of the discussion time.

    So how do you think the comparison would be deployed? By reviewers only, or also by applicants with competing renewals pointing out in the Personal Statement that they did better than average?

  • DrugMonkey says:

    Agreed that now as in the past, it is only the unusual cases where Investigator gets much discussion. And as per Berg and Rockey datasets, Approach and Significance are still best correlated with Overall Impact.

