Stability of funding versus the project-based funding model of the NIH

May 09 2018. Published under Fixing the NIH, NIH, NIH Careerism, NIH funding

In response to a prior post, Morgan Price wonders about the apparent contrast between NIH's recent goal of stabilizing research funding and its supposed "project-based" model.

I don't see how stability based funding is consistent with project-based funding and "funding the best science". It would be a radical change...?

NIH grants are supposed to be selected and awarded on the basis of the specific project that is proposed. That is why there is such extensive detailing of a very specific area of science, well-specified Specific (not General!) Aims and a listing of specific experiments.

They are not awarded on the basis of a general program of research that seems to be promising for continued funding.

Note that there are indeed funding mechanisms that operate to a much greater extent at the program level, HHMI being one of the more famous of these. In a program-based award, the emphasis is on what the investigating team (and generally this means specifically the PI) has accomplished and published in recent years. There may be some hints about what the person plans to work on next, but generally the emphasis is on past performance rather than the specific nature of the future plan.

There has been recent handwringing from NIH about investigators that they have launched with special consideration for their newcomer status (e.g., Early Stage Investigator PI applications can be funded at lower priority scores / percentile ranks than would be needed by an established investigator) who then falter at the mid-career stage. The implication is clear:

if we are going to nurture meritorious, productive mid-career investigators by stabilizing their funding streams, monies will have to come from somewhere.

Morgan Price assumes that this "stabilizing" amounts to a radical change. It does not.

Here's the trick:

The NIH funding system has always been a hybrid which pays lip service to "project based funding" as a model while blithely using substantial, but variable, input from the "program based" logic. First off, the "Investigator" criterion of proposal review is one of five supposedly co-equal major criteria. The Biosketch, which details the past accomplishments and skills of the PI, is prominent in the application. This Biosketch lists both papers and prior research grant support*, which inevitably leads to some degree of assessment of how productive the PI was with her prior awards. This is then used to judge the merit of the proposal under current review - sounds just a bit like HHMI, doesn't it?

The competing continuation application (called a Renewal application now) is another NIH beast that reveals the hybrid nature of the selection system. You are allowed to ask for no more than 5 years of support for a given project, but you can then ask for successive five year extensions via competitive application review. This type of proposal must include a "Progress Report" and a list of papers resulting from the project. This, quite obviously, focuses the review in large part on past accomplishment. Now, sure, the application also has to have a detailed proposal for the next interval. Specific Aims. Experiments listed. But it also has all of the prior accomplishments pushed into the center of the review.

So what is the problem? Why are Collins and Lauer proposing to make the NIH grant selection even more based on the research program? Well, times have changed. The figure here is a bit dated by now but I like to keep refreshing your view of it because NIH has this nasty tendency to truncate their graphs to only the past decade or so. The NIH does this to obscure just how good investigators had things in the 80s. That was when established investigators enjoyed success rates north of 40%. For all applications, not just for competing renewals. Many of the people who started their careers in those wonderful days are still very much with us, by the way. This graph shows that within a few years of the end of the doubling, the success rates for established investigators had dropped to about where the new investigators were in the 1980s. Success rates have only continued to get worse but thanks to policies enacted by Zerhouni, the established and new investigator success rates have been almost identical since 2007.
Interestingly, one of the things Zerhouni had to do was to insist that Program change their exception pay behavior. (This graph was recreated from a GAO report [PDF], page down to Page 56, PDF page 60.) It is relevant because it points to yet another way that the NIH system used to prioritize program qualities over the project qualities. POs historically were much more interested in "saving" previously funded, now unfunded, labs than they were in saving not-yet-funded labs.

Now we get to Morgan Price's point about "the best science". Should the NIH system be purely project-based? Can we get the best science one 5 year plan at a time?

I say no. Five years is not enough time to spool up a project of any heft into a well-honed and highly productive gig. Successful intervals of 5 year grants depend to a very large extent on what has come before. Oftentimes, adding the next 5 years of funding via Renewal leads to an even more productive interval because it leverages what has come before. Stepping back a little bit, gaps in funding can be deadly for a project. A project that has been killed off just as it is getting good is not only not the "best" science, it is hindered science. A lack of stability across the NIH system makes all of its work more expensive, because something cut off in Lab 1 (due to gaps in funding) can only be started up in Lab 2 at a handicap. Sure, Lab 2 can leverage the published results of Lab 1, but not the unpublished stuff and not all of the various forms of expertise locked up in the Lab 1 staff's heads.

Of course if too much of the NIH allocation goes to sinecure program-based funding to continue long-running research programs, this leads to another kind of inefficiency. The inefficiencies of opportunity cost, stagnation, inflexibility and dead-woodery.

So there is a balance. Which no doubt fails to satisfy most everyone's preferences.

Collins and Lauer propose to do a bit of re-balancing of the program-based versus project-based relationship, particularly when it comes to younger investigators. This is not radical change. It might even be viewed in part as a selective restoration of past realities of grant funded science careers.

__
*In theory the PI's grants are listed on the Biosketch merely to show the PI is capable of leading a project something like the one under review. Correspondingly, it would in theory be okay to list only the most successful ones and leave out the grant awards with under-impressive outcomes. After all, do you have to put in every paper? No. Do you have to put every bit of bad data that you thought might be preliminary data into the app? No. So why do you have to** list all of your grants? This is the program-based aspect of the system at work.

**dude, you have to. this is one of those culture of review things. You will be looked up on RePORTER and woe be to you if you try to hide some project, successful or not, that has active funding within the past three years.

14 responses so far

  • Pinko Punko says:

    I think project-focused review versus apparent longer-term promise coupled to productivity is one of the differences between study section cultures. I think this is a major difference.

  • Ola says:

    Well, your claim that the 5 scoreable criteria are equal is wrong straight off the bat.

    Significance, Innovation and Approach are project based, and approach in particular is a big score driver. Environment and Investigator are program based, and most would say less important. I've even heard it said that it's impossible to score lower than a 2 on environment - something's got to be pretty fucked for you to get a 5 (average) in that one.

    So given the imbalance in import of scoring criteria, one might argue we already have a project-centric review process. If Collins & Lauer are seeking to deemphasize Approach and tell reviewers to put more weight on Investigator and Environment, that'll be a tough sell.

  • A Salty Scientist says:

    One concern for a major tilt towards program-based review is that temporary lulls in funding (and thus productivity) would make it much harder to get back into the system. Expect more tenured deadweight. My other concern is whether program-based review would be more prone to the effects of bias during review. Now that the MIRA has been implemented, it would be useful to perform a Ginther-like study.

  • drugmonkey says:

    PiPunk: I think project focused vs apparent longer term promise/coupled to productivity is one of the differences between study section cultures.

    I think you might be quite correct. I tend to only serve on study sections populated by the same approximate super-group of reviewers so I assume my experience is limited in effective range, even if it comes from multiple sections across time. The cultures tend to cross-pollinate.

    Ola: Well, your claim that the 5 scoreable criteria are equal is wrong straight off the bat.
    I did say "supposedly co-equal".

    Salty: My other concern is whether program-based review would be more prone to the effects of bias during review.

    My gut feeling on that is the more you put the review focus on the PI, the more you are going to be subject to biases related to PI characteristics.

    Salty: temporary lulls in funding (and thus productivity) would make it much harder to get back into the system.

    A potential counter to that is that the program reputation built prior to the gap in funding might carry the new proposal forward.

  • A Salty Scientist says:

    My gut feeling on that is the more you put the review focus on the PI, the more you are going to be subject to biases related to PI characteristics.

    Agreed. I would like to see the possible disparities quantified by comparing MIRA vs. new GM R01s.

    A potential counter to that is that the program reputation built prior to the gap in funding might carry the new proposal forward.

    Sure, though reputation is more difficult to build prior to the first competitive renewal, and may hurt rather than help those in the early-to-mid career transition. Ultimately, the question is about how to best divvy up the NIH pie. Given our current funding constraints, what should the success rate be for new grants versus renewals? For early-, mid-, and late-career PIs?

  • drugmonkey says:

    C'mon Salty, you know the answer. "Whatever seems to be the thing that keeps me funded, at the level I want, with the least possible effort", of course.

  • A Salty Scientist says:

    Okay, but does NIH have an answer? They are losing sleep over the lack of balance, but are deliberately vague about what the actual balance should be.

  • drugmonkey says:

    Oh you noticed that?

  • Neuro-conservative says:

    NINDS just issued an RFA for R35s, and I notice that several other ICs have gone in that direction over the last two years, so there must be some internal sense that this is working. Would be nice to see the data.

  • A Salty Scientist says:

    R35s in NIGMS are *working* in the sense that PIs traded (hopefully) funding stability for fewer overall dollars by consolidating grants. This is consistent with NIGMS's focus (IMO) on increasing paylines at the expense of award amounts.

  • drugmonkey says:

    Are they working in the sense that those PIs are not just using the NIGMS stability to launch raids on other ICs?

  • A Salty Scientist says:

    Perhaps. Some areas of research would make such raids rare. Others could do so easily. With NIH dollars being zero sum, the actual number of PIs with NIGMS + other IC funding is important to know.

  • Pinko Punko says:

    When I see grants from well-funded investigators who are clearly swimming in, and writing grants for, other cultures, I am always struck by the very obvious "one and done" style of grant: "We will do x and get y information." FIN. Whereas grants from others more in the dominant culture of my panel very much span "this grant is for x, y, z while opening important areas for follow-on studies 1, 2, 3". There are always questions of "what will be done with this information, what will it be used for, where is the vision" - this is the "future directions" discussion that I remember has taken place on this blog. The culture of what "future directions" means can be different. I think this may relate to paylines at different institutes: if grants can survive somewhere for 25-50 years, then there is an idea of what they look like and renewal is a viable mindset. In other institutes with microscopic paylines, maybe this isn't the mindset of either reviewers or applicants.

  • Neuro-conservative says:

    Dude, I just checked Project Reporter and there are more than 700 active R35s! I thought it was still a small pilot. Woah.
