Archive for the 'NIH funding' category


Dec 07 2018 Published by under NIH Careerism, NIH funding

There was recently a little twitter thread on the obligations scientists have to work on finishing up projects, particularly once the formal association with that project has expired. This made me recall some thoughts I had from the PI perspective on finishing projects one has been funded to work on. So I was primed when these thoughts occurred today.

Once upon a time I ran across an interesting RFA from the NIH that seemed highly targeted for one particular lab. Oh, don't get me wrong, many of them do seem this way. But this one was particularly.....specific. The funding was generous, we're talking BigMech territory here. And while I could theoretically pull together a credible proposal with the right collaborators, I really wasn't the right person. Meh, it was a time of transition in my lab and in my science and I didn't have anything else to write at the time so....I put a proposal in.

And did not get triaged! whoohoo! Really, I should have been triaged. Clearly the field of proposals was really, really weak. As in not even credible, I am assuming. You know my mantra....always submit something that hits the minimal standards for a reviewer to get behind it. So mine was credible. I just didn't really have the specific chops in that area of work to support a good score.

The top-suspect lab got the grant, of course.

Which again, was very specific as to the topic and goals. Most interestingly it was for "Approach X" to the general idea when this laboratory had only vaguely flirted with X in their earliest going and had settled upon Approach Y.

I kept a bit of a weather eye on what they were doing with the funds, as you can imagine. After all I was interested in the topic enough to put in a grant proposal.

The Approach Y papers kept on coming. Nary an Approach X paper to be seen. I saw one very limited poster at a meeting once in the middle of their funding interval. And then at the end those data were published, with not much ground traveled. I did mention it was a BigMech, right?

That was a fair bit ago and apparently the laboratory got a second BigMech, a different one, but again clearly devoted to Approach X on the question at hand. And it was a few years in by the time I noticed.

I hadn't paid the strictest attention but at one point I got a manuscript to review from the laboratory. "Aha", I thinks to meself, "this is going to really kick ass after so much funding and so few papers". Both BigMechs are acknowledged, so we're on solid ground assuming what paid for this work.

Of course the manuscript was underwhelming. Not that anyone else is doing amazingly better solving this particular question. But the point is that they had copious funding over at least seven years and have barely published anything relevant to the Approach X take on the topic. This dataset is relevant but far from earthshaking. It just retreads territory published five years before it appeared. And the functional outcome of this work is no better than one-off papers from at least three other groups who have not enjoyed such amazing levels of funding.

Now, the laboratory has published other work. Just mostly on Approach Y which was not the goal according to the original FOA and going by the abstracts on RePORTER.

It's possible that the group has been working away like beavers on the topic and just never had any positive results. It is not totally foreign to see a laboratory that will only publish big positive hits. But I do not see that in their overall history. There does not seem to be a reluctance to squeak out small papers with one thin asterisk showing positive effects. So I have to conclude they just aren't really working on Approach X much at all.

In a way this makes me angry. I still feel like the bargain we make is to take a serious stab at working on what the grant says. Yeah, yeah a grant is not a contract but in this case there were fairly focused calls for applications. Someone who was actually planning to do the work was not funded. Perhaps more than one someone.

[record scratch]

Wait a minnit dude, doesn't this apply to you!???!!!!


It has.

I mean, not in the sense of a highly targeted original FOA but sure, I've had my intervals of what looks like pretty good funding on a topic and have batted below expectations on publications. Of course I know how hard we worked to get something going on the project as proposed. I know that we did the science as best we could. I know what headaches interfered and I know what data exist that I mean to publish* one day. I have certainly credited that project with other somewhat-related side project work to the extent it was kosher.

But a critic could go through this same thought process when they get my manuscript to review.

"That's IT????!!!!!"

And if they have been less fortunate than I in the NIH grant game, the odds are they are going to rant about how unfair everything is and how I must have sweet political connections and the Systemz is Brokenz.

They aren't wrong.

And they aren't exactly right, either.

These scenarios play out all the time in NIH extramural funding land. I don't know how you do real science and not come up blank once in a while. Occasionally this is going to play out to the tune of an entire BigMech, right? Maybe?

Otherwise we are only doing utterly safe and conservative science. Is that correct?

*This is where it intersects with the twitter discussion. Writing up old projects, exciting or not, may be hindered if the staff who worked on it have now departed. I'm not going to sidetrack but obviously the obligation to render published science for grant dollars awarded falls heavily on the PI and less so on the trainees. But still, if you have been paid to work on the project, there is an obligation to produce on that project.


Science trainees and the NIH Doubling

Oct 14 2018 Published by under NIH, NIH Careerism, NIH funding

Most of my Readers are aware that the NIH budget allocation from Congress plays a big role in our careers and how they have unfolded over years or even decades.

One of the touchstone events is the "doubling interval" in which the Congress committed to double the size of the NIH budget over the course of five years. This resulted in large year-over-year increases from 1998 to 2003 after which the NIH budget was essentially flatlined for the next dozen years. This is in current dollars so, as we've discussed on this blog, inflation means that the NIH budget shrunk in purchasing power over the post-doubling interval. ARRA stimulus funding in Fiscal Years 2009 and 2010 was a mere blip, which alleviated acute pain but did little for the longer term issues.

One interesting thing about being me is that my NIH geekery has limits and I don't always appreciate everything fully the first time I see it. Sometimes it is because I am, like many of us, blinded by a sort of confirmation bias. I am by no means alone in seeing confirmation of my positions in data and statistics about the NIH that others view entirely differently. Sometimes it is because an earlier trend gets fixed in my mind and I don't always see the way five or ten additional years worth of data may change my thinking.

One of these issues is related to the data showing the production of PhDs each year by US domestic degree granting institutions. Our good blog friend Michael Hendricks posted this graph on the Twitters today. He, as I usually do, interprets this to show the evils of the doubling interval when it comes to regulating the size of the workforce. The first half of the doubling interval did indeed correlate with a steep increase in the rate of annual biomedical PhD generation without a similar trend for doctoral production in other areas. I typically use this steep increase in PhD production as part of my argument that our current stress in staying funded is related to too many mouths at the trough. We generated all these PhDs during the late 90s onward and gee, shocker, lots of these people want faculty level jobs competing for a fixed amount of NIH funding. Retirement and death of the existing pool of NIH PIs has not kept pace with this production, from what I can tell.

My usual eye tends to skip over a couple of key facts. The steep increase in PhD production started several years before the doubling even began. It was in full swing just prior to the doubling passing Congress, when the faculty were crying loudly about how horrible NIH grant getting had become. I know because I was in graduate school in there somewhere.

The year-over-year PhD production actually stabilized during the latter half of the doubling interval. This was followed after the NIH budget flatlined by another increase in the rate of year-over-year PhD production!

So I think Michael Hendricks' current view, and my prior view, on the meaning of the PhD production numbers and how it relates to major changes in the NIH budget allocation cannot be true.

Sustained increase in the NIH budget actually produced stability in PhD numbers. It was the stressful times in which NIH grant getting was perceived to be ruinous and terrible that led to increased numbers of PhDs being generated by the US doctoral institutions.

There are probably many reasons for this relationship. I would not be surprised in the least if bad general economic times led more people to want to go to grad school and booming economic times led to fewer. These general trends are very likely related to the willingness of the Congress to give NIH more or less money.

But I would also not be surprised in the least if stressful grant funding conditions led the professors who participate in graduate training to be even more fond of this source of cut rate labor than they are in flush times.


The R01 Doesn't Even Pay for Revisions

Sep 11 2018 Published by under Academics, Careerism, NIH, NIH Careerism, NIH funding

Hard charging early career Glam neuroscientist Kay Tye had an interesting claim on the twitters recently.

The message she was replying to indicated that a recent request for manuscript revisions was going to amount to $1,000, making Kay's costs anywhere from $100,000 to $10,000,000. Big range. Luckily she got more specific.

One Million Dollars.

For manuscript revisions.

Let us recap.

The bog standard NIH "major award" is the R01, offered most generically in the 5-year, $250,000 direct cost per year version. $1,250,000 for a five year major (whoa, congrats dude, you got an R01! You have it made!) award.

Dr. Tye has just informed us that it is routine for reviewers to ask for manuscript (one. single. manuscript.) revisions that amount to $1,000,000 in cost.

Ex-NIGMS Director Jeremy Berg cheer-led (and possibly initiated) a series of NIH analyses and data dumps showing that something on the order of 7 (+/- 2) published papers were expected from each R01 award's full interval of funding. This launched a thousand ships of opinionating on "efficiency" of NIH grant award and how it proves that one grant for everyone is the best use of NIH money. It isn't.

I have frequently hit the productivity zone identified in NIGMS data...and had my competing renewals criticized severely for lack of productivity. I have tripled this on at least one interval of R01 funding and received essentially no extra kudos for good productivity. I would be highly curious to hear from anyone who has had a 5 year interval of R01 support described as even reasonably productive with one paper published.

Because even if Dr. Tye is describing a situation in which you barely invest in the original submission (doubtful), it has to be at least $250,000, right? That plus $1,000,000 in revisions and you end up with at best 1 paper per interval of R01 funding. And it takes you five years to do it.
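The back-of-envelope arithmetic here is worth making explicit. A minimal sketch using the post's figures, where the $250,000 original-submission cost is the hedged lower bound suggested above, not an NIH statistic:

```python
# Full-modular R01: $250K direct costs per year, five years.
modular_direct_per_year = 250_000
years = 5
total_award = modular_direct_per_year * years  # $1,250,000 total directs

# Per the post: a conservative cost for the original submission,
# plus the reported $1M in reviewer-demanded revisions.
original_submission_cost = 250_000
revision_cost = 1_000_000
cost_per_paper = original_submission_cost + revision_cost

# How many such papers does one full interval of R01 funding buy?
papers_per_award = total_award // cost_per_paper
print(papers_per_award)  # one paper per five-year interval
```

On these assumptions the entire award buys exactly one Glam-reviewed paper, which is the point of the comparison to the NIGMS expectation of roughly seven papers per award.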

The Office of Extramural Research showed that the vast majority of NIH-funded PIs hold 1 (>70%) or at most 2 (cumulative >90%) major awards at a time.

NIGMS (and some of my fellow NIH-watchers) have been exceptionally dishonest about interpreting the efficiency data they produce and slippery as otters about resulting policy on per-PI dollar limitations. Nevertheless, one interpretation of their data is that $750,000 in direct costs per year is maximally efficient. Merely mentioning that an honest interpretation of their data ends up here (and reminding that the NIGMS policy for greybeard insiders was in fact to be about $750,000 per year) usually results in the sound of sharpening stone on steel farm implements and the smell of burning pitch.

Even that level of grant largesse ("largesse") does not pay for the single manuscript revisions that Professor Tye describes within a single year.

I have zero reason to doubt Professor Tye's characterization, I will note. I am familiar with how Glam labs operate. I am familiar with the circle jerk of escalating high-cost "necessary" experimental demands they gratify each other with in manuscript review. I am familiar with the way extremely well funded labs use this bullshit as a gatekeeping function to eliminate the intellectual competition. I am perhaps overly familiar with Glam science labs in which postdocs blowing $40,000 on single fucked up experiments (because they don't bother to think things through, are sloppy or are plain wasteful) is entirely routine.

The R01 does not pay for itself. It does not pay for the expected productivity necessary to look merely minimally productive, particularly when "high impact publications" are the standard.

But even that isn't the point.

We have this exact same problem, albeit at less cost, all down the biomedical NIH-funded research ranks.

I have noted more than once on this blog that I experience a complete disconnect between what is demanded in peer review of manuscripts at a very pedestrian level of journal, the costs involved and the way R01s that pay for those experiments are perceived come time for competitive renewal. Actually, we can generalize this to any new grant as well, because very often grant reviewers are looking at the productivity on entirely unrelated awards to determine the PI's fitness for the next proposal. There is a growing disconnect, I claim, between what is proposed in the average R01 these days and what it can actually pay to accomplish.

And this situation is being created by the exact same super-group of peers. The people who review my grants also review my papers. And each others'. And I review their grants and their manuscripts.

And we are being ridiculous.

We need to restore normalcy and decency in the conduct of this profession. We need to hold the NIH accountable for its fantasy policy that has reduced the spending capability of the normal average grant award to half of what it was a mere twenty years ago. And for policies that seek to limit productive labs so that we can have more and more funded labs who are crippled in what they can accomplish.

We need to hold each other accountable for fantasy thinking about how much science costs. R01 review should return to the days when "overambitious" meant something and was used to keep proposed scope of work minimally related to the necessary costs and the available funds. And we need to stop demanding an amount of work in each and every manuscript that is incompatible with the way the resulting productivity will be viewed in subsequent grant review.

We cannot do anything about the Glam folks, they are lost to all decency. But we can save the core of the NIH-funded biomedical research enterprise.

If you will only join me in a retreat from the abyss.


Of course the NIH can strong-arm Universities if they really want to

I think the NIH should more frequently use the power of the purse to change the behavior of Universities. I expressed this recently in the context of a Congressional demand for information from the NIH Director on the NIH oversight of the civil rights obligations of their awardee institutions. I have probably expressed this in other contexts as well. Before the invention of the K99/R00 one saw handwringing from the NIH about how Universities wouldn't hire less experienced PhDs and this was the RealProblem accounting for the time-to-first-R01 stat. My observation at the time was that if the NIH was serious they could just ask Universities for their hiring stats and tell ones that didn't hire enough young faculty that they were going to go to the back of the line for any special consideration awards.

This could also apply to occasionally bruited NIH concerns about women, underrepresented groups and other classes of folks not typically treated well by Universities. Exhibit lower than average hiring or promoting of women or URM professors? You go to the back of the special consideration line, sorry.

My suggestions are typically met with "we can't" when I am talking to various NIH Program types and various grades of "they can't" when talking to extramural folks about it.

Of course the NIH can.

They already do.

One very specific case of this is the K99/R00 award when it comes time for administrative review of the R00 phase hiring package. If the NIH finds the proposed hiring package to be deficient they can refuse to award the R00. I have no idea how many times this has been invoked. I have no idea how many times an initial offer of a University has been revised upwards because NIH program balked at the initial offer. But I am confident it has happened at least once. And it is certainly described extensively as a privilege the NIH reserves to itself.

A more general case is the negotiation of award under unusual circumstances. The NIH allows exemptions from the apparent rules all the time. (I say "apparent" because of course NIH operates within the rules at all times. There are just many rules and interpretations of them, I suspect.) They can, and do, refuse to make awards when an original PI is unavailable and the Uni wants to substitute someone else. They cut budgets and funded years. They can insist that other personnel are added to the project before they will fund it. They will pick up some but not other awards with end of year funds based on the overhead rate.

These things have a manipulating effect on awardee institutions. It can force them to make very specific and in some cases costly (startup packages, equipment, space) changes from what they would otherwise have done.

This is NIH using the power of the purse to force awardee institutions to do things. They have this power.

So the only question is whether they choose to use it, for any particular goal that they claim to be in favor of achieving.


GAO report shows the continued NIH grant funding disparity for underrepresented PIs

Aug 15 2018 Published by under NIH, NIH Careerism, NIH funding, Underrepresented Groups

A comment from pielcanelaphd on a prior post tips us off to a new report (PDF) from the Government Accountability Office, described as a report to Congressional Committees.

The part of the report that deals with racial and ethnic disparities is mostly recitation of the supposed steps NIH has been taking in the wake of the Ginther report in 2011. But what is most important is the inclusion of Figure 2, an updated depiction of the funding rate disparity.
GAO-18-545: NIH RESEARCH — Action Needed to Ensure Workforce Diversity Strategic Goals Are Achieved

These data are described mostly as the applicant funding rate or similar. The Ginther data focused on the success rate of applications from PIs of various groups. So if these data are by applicant PI and not by applications, there will be some small differences. Nevertheless, the point remains that things have not improved and PIs from underrepresented ethnic and racial groups experience a disparity relative to white PIs.


NIH policy on A2 as A0 that I didn't really appreciate.

Jul 26 2018 Published by under Grantsmanship, NIH, NIH funding

The NOT-OD-18-197 this week seeks to summarize policy on the submission of revised grant applications that has been spread across multiple prior notices. Part of this deals with the evolved compromise where applicants are only allowed to submit a single formal revision (the -xxA1 version) but are not prohibited from submitting a new (-01, aka another A0 version) one with identical content, Aims, etc.

Addendum A emphasizes rules for compliance with Requirements for New Applications. The first one is easy. You are not allowed an extra Introduction page. Sure. That is what distinguishes the A1, the extra sheet for replying.

After that it gets into the weeds. Honestly I would have thought this stuff all completely legal and might have tried using it, if the necessity ever came up.

The following content is NOT allowed anywhere in a New A0 Application or its associated components (e.g., the appendix, letters of support, other attachments):

Introduction page(s) to respond to critiques from a previous review
Mention of previous overall or criterion scores or percentile
Mention of comments made by previous reviewers
Mention of how the application or project has been modified since its last submission
Marks in the application to indicate where the application has been modified since its last submission
Progress Report

I think I might be most tempted to include the prior review outcome? Not really sure, and I've never done this to my recollection. Mention of prior comments? I mean, I think I've seen this before in grants. Maybe? Some sort of comment about a prior review that was not part of the formal revision series.

Obviously you can accomplish most of this stuff within the letter of the law by not making explicit mention or marking of revision or of prior comments. You just address the criticisms and if necessary say something about "one might criticize this for...but we have proposed....".

The Progress Report prohibition is a real head scratcher. The Progress Report is included as a formal requirement with a competing continuation (renewal in modern parlance) application. But it has to fit within the page limits, unlike either an Introduction or a List of Publications Resulting (also an obligation of renewal apps), which get you extra pages.

But the vast majority of NIH R01s include a report on the progress made so far. This is what is known as Preliminary Data! In the 25 page days, I tended to put Preliminary Data in a subsection with a header. Many other applications that I reviewed did something similar. It might as well have been called the Progress Report. Now, I sort of spread Preliminary Data around the proposal but there is a degree to which the Significance and Innovation sections do more or less form a report on progress to date.

There are at least two scenarios where grant writing behavior that I've seen might run afoul of this rule.

There is a style of grant writer that loves to place the proposal in the context of their long, ongoing research program. "We discovered... so now we want to explore....". or "Our lab focuses on the connectivity of the Physio-Whimple nucleus and so now we are going to examine...". The point being that their style almost inevitably requires a narrative that is drawn from the lab as a whole rather than any specific prior interval of funding. But it still reads like a Progress Report.

The second scenario is a tactical one in which a PI is nearing the end of a project and chooses to continue work on the topic area with a new proposal rather than a renewal application. Maybe there is a really big jump in Aims. Maybe it hasn't been productive on the previously proposed Aims. Maybe they just can't trust the timing and surety of the NIH renewal proposal process and need to get a jump on the submission date. Given that this new proposal will have some connection to the ongoing work under a prior award, the PI may worry that the review panel will balk at overlap. Or at anticipated overlap because they might assume the PI will also be submitting a renewal application for that existing funding. In the old days you could get 2 or 3 R01 more or less on the same topic (dopamine and stimulant self-administration, anyone?) but I think review panels are unkeen on that these days. They are alert to signs of multiple awards on too-closely-related topics. IME anyway. So the PI might try to navigate the lack of overlap and/or assure the reviewers that there is not going to be a renewal of the other one in some sort of modestly subtle way. This could take the form of a Progress Report. "We made the following progress under our existing R01 but now it is too far from the original Aims and so we are proposing this as a new project.." is something I could totally imagine writing.

But as we know, what makes sense to me for NIH grant applications is entirely beside the point. The NOT clarifies the rules. Adhere to them.


Your Grant in Review: Scientific Premise

Jul 03 2018 Published by under NIH Careerism, NIH funding

Scientific premise has become the latest headache of uncertainty in NIH grant crafting and review. You can tell because the NIH keeps having to issue clarifications about what it is, and is not. The latest is from Office of Extramural Research honcho Mike Lauer at his blog:

Clarifying what is meant by scientific premise
Scientific premise refers to the rigor of the prior research being cited as key support for the research question(s). For instance, a proposal might note prior studies had inadequate sample sizes. To help both applicants and reviewers describe and assess the rigor of the prior research cited as key support for the proposal, we plan to revise application instructions and review criteria to clarify the language.

Under Significance, the applicant will be asked to describe the strengths and weaknesses in the rigor of the prior research (both published and unpublished) that serves as the key support for the proposed project. Under Approach, the applicant will be asked to describe plans to address weaknesses in the rigor of the prior research that serves as the key support for the proposed project. These revisions are planned for research and mentored career development award applications that come in for the January 25, 2019 due date and beyond. Be on the lookout for guide notices.

My first thought was...great. Fan-friggin-tastic.

You are going to be asked to be more pointed about how the prior research all sucks. No more just saying things about too few studies, variances between different related findings or a pablum offer that it needs more research. Oh no. You are going to have to call papers out for inadequate sample size, poor design, bad interpretation, using the wrong parameters or reagents or, pertinent to a recent twitter discussion, running their behavioral studies in the inactive part of the rodent daily cycle.

Now I don't know about all of y'all, but the study sections that review my grants have a tendency to be populated with authors of papers that I cite. Or by their academic progeny or mentors. Or perhaps their tight science homies that they organize symposia and conferences with. Or at the very least their subfield collective peeps that all use the same flawed methods/approaches.

The SABV requirement has, quite frankly, been bad ENOUGH on this score. I really don't need this extra NIH requirement to be even more pointed about the limitations of prior literature that we propose to set about addressing with more studies.


On PO/PI interactions to steer the grant to the PI's laboratory

Jun 18 2018 Published by under Alcohol, NIH, NIH funding

There has been a working group of the Advisory Committee to the Director (of NIH, aka Francis Collins) which has been examining the Moderate Alcohol and Cardiovascular Health Trial in the wake of a hullabaloo that broke into public earlier this year. Background on this from Jocelyn Kaiser at Science, from the NYT, and the WaPo. (I took up the sleazy tactics of the alleged profession of journalism on this issue here.)

The working group's report is available now [pdf].

Page 7 of that report:

There were sustained interactions (from at least 2013) between the eventual Principal Investigator (PI) of the MACH trial and three members of NIAAA leadership prior to, and during development of, FOAs for planning and main grants to fund the MACH trial

These interactions appear to have provided the eventual PI with a competitive advantage not available to other applicants, and effectively steered funding to this investigator

Page 11:

NIH Institutes, Centers, and Offices (ICOs) should ensure that program staff do not inappropriately provide non-public information, or engage in deliberations that either give the appearance of, or provide, an advantage to any single, or subset of, investigator(s)

The NIH should examine additional measures to assiduously avoid providing, or giving the appearance of providing, an advantage to any single, or subset of, investigator(s) (for example, in guiding the scientific substance of preparing grant applications or responding to reviewer comments)

The webcast of the meeting of the ACD on Day 2 covers the relevant territory but is not yet available in archived format. I was hoping to find the part where Collins apparently expressed himself on this topic, as described here.

In the wake of the decision, Collins said NIH officials would examine other industry-NIH ties to make sure proper procedures have been followed, and seek out even “subtle examples of cozy relationships” that might undermine research integrity.

When I saw all of this I could only wonder if Francis Collins is familiar with the RFA process at the NIH.

If you read RFAs and take the trouble to see what gets funded out of them you come to the firm belief that there are a LOT of "sustained interactions" between the PO(s) that are pushing the RFA and the PI that is highly desired to be the lucky awardee. The text of the RFAs in and of themselves often "giv(e) the appearance of providing, an advantage to any single, or subset of, investigator(s)". And they sure as heck provide certain PIs with "a competitive advantage not available to other applicants".

This is the way RFAs work. I am convinced. It is going to take a huge mountain of evidence to the contrary to counter this impression which can be reinforced by looking at some of the RFAs in your closest fields of interest and seeing who gets funded and for what. If Collins cares to include failed grant applications from those PIs that lead up to the RFA being generated (in some cases) I bet he finds that this also supports the impression.

I really wonder sometimes.

I wonder if NIH officialdom is really this clueless about how their system works?

...or do they just have zero compunction about dissembling when they know full well that these cozy little interactions between PO and favored PI working to define Funding Opportunity Announcements are fairly common?

Disclaimer: As always, Dear Reader, I have related experiences. I've competed unsuccessfully on more than one occasion for a targeted FOA where the award went to the very obvious suspect lab. I've also competed successfully for funding on a topic for which I originally sought funding under those targeted FOAs- that takes the sting out. A little. I also suspect I have at least once received grant funding that could fairly be said to be the result of "sustained interactions" between me and Program staff that provided me "a competitive advantage" although I don't know the extent to which this was not available to other PIs.


The Purchasing Power of the NIH Grant Continues to Erode

It has been some time since I made a figure depicting the erosion of the purchasing power of the NIH grant so this post is simply an excuse to update the figure.

In brief, the NIH modular budget system used for a lot of R01 awards limits the request to $250,000 in direct costs per year. A PI can ask for more, but they have to use a more detailed budgeting process, and there are a bunch of reasons I'm not going to go into here that make the "full-modular" a good starting point for discussion of the purchasing power of the typical NIH award.

The full modular limit was put in place at the inception of this system (i.e., for applications submitted after 6/1/1999) and has not been changed since. I've used FY2001 as my starting point for the $250,000 and then adjusted it in two ways according to the year-by-year BRDPI* inflation numbers. The red bars indicate the reduction in purchasing power of a static $250,000 direct cost amount. The black bars indicate the amount the full-modular limit would have to be escalated year over year to retain the same purchasing power that $250,000 conferred in 2001.

[Figure: BRDPI-adjusted purchasing power of the $250,000 full-modular limit, FY2001 onward]

The executive summary is that the NIH would have to increase the modular limit to $400,000** per year (not $450,000, as originally stated) in direct costs for FY2018 in order for PIs to have the same purchasing power that came with a full-modular grant award in 2001.
*The BRDPI inflation numbers that I used can be downloaded from the NIH Office of Budget. The 2017 and 2018 numbers are projected.

**I blew it. The BRDPI spreadsheet actually projects inflation out to 2023 and I pulled the number from 2021 projection. The correct FY2018 equivalent is $413,020.
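The adjustment itself is just compound inflation. A minimal sketch of the calculation described above (note: the flat 3% annual rate is an illustrative placeholder, not the actual BRDPI series published by the NIH Office of Budget, which varies year to year):

```python
# Sketch of the BRDPI-style adjustment described above.
# ASSUMPTION: a flat 3% annual rate stands in for the real BRDPI series.
FULL_MODULAR_2001 = 250_000

def adjusted_limit(base, annual_rates):
    """Compound a base direct-cost limit through a sequence of
    year-over-year inflation rates (e.g., 0.03 for 3%)."""
    for r in annual_rates:
        base *= 1 + r
    return base

# 17 years of the assumed 3% rate takes FY2001 to FY2018.
equiv = adjusted_limit(FULL_MODULAR_2001, [0.03] * 17)
print(f"FY2018 equivalent of the $250k modular cap: ${equiv:,.0f}")
```

With the actual BRDPI numbers the corrected figure is $413,020; the flat 3% placeholder lands in the same ballpark, which is the point: an unchanged cap quietly loses a large fraction of its purchasing power over two decades.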


Stability of funding versus the project-based funding model of the NIH

May 09 2018 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

In response to a prior post, Morgan Price wonders about the apparent conflict between NIH's recent goal of stabilizing research funding and its supposed "project-based" model.

I don't see how stability based funding is consistent with project-based funding and "funding the best science". It would be a radical change...?

NIH grants are supposed to be selected and awarded on the basis of the specific project that is proposed. That is why there is such extensive detailing of a very specific area of science, well specified Specific (not General!) Aims and a listing of specific experiments.

They are not awarded on the basis of a general program of research that seems to be promising for continued funding.

Note that there are indeed mechanisms of funding that operate at the program level to a much greater extent, HHMI being one of the more famous of these. In a program-based award, the emphasis is on what the investigating team (and generally this means specifically the PI) has accomplished and published in recent years. There may be some hints about what the person plans to work on next, but generally the emphasis is on past performance rather than the specific nature of the future plan.

In the recent handwringing from NIH about how to sustain the investigators they have launched with special consideration for their newcomer status (e.g., Early Stage Investigator PI applications can be funded at lower priority scores / percentile ranks than would be needed by an established investigator), Collins and Lauer observe that:

if we are going to nurture meritorious, productive mid-career investigators by stabilizing their funding streams, monies will have to come from somewhere.

"Stabilizing", Morgan Price assumes is the same thing as a radical change. It is not.

Here's the trick:

The NIH funding system has always been a hybrid which pays lip service to "project based funding" as a model while blithely using substantial, but variable, input from the "program based" logic. First off, the "Investigator" criterion of proposal review is one of 5 supposedly co-equal major criteria. The Biosketch (which details the past accomplishments and skills of the PI) is prominent in the application. This Biosketch lists both papers and prior research grant support*, which inevitably leads to some degree of assessment of how productive the PI was with her prior awards. This then is used to judge the merit of the proposal that is under current review - sounds just a bit like HHMI, doesn't it?

The competing continuation application (called a Renewal application now) is another NIH beast that reveals the hybrid nature of the selection system. You are allowed to ask for no more than 5 years of support for a given project, but you can then ask for successive five year extensions via competitive application review. This type of proposal requires a "Progress Report" and a list of papers resulting from the project within the application. This, quite obviously, focuses the review in large part on past accomplishment. Now, sure, the application also has to have a detailed proposal for the next interval. Specific Aims. Experiments listed. But it also has all of the prior accomplishments pushed into the center of the review.

So what is the problem? Why are Collins and Lauer proposing to make the NIH grant selection even more based on the research program? Well, times have changed. The figure here is a bit dated by now but I like to keep refreshing your view of it because NIH has this nasty tendency to truncate their graphs to only the past decade or so. The NIH does this to obscure just how good investigators had things in the 80s. That was when established investigators enjoyed success rates north of 40%. For all applications, not just for competing renewals. Many of the people who started their careers in those wonderful days are still very much with us, by the way. This graph shows that within a few years of the end of the doubling, the success rates for established investigators had dropped to about where the new investigators were in the 1980s. Success rates have only continued to get worse but thanks to policies enacted by Zerhouni, the established and new investigator success rates have been almost identical since 2007.
Interestingly, one of the things Zerhouni had to do was to insist that Program change their exception pay behavior. (This graph was recreated from a GAO report [PDF], page down to Page 56, PDF page 60.) It is relevant because it points to yet another way that the NIH system used to prioritize program qualities over the project qualities. POs historically were much more interested in "saving" previously funded, now unfunded, labs than they were in saving not-yet-funded labs.

Now we get to Morgan Price's point about "the best science". Should the NIH system be purely project-based? Can we get the best science one 5 year plan at a time?

I say no. Five years is not enough time to spool up a project of any heft into a well-honed and highly productive gig. Successful intervals of 5 year grants depend on what has come before to a very large extent. Oftentimes, adding the next 5 years of funding via Renewal leads to an even more productive time because it leverages what has come before. Stepping back a little bit, gaps in funding can be deadly for a project. A project that has been killed off just as it is getting good is not only not the "best" science, it is hindered science. A lack of stability across the NIH system has the effect of making all of its work even more expensive, because something headed off in Lab 1 (due to gaps in funding) can only be started up in Lab 2 at a handicap. Sure, Lab 2 can leverage the published results of Lab 1, but not the unpublished stuff and not all of the various forms of expertise locked up in the Lab 1 staff's heads.

Of course if too much of the NIH allocation goes to sinecure program-based funding to continue long-running research programs, this leads to another kind of inefficiency. The inefficiencies of opportunity cost, stagnation, inflexibility and dead-woodery.

So there is a balance. Which no doubt fails to satisfy most everyone's preferences.

Collins and Lauer propose to do a bit of re-balancing of the program-based versus project-based relationship, particularly when it comes to younger investigators. This is not radical change. It might even be viewed in part as a selective restoration of past realities of grant funded science careers.

*In theory the PI's grants are listed on the Biosketch merely to show the PI is capable of leading a project something like the one under review. Correspondingly, it would in theory be okay to just list the most successful ones and leave out the grant awards with under-impressive outcomes. After all, do you have to put in every paper? no. Do you have to put every bit of bad data that you thought might be preliminary data into the app? no. So why do you have to** list all of your grants? This is the program-based aspects of the system at work.

**dude, you have to. this is one of those culture of review things. You will be looked up on RePORTER and woe be to you if you try to hide some project, successful or not, that has active funding within the past three years.

