Archive for the 'NIH funding' category

NIGMS MIRA for ESI/NI differs slightly. Ok, fundamentally.

Jun 03 2015 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

NIGMS has been attempting to grapple with the problem of stability in research funding for its extramural awardees. Which is a great thing to focus on, given the instability of recent years and the time and effort that PIs and their laboratories waste on maintaining stable funding.

In January NIGMS launched the MIRA program (R35 mechanism) to issue 5-year awards (instead of the current NIGMS average of 4) of up to $750,000 in direct costs. The idea is that current NIGMS awardees would consolidate their existing NIGMS awards into this one R35, promise to devote at least 51% of their research effort to it and, overall, take less in NIGMS funding.

The immediate objections were severalfold but more or less focused on why the already privileged NIGMS stalwarts, holding three or more concurrent full-modular ($250,000 direct) awards' worth of funding, should now get this extra insulation from the review process. If the limited number of such individuals selected for MIRA now had research funds that were isolated from the grasp of peer review (the fifth year; the all-or-none nature of the $750,000 direct in one award), then obviously the unlucky would be further disadvantaged.

One such pool of the unlucky would be the ESI (and NI) investigators.

NIGMS assured us that they were planning to extend MIRA to ESI/NI in the very near future. Peter Preusch commented:

We plan to issue a MIRA funding opportunity for early stage investigators as quickly as possible. We hope the first application due date will be sometime this summer.

Well, RFA-GM-16-003 has arrived. And it is nothing like the real MIRA for the highly established insider club* of NIGMS extramural funding.

1) It is limited to $250,000 in direct costs
2) It will be for the duration of the "current average of R01 awards to new investigators", read: 4 years, I assume. Even if the current average is 5, this can change. Why not just write in 5 years, as for the main MIRA?
3) Competing renewals "may" be allowed to increase substantially. There is of course no guarantee of this and if they were serious they could have simply written in language such as "the second interval will increase the limit to $500K direct and the third to $750K direct". They did not.

This is either ridiculously ill-considered or a cynical figleaf designed to give political cover for the excesses of the real target, the MIRA for the highly-established.

Here is what is so fundamentally foot-shooting about this, if you assume that NIGMS has any interest in shepherding the careers of its future stalwarts. The current stalwarts they are trying to protect are multi-grant awardees: three full-modular awards, or two-plus if you assume one of those awards is a traditional budget running up to the stiff (but not insurmountable) $500K limit. Yet here they are taking what might be thought of as this same population at an earlier career stage and making sure they get only one full-modular award's worth of NIGMS funding from the start. This is insanely ill-considered.
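To put numbers on the asymmetry, here is a minimal back-of-the-envelope sketch using only the direct-cost figures quoted in this post (the comparison is my framing, not NIGMS language):

```python
# Back-of-the-envelope on the two MIRA caps, using the direct-cost
# figures quoted in this post (not official NIGMS language).

FULL_MODULAR = 250_000    # direct costs/year for one full-modular R01
REAL_MIRA_CAP = 750_000   # direct costs/year, established-investigator MIRA
BABY_MIRA_CAP = 250_000   # direct costs/year, ESI MIRA per RFA-GM-16-003

# A three-grant "stalwart" portfolio exactly matches the real MIRA cap...
assert 3 * FULL_MODULAR == REAL_MIRA_CAP

# ...while the ESI version starts (and may well stay) at one grant's worth.
print(f"Established MIRA cap: ${REAL_MIRA_CAP:,}/yr")
print(f"ESI MIRA cap:         ${BABY_MIRA_CAP:,}/yr "
      f"({REAL_MIRA_CAP // BABY_MIRA_CAP}x smaller)")
```

In other words, the real MIRA cap is exactly a three-full-modular portfolio, while the ESI who is supposed to grow into that portfolio is capped at its smallest unit.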

And no ESI PI who thinks of herself as a future multi-grant NIGMS stalwart (and perhaps real-MIRA qualified) should have any interest in this baby MIRA whatsoever. All it comes with are limits for such a PI.

A secondary consideration is the review of such applications. Wisely, NIGMS has made this an RFA, which means it gets to design its own review panels.

This is wise because these special-flower-protection grants (real MIRA and baby-MIRA alike) stand a good risk of getting shredded in regular study sections. I'm thinking there is a good risk of them getting shredded in whatever SEPs they manage to convene too, unless they do a good job of selecting quid-pro-quo qualified reviewers.

Related Aside: BigMechs like Program Projects and Centers are very often reviewed by panels of other BigMech Program Directors and component PIs. This is consistent with the general requirement that grants should be reviewed by panels with like experience. However, this lets in a great deal of quid-pro-quo reviewing, in the sense that the reviewers know these applicants will be coming back to review their Boondoggles, sorry, BigMechs, when they are up for competing review. Thus, these mechanisms are very unlikely to face review of the kind that disagrees fundamentally with the concept of the BigMech. Unlikely to get anyone saying "none of this is worth the cost, these shouldn't be funded and the money should be put back** in the R-mech pool".

Regular R-mech study sections are disproportionately staffed by midcareer scientists. And, given the likely number of MIRAs on offer, disproportionately staffed by scientists who will not feel like they have the slightest chance at a MIRA award. I predict a good deal of skepticism from the general reviewer about these R35 mechanisms and I predict very bad scores.

Which is why NIGMS will have to be careful to cherry pick a quid-pro-quo qualified reviewer pool. And, as is usually the case with BigMechs, be prepared to fund them with scores that would not be remotely competitive for regular R01 review.

*Remember, PIs with more than two concurrent RPGs are less than 9% of the entire NIH-funded population (in FY2009, according to Rockey). How many can there be with three or more NIGMS awards?

**There are some technicalities with pools of $$ that make this slightly more complicated, but you get the flavor.

35 responses so far

Thought on the Ginther report on NIH funding disparity

May 24 2015 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

I had a thought about Ginther just after hearing a radio piece on the Asian-Americans who are suing Harvard over admissions discrimination.

The charge is that Asian-American students need better grades and scores than white students to receive an admissions bid.

The discussion of the Ginther study revolved around the finding that African-American applicant PIs were less likely than PIs of other groups to receive NIH grant funding. The focus landed there because other groups, Asian-Americans for example, did as well as white PIs. Our default stance, I assume, is that being a white PI is the best that it gets. So if another group does as well, we take this as evidence of a lack of bias.

But what if Asian-American PIs submit higher-quality applications as a group? How would we ever know if there was discrimination against them in NIH grant award?

20 responses so far

Thoughts on NIH grant strategy from Associate Professor H. Solo

We spend a fair amount of time talking about grant strategy on this blog. Presumably, this is a reflection of an internal process many of us go through in trying to decide how to distribute our grant-writing effort so as to maximize our chances of getting funded. After all, we have better things to do than write grants.

So we scrutinize success rates for various ICs, mechanisms, FOAs, etc., as best we are able. We flog RePORTER for evidence of which study sections will be most sympathetic to our proposals and how to cast our applications so as to be attractive. We worry about how to construct our Biosketch and whom to include as consultants or collaborators. We obsess over how much preliminary data is enough (and how much is too much*).

This is all well and good and maybe it even helps.

But at some level, you have to follow your gut, too. Even when the odds seem overwhelmingly bad, there are going to be times when dang it, you just feel like this is the right thing to do.

Submitting an R01 on very thin preliminary data because the project just doesn't work as an R21, perhaps.

Proposing an R03-scope project even if the relevant study section has only one** of them funded on the RePORTER books.

Submitting your proposal when the PO who will likely be handling it has already told you she hates your Aims***.

Revising that application that has been triaged twice**** and sending it back in as an A2-as-A0 proposal.

I would just advise that you take a balanced approach. Make your riskier attempts, sure, but balance those with some less risky applications too.

I view it as....experimenting.

*Just got a question about presenting too much preliminary data the other day.

**Of course you want to make sure there is not a structural issue at work, such as the section having stopped reviewing this mechanism two years ago.

***1-2%ile scores have a way of softening the stony cold heart of a Program Officer. Within-payline skips are very, very rare beasts.

****One of my least strategic behaviors may be revising grants that have been triaged. Not sure I've ever had one funded after initial triage and yet I persist. Less so now than I used to, but...I have a tendency. Hard-headed and stupid, maybe.

13 responses so far

Fighting with the New Biosketch format

I have been flailing around, off and on for a few months, trying to write my Biosketch into the new format [Word doc Instructions and Sample].


I am not someone who likes to prance around bragging about "discoveries" and unique contributions and how my lab's work is so awesomely unique because, let's face it, I don't do that kind of work. I am much more of a workaday type of scientist who likes to demonstrate stuff that has never been shown before. I like to answer seemingly obvious questions for which there should be lots of literature, but then it turns out that there is not. I like to work on what interests me about the world and I am mostly uninterested in what some gang of screechy monkey GlamourHumpers think is the latest and greatest.


This is getting in the way of my ability to:

Briefly describe up to five of your most significant contributions to science. For each contribution, indicate the historical background that frames the scientific problem; the central finding(s); the influence of the finding(s) on the progress of science or the application of those finding(s) to health or technology; and your specific role in the described work.

Now, interestingly, it was someone who works in a way most unlike my own who showed me the light. Actually, he gave me the courage to think about ignoring this supposed charge in the sample/instruction document. This person recommended just writing a brief sentence or two about each area of work without trying to contextualize the importance or significance of the "contribution". I believe I actually saw one of the five permitted subheadings on his version that was more or less "And here's some other stuff we work on that wasn't easily categorized with the rest of it."

I am at least starting from this minimalist standpoint. I don't know if I will have the courage to actually submit it like this, but I'm leaning towards doing so.

I have been hearing from quite a number of you that you are struggling with creating this new version of the NIH Biosketch. So I thought I'd open it up to comment and observation. Anyone have any brilliant solutions / approaches to recommend?

One of the things that has been bothering me most about this is that it takes the focus off the work that is specific to the particular application in question. In the previous version of the Biosketch, you selected 15 pubs that were most directly relevant to the topic at hand. These may not be your "most significant contributions" but they are the ones that are most significant for the newly proposed studies.

If one is now to list "your most significant contributions", well, presumably some of these may not have much to do with the current application. And if you take the five sections seriously, it is hard to parse the subset of your work that is relevant to one focal R01-sized project into multiple headings and still show how those particular aspects are a significant contribution.

I still think it is ridiculous that they didn't simply make this an optional way to do the Biosketch, so as to accommodate those people who needed to talk about non-published scholarly works.

64 responses so far

McKnight posts an analysis of NIH peer review

Apr 08 2015 Published by under NIH, NIH Budgets and Economics, NIH funding, Peer Review


In his latest column at ASBMB Today, Steve McKnight attempts to further his assertion that peer review of NIH grants needs to be revamped so that more qualified reviewers are doing the deciding about what gets funded.

He starts off with a comment that further reveals his naivete and noobitude when it comes to these issues.

Reviewers judge the application using five criteria: significance, investigator, innovation, approach and environment. Although study sections may weigh the importance of these criteria to differing degrees, it seems to me that feasibility of success of the proposed research plan (approach) tends to dominate. I will endeavor to provide a quantitative assessment of this in next month’s essay.

The NIH, led by then-NIGMS Director Berg, already provided this assessment. Ages ago. Try to keep up. I mention this because it is becoming an obvious trend that McKnight (and, keep in mind, many of his co-travelers who don't reveal their ignorance quite so publicly) spouts off his ill-informed opinions without the benefit of the data that you, Dear Reader, have been grappling with for several years now.

As reported last month, 72 percent of reviewers serving the HHMI are members of the National Academy of Sciences. How do things compare at the NIH? Data kindly provided by the CSR indicate that there were 7,886 reviewers on its standing study sections in 2014. Evaluation of these data reveals the following:

48 out of 324 HHMI investigators (15 percent) participated in at least one study section meeting.
47 out of 488 NIH-funded NAS members (10 percent) participated in at least one study section meeting.
11 of these reviewers are both funded by HHMI and NAS members.

These 84 scientists constituted roughly 1.1 percent of the reviewer cadre utilized by the CSR.

This tells us nearly nothing of importance. How many investigators from other pertinent slices of the distribution serve? ASBMB members, for example? PIs from the top 20, 50, 100 funded Universities and Medical Schools? How many applications do NAS / HHMI investigators submit each year? In short, are they over- or under-represented in the NIH review system?
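His overlap arithmetic does at least check out. Here is a quick inclusion-exclusion sanity check, using only the numbers quoted above (a sketch of the math, not anything from McKnight's column itself):

```python
# Sanity check on McKnight's reviewer counts via inclusion-exclusion.
# All figures are from the column as quoted above.
hhmi_reviewers = 48          # of 324 HHMI investigators on a study section
nas_reviewers = 47           # of 488 NIH-funded NAS members on a study section
both = 11                    # reviewers who are HHMI-funded AND NAS members
total_csr_reviewers = 7_886  # CSR standing study section reviewers in 2014

unique_elite = hhmi_reviewers + nas_reviewers - both  # = 84
share = unique_elite / total_csr_reviewers
print(f"{unique_elite} reviewers = {share:.1%} of the CSR pool")  # 1.1%
```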

Anyway, why focus on these folks?

I have focused on the HHMI investigators and NAS members because it is straightforward to identify them and quantify their participation in the review process. It is my belief that HHMI investigators and NIH-funded members of the NAS are substantively accomplished. I readily admit that scientific accomplishment does not necessarily equate to effective capacity to review. I do, however, believe that a reasonable correlation exists between past scientific accomplishment and capacity to choose effectively between good and poor bets. This contention is open for debate and is — to me — of significant importance.

So confused. First, we get the supposed rationale that these elite scientists were chosen because they are readily discernible amongst a host of well-qualified folks, aka the Street Lamp excuse. Next we get a ready admission that the entire thesis he's been pursuing since the riff-raff column is flawed, followed immediately by a restatement of his position based on..."belief". While admitting it is open to debate.

So how has he moved the discussion forward? All that we have at this point is his continued assertion of his position. The data on study section participation do exactly nothing to address his point.

Third, it is clear that HHMI investigators and NIH-funded members of the NAS participate in study sections charged with the review of basic research to a far greater extent than clinical research. It is my belief that study sections involving HHMI investigators and NAS members benefit from the involvement of highly accomplished scientists. If that is correct, the quality of certain basic science study sections may be high.

Without additional information this could be an entirely circular argument. If HHMI and NAS folks are selected disproportionately for their pursuit of basic science (I believe they are, Professor McKnight. Shall you accept my "belief" as we are expected to credit yours? Or perhaps should you have looked into this?) then of course they would be disproportionately represented on "basic" study sections. If only there were a clinically focused organization of elite good-old-backslappers-club folks to provide a suitable comparison of more clinically focused scientists.

McKnight closes with this:

I assume that it is a common desire of our biomedical community that all sources of funding, be they private or public, find their way to the support of our most qualified scientists — irrespective of age, gender, ethnicity, geographical location or any other variable. In subsequent essays, I will offer ideas as to how the NIH system of grant award distribution might be altered to meet this goal.

Nope. We want the funding to go to the most important science. Within that constraint we want the funding to go to highly qualified scientists, but we recognize that identifying "the most qualified" is a fool's errand. Other factors come into play. Such as "the most qualified who are not overloaded with other research projects at the moment". Or, "the most qualified who are not essentially carbon copies of the three other folks funded in similar research at the moment".

This is even before we get into the very thorny argument over qualifications and how we identify the "most" qualified for any particular purpose.

McKnight himself admits to this when he claims that there are lots of other qualified people but he selected HHMI/NAS out of mere convenience. I wonder if it will eventually trickle into his understanding that this mere convenience pollutes his entire thinking on this matter?

h/t: philapodia

51 responses so far

Repost: More data on historical success rates for NIH grants

Feb 17 2015 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

Our recent discussion of topics related to the Emeritus Award being considered by the NIH powers that be has been robust. I, of course, have been reminding one of the target demographic scientists that she and her generation have had a pretty good run under the NIH system. It seemed like a good moment to remind everyone that there are data upon which to base our understanding of how difficult it has and has not been for various scientific generations. Time to repost an older blog entry.

This was first posted 11 July 2012.

Thanks to a query from a reader off the blog and a resulting request from me, our blog-friend microfool pointed us to some data. Since I don't like tables, and the figure in the Excel file stinks, here is a different graphical depiction:

The red trace depicts success rates from 1962 to 2008 for R01 equivalents (R01, R23, R29, R37). Note that they are not broken down by experienced/new investigator status, nor are new applications distinguished from competing continuation applications. The blue line shows the total number of applications reviewed...which may or may not be of interest to you. [update 7/12/12: I forgot to mention that the data in the 60s are listed as "estimated" success rates.]

The bottom line here is that looking at the actual numbers can be handy when playing the latest round of "We had it tougher than you did" at the w(h)ine and cheese hour after departmental seminar. Success rates end at an unusually low point...and these numbers stop in 2008. We're seeing 15% for R01s (only) in FY2011.

Things are worse than they've ever been, and these dismal patterns have been sustained for much longer. If we look at the ~30% success rates that ruled the day from 1980 to 2003, the divergence from that trend ran from about 1989 to 1996, interrupted in the middle and, of course, showing steady improvement in the latter half. The badness that started in FY2004 has now run for 8 unrelieved Fiscal Years and shows no sign of abatement. Plus, the nadir (to date) is much lower.

Anyone who tries to tell you they had it as hard or harder at any time in the past versus now is high as a kite. Period.

Now, of course, it IS true that someone may have had it more difficult in the past than they do now, simply because it has always been harder for the inexperienced PIs to win their funding.

As we know from prior posts, career-stage differences matter a LOT. In the 80s, when the overall success rate was 30%, you can see that newcomers were at about 20% and established investigators were enjoying at least a 17-percentage-point advantage. (I think these data also conflate competing continuations with new applications, so there's another important factor buried in the "Experienced" trace.) Nevertheless, since the Experienced/New gap was similar from 1980 to 2006, we can probably assume it held true prior to that interval as well.
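For the curious, those numbers hang together arithmetically. A minimal sketch, assuming just two pools of applications (the derived share is my inference, not anything in the data):

```python
# Consistency check on the 1980s figures cited above. Assumes only two
# pools (new vs. experienced PIs); the derived share is an inference.
new_rate = 0.20          # newcomer success rate (~20%)
exp_rate = 0.20 + 0.17   # established rate, given a 17-point advantage
overall = 0.30           # overall success rate of the era

# overall = new_rate * (1 - s) + exp_rate * s, solved for the
# experienced share of applications s:
s = (overall - new_rate) / (exp_rate - new_rate)
print(f"Implied share of applications from experienced PIs: {s:.0%}")  # ~59%
```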

3 responses so far

Closeout funding

Feb 05 2015 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

Within the past three years or so I had a Program Officer mention the idea of "closeout funding" to me.

One of my top few flabbergasting moments as an extramural scientist.

It referred, of course, to them using program discretion to give a softer landing to one of their favored long-time PIs who had failed to get a fundable score on a competing renewal. It was said in a context that made it clear it was a regular thing in their decision space.

This explains an awful lot of strange R56 BRIDGE (to nowhere) awards, I thought to myself.

I bring this up because I think it relates to this week's discussion of the proposed "emeritus award" concept.

13 responses so far

Your Grant in Review: Credible

Jan 30 2015 Published by under NIH, NIH Careerism, NIH funding, Peer Review

I am motivated to once again point something out.

In ALL of my advice to submit grant applications to the NIH frequently and on a diversity of topic angles, there is one fundamental assumption.

That you always, always, always send in a credible application.

That is all.

17 responses so far

These ILAF types just can't help sounding selfishly elitist, can they?

Good Gravy.

One David Korn of the Massachusetts General Hospital and Harvard Medical School has written a letter to Nature defending the indirect cost (IDC; "overhead") rates associated with NIH grants. It was submitted in response to a prior piece in Nature on IDC which was, to my eye, actually fairly good and tended to support the notion that IDC rates are not exorbitant.

But overall, the data support administrators’ assertions that their actual recovery of indirect costs often falls well below their negotiated rates. Overall, the average negotiated rate is 53%, and the average reimbursed rate is 34%.
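To translate those percentages into dollars on a typical award, here is a simplified sketch (real IDC is applied to a modified direct-cost base that excludes items like equipment and tuition, so treat this as illustrative only):

```python
# How IDC ("overhead") rates cash out on a grant, using the average
# rates quoted above; the modified-total-direct-cost base is simplified.
direct = 250_000         # e.g., one full-modular R01 year
negotiated_rate = 0.53   # average negotiated IDC rate
reimbursed_rate = 0.34   # average actually-reimbursed IDC rate

print(f"Negotiated: ${direct * negotiated_rate:,.0f} on ${direct:,} direct")
print(f"Reimbursed: ${direct * reimbursed_rate:,.0f} on ${direct:,} direct")
# The gap between the two is the shortfall institutions say they eat.
```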

The original article also pointed out why the larger private Universities have been heard from loudly, while the frequent punching-bag smaller research institutes with larger IDC rates are silent.

Although non-profit institutes command high rates, together they got just $611 million of the NIH’s money for indirect costs. The higher-learning institutes for which Nature obtained data received $3.9 billion, with more than $1 billion of that going to just nine institutions, including Johns Hopkins University in Baltimore, Maryland, and Stanford (see ‘Top 10 earners’).

Clearly Dr. Korn felt that this piece needed correction:

Aspects of your report on US federal funding of direct research costs and the indirect costs of facilities and administration are misleading (Nature 515, 326–329; 2014).

Contrary to your claim, no one is benefiting from federal largesse. Rather, the US government is partially reimbursing research universities for audit-verified indirect costs that they have already incurred.

Ok, ok. Fair enough. At the very least it is fine to underline this point if it doesn't come across in the original Nature article to every reader.

The biomedical sciences depend on powerful technologies that require special housing, considerable energy consumption, and maintenance. Administration is being bloated by federal regulations, many of which dictate how scientists conduct and disseminate their research. It is therefore all the more remarkable that the share of extramural research spending on indirect costs by the US National Institutes of Health (NIH) has been stable at around 30% for several decades.

Pretty good point.

But then Korn goes on to step right in a pile.

Negotiated and actual recovery rates for indirect costs vary across the academic community because federal research funding is merit-based, not a welfare programme.

You will recognize this theme from a prior complaint from Boston-area institutions.

“There’s a battle between merit and egalitarianism,” said Dr. David Page, director of the Whitehead Institute, a prestigious research institution in Cambridge affiliated with MIT.

Tone deaf, guys. Totally tone deaf. Absolutely counter-productive to the effort to get a majority of Congress Critters on board with support for the NIH mission. Hint: Your various Massachusetts Critters get to vote once, just like the Critters from North and South Dakota, Alabama and everywhere else that doesn't have a huge NIH-funded research enterprise.

And why Korn chooses to use a comment about IDC rates to advance this agenda is baffling. The takeaway message is that he thinks higher IDC rates are awarded because His Awesome University deserves them due to the merit of its research. This totally undercuts the point he is trying to make, which is presumably that "institutions may be private or public, urban or rural, with different structures, sizes, missions and financial anatomies".

I just don't understand people who are this clueless and selfish when it comes to basic politics.

23 responses so far

NIGMS will now consider PIs' "substantial unrestricted research support"

According to the policy on this webpage, NIGMS will now restrict award of its grants when the applicant PI has substantial other research support. It is effective for new grants submitted on or after 2 January 2015.

The clear statement of purpose:

Investigators with substantial, long-term, unrestricted research support may generally hold no more than one NIGMS research grant.

The detail:

For the purposes of these guidelines, investigators with substantial, long-term, unrestricted support (“unrestricted investigators”) would have at least $400,000 in unrestricted support (direct costs excluding the principal investigator’s salary and direct support of widely shared institutional resources, such as NMR facilities) extending at least 2 years from the time of funding the NIGMS grant. As in all cases, if NIGMS funding of a grant to an investigator with substantial, long-term, unrestricted support would result in total direct costs from all sources exceeding $750,000, National Advisory General Medical Sciences Council approval would be required

This $400,000 threshold, extending for at least two years, would appear to mean $200,000 per year in direct costs? So, basically, the equivalent of a single additional R01's worth of direct cost funding?
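Here's how that reading cashes out, as a minimal sketch; the helper function is hypothetical and reflects my interpretation of the quoted policy, not official NIGMS guidance:

```python
# My reading of the quoted NIGMS policy, as a toy check; not official guidance.
UNRESTRICTED_FLOOR = 400_000   # direct costs, over a window of...
MIN_YEARS = 2                  # ...at least two years from NIGMS funding

print(f"${UNRESTRICTED_FLOOR / MIN_YEARS:,.0f} per year")  # $200,000/yr

def is_unrestricted_investigator(unrestricted_direct: int,
                                 years_remaining: float) -> bool:
    """Hypothetical helper: does a PI trip the 'substantial, long-term,
    unrestricted' test under my reading of the policy quoted above?
    (The base excludes PI salary and shared institutional resources.)"""
    return (unrestricted_direct >= UNRESTRICTED_FLOOR
            and years_remaining >= MIN_YEARS)
```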

I guess they are serious about the notion that two grants is fine but three-R01-level funding means you are a greedy commons-spoiling so-and-so.

51 responses so far
