Archive for the 'NIH Careerism' category

GAO report shows the continued NIH grant funding disparity for underrepresented PIs

Aug 15 2018 Published by under NIH, NIH Careerism, NIH funding, Underrepresented Groups

A comment from pielcanelaphd on a prior post tips us off to a new report (PDF) from the Government Accountability Office, described as a report to Congressional Committees.

The part of the report that deals with racial and ethnic disparities is mostly a recitation of the supposed steps NIH has been taking in the wake of the Ginther report in 2011. But most important is the inclusion of Figure 2, an updated depiction of the funding rate disparity.
GAO-18-545: NIH Research: Action Needed to Ensure Workforce Diversity Strategic Goals Are Achieved

These data are described mostly as the applicant funding rate or similar. The Ginther data focused on the success rate of applications from PIs of various groups. So if these data are by applicant PI and not by applications, there will be some small differences. Nevertheless, the point remains that things have not improved and PIs from underrepresented ethnic and racial groups experience a disparity relative to white PIs.

No responses yet

Senator Murray and Representative DeLauro Want to Know What NIH Is Doing About Sexual Harassment

Readers of this blog will not need much reminding that sexual harassment and sex-based workplace discrimination are very much a problem in academic science. We have seen numerous cases of this sort of academic misconduct reach the national and sometimes international press in the past several years. Indeed, recent discussions on this blog have mentioned the cases of Thomas Jessell and Inder Verma as well as three cases at Dartmouth College.

In these cases, and ones of scientific fraud, I and others have expressed frustration that the NIH does not appear to use what we see as its considerable power of the purse and bully pulpit to discourage future misconduct. My view is that since an NIH award is a privilege and not a right, the NIH could do a lot to help their recipient institutions see that taking cases of misconduct more seriously is in their (the recipient institution's) best interest. They could pull the grants associated with any PI who has been convicted of misconduct, instead of allowing the University to appoint a replacement PI. They could refuse to make any new awards or, less dramatically, decline to make any exception pickups if they aren't happy with the way the University has been dealing with misconduct. They could focus on training grants or F-mech fellowships if they see a particular problem in the treatment of trainees. Etc. Lots of room to work, since the NIH decides all the time to fund this grant and not that grant for reasons other than the strict order of review.

Well, two Democratic members of Congress have sent a letter (PDF) to NIH Director Francis Collins gently requesting* information on how NIH is addressing sexual harassment in the workplace. And the overall message is in line with the above belief that NIH can and should play a more active role in addressing sexual misconduct and harassment.

As pointed out in Mike the Mad Biologist's post on this letter, these two Congresspeople have a lot of potential power if the Democrats return to the majority.

are ranking members of committees that oversee NIH funding–and if the Democrats take back the House or Senate, would be the leaders of those committees.

One presumes that the NIH will be motivated to take this seriously and offer up some significant response. Hopefully they can do this by what seems a rather optimistic deadline of 8/17/2018, given the letter was dated 8/06/2018.

The first 6 listed items to which NIH is being asked to respond seem mostly to do with the workings of Intramural NIH, both Program and the IRP. Important as they are, those are of less interest as a driver of dramatic change.

Most importantly, the letter puts the NIH squarely on the hook for the way that it ensures that the extramural awardee institutions are behaving. Perhaps obviously, the power of NIH to oversee issues of harassment at all of the Universities, Institutes and companies that they fund is limited. The main justification cited in this letter is NOT-OD-15-152: Civil Rights Protections in NIH-Supported Research, Programs, Conferences and Other Activities.

To give you a flavor:

Federal civil rights laws prohibit discrimination on the basis of race, color, national origin, disability, and age in all programs and activities that receive Federal financial assistance, and prohibit discrimination on the basis of sex in educational programs or activities conducted by colleges and universities. These protections apply in all settings where research, educational programs, conferences, and other activities are supported by NIH, and apply to all mechanisms of support (i.e., grant awards, contracts and cooperative agreements). The civil rights laws protect NIH-supported investigators, students, fellows, postdocs, participants in research, and other individuals involved in activities supported by NIH.

The notice then goes on to list several specific statutes, some of which are referenced in footnotes to the letter.
The Murray/DeLauro letter concentrates on the obligation recipient institutions have to file an Assurance of Compliance with the Office for Civil Rights at Health and Human Services (NIH's parent organization) and the degree to which NIH exercises oversight of these Assurances.

I think the motivations of Senator Murray and Rep DeLauro are on full display in this passage (emphasis added).

"It therefore appears that NIH's only role...is confirming...institution has signed, dated, and mailed the compliance document....

This lack of engagement from NIH is particularly unacceptable in light of disturbing news reports that cases of sexual harassment in the academic sciences often involve high profile faculty offenders whose behavior is considered an 'open secret'.

...colleagues may have warned new faculty and students.....but institutions themselves take little to no action."

It is on.

__
*demanding

7 responses so far

When NIH uses affirmative action to fix a bias

Jul 20 2018 Published by under Anger, Fixing the NIH, NIH, NIH Careerism

We have just learned that in addition to the bias against black PIs when they try to get research funding (Ginther et al., 2011), Asian-American and African-American K99 applicants are also at a disadvantage. These issues trigger my usual remarks about how NIH has handled observed disparities in the past. In the spirit of pictures being worth more than words we can look up the latest update on success rates for RPG (a laundry list of research grant support mechanisms) broken down by two key factors.
First up is the success rate by the gender of the PI. As you can see very clearly, something changed in 2003. All of a sudden a sustained advantage for men disappeared. Actually two things happened: this disparity was "fixed", and the year after, success rates went in the tank for everyone. There are a couple of important observations. The NIH didn't suddenly fix whatever was going on in study section, I guaranfrickentee it. I also guarantee there were no magic changes in the pipeline, the female PI pool or anything else. I guarantee you that the NIH decided to equalize success rates by heavy-handed, top-down affirmative action policies in the nature of "make it so" and "fix this". I do not recall ever seeing anything formal so, hey, I could be way off base. If so, I look forward to any citation of information showing a change in the way they do business that coincided directly with the grants submitted for the FY2003 rounds.
The second thing to notice here is that women's success rates never exceeded that for men. Not for fifteen straight Fiscal Years. This further supports my hypothesis that the bias hasn't been fixed in some fundamental way. If it had been fixed, this would be random from year to year, correct? Sometimes the women's rates would sneak above the men's rates. That never happens. Because of course when we redress a bias, it can only ever just barely reach statistically indistinguishable parity and if god forbid the previously privileged class suffers even the tiniest little bit of disadvantage it is an outrage.
Finally, the fact that success rates went in the tank in 2004 should remind you that men enjoyed the advantage all during the great NIH doubling! The salad days. Lots of money available and STILL it was being disproportionately sucked up by the advantaged group. You might think that during an interval of largesse, systems would be more generous. A good time to slip a little extra to women, underrepresented individuals or the youth, right? Ha.

Which brings me to the fate of first-time investigators versus established investigators. Oh look, the never-funded were instantly brought up to parity in 2007. In this case a few years after the post-doubling success rates went in the toilet, but more or less the same pattern. Including the failure of the statistically indistinguishable success rates for the first timers ever, in 11 straight years of funding, to exceed the rates for established investigators. Because of affirmative action instead of fixing the bias. As you will recall, the head of the NIH at that time made it very clear that he was using "make it so" top-down, heavy-handed, quota-based affirmative action to accomplish this goal.

Zerhouni created special awards for young scientists but concluded that wasn't enough. In 2007, he set a target of funding 1500 new-investigator R01s, based on the previous 5 years' average.

Some program directors grumbled at first, NIH officials say, but came on board when NIH noticed a change in behavior by peer reviewers. Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni.

"quotas".

I do not recall much in the way of discussion of the "pipelines" and how we couldn't possibly do anything to change the bias of study sections until a new, larger and/or better class of female or not-previously-funded investigators could be trained up. The NIH just fixed it. ish. Permanently.

For FY2017 there were 16,954 applications with women PIs. 3,186 awards. If you take the ~3% gap from the interval prior to 2003, this means that the NIH is picking up some 508 research project grants from women PIs via their affirmative action process. Per year. If you apply the ~6% deficit enjoyed by first time investigators in the salad days you end up with 586 research project grants picked up by affirmative action. Now there will be some overlap of these populations. Women are PI of about 31% of applications in the data for the first graph and first timers are about 35% for the second. So very roughly women might be 181 of the affirmative action newbie apps and newbies might be 178 of the affirmative action women's apps. The estimates are close. So let's say something like 913 unique grants are picked up by the NIH just for these two overt affirmative action purposes. Each and every Fiscal Year.
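For the curious, the arithmetic in the estimate above can be sketched out directly. The inputs are the figures quoted in this post (not pulled fresh from NIH data), and the truncation mirrors the post's rounding, so treat this as a back-of-envelope check only.

```python
# Back-of-envelope check of the estimate above, using figures quoted in the post.
women_apps = 16954                      # FY2017 RPG applications with women PIs
women_pickups = int(women_apps * 0.03)  # ~3% pre-2003 gap -> ~508 grants/year

newbie_pickups = 586    # the post's estimate from the ~6% first-timer deficit

women_share = 0.31      # women PIs' share of all RPG applications
newbie_share = 0.35     # first-timers' share of all RPG applications
overlap = int(newbie_pickups * women_share)  # ~181 apps counted in both groups

# Subtract the rough overlap once to avoid double-counting.
unique_pickups = women_pickups + newbie_pickups - overlap  # ~913 per year
print(women_pickups, overlap, unique_pickups)
```

The overlap adjustment is crude (it assumes first-timer pickups have women PIs at the overall 31% rate), but either direction of the overlap estimate lands near 180, hence the ~913 figure.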

Because African-American PIs of research grants or K99 apps represent such tiny percentages of the total (2% in both cases), the number of pickups that would be necessary to equalize success rate disparities is tiny. In the K99 analysis, it was a mere 23 applications across a decade. Two per year. I don't have research grant numbers handy but if we use the data underlying the first graph, this means there were about 1,080 applications with African-American PIs in FY2017. If they hit the 19% success rate this would be about 205 applications. Ginther reported about a 13% success rate deficit, working out to 55% of the success rate enjoyed by white applicants at the time. This would correspond to a 10.5% success rate for black applicants now, or about 113 applications. So 92 would be needed to make up the difference for African-American PIs, assuming the Ginther disparity still holds. This would be less than one percent of the awards made.
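The same sort of arithmetic, with the numbers quoted in the paragraph above. The 10.5% figure is 55% of the 19% overall rate, rounded as in the post; again, these are the post's estimates, not official NIH counts.

```python
# Estimated pickups needed to equalize RPG success rates for black PIs, FY2017.
black_apps = 1080     # ~2% of all RPG applications
parity_rate = 0.19    # overall success rate
ginther_rate = 0.105  # ~55% of the white success rate, per the Ginther deficit

at_parity = int(black_apps * parity_rate)    # awards if funded at parity: ~205
at_ginther = int(black_apps * ginther_rate)  # awards at the Ginther rate: ~113

pickups_needed = at_parity - at_ginther      # ~92 extra awards per year
print(pickups_needed)
```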

Less than one percent. And keep in mind these are not gifts. These are making up for a screwjob. These are making up for the bias. If any applicants from male, established or white populations go unfunded to redress the bias, they are only losing their unearned advantage. Not being disadvantaged.

28 responses so far

Racial Disparity in K99 Awards and R00 Transitions

Oh, what a shocker.

In the wake of the 2011 Ginther finding [see archives on Ginther if you have been living under a rock] that there was a significant racial bias in NIH grant review, the concrete response of the NIH was to blame the pipeline. Their only real-dollar, funded initiatives were attempts to get more African-American trainees into the science pipeline. The obvious subtext here was that the current PIs, against whom the grant review bias was defined, must be the problem, not the victim. Right? If you spend all your time insisting that, since there were no red-fanged, white-hooded peer reviewers overtly proclaiming their hate for black people, peer review can't be the problem, and you put your tepid money initiatives into scraping up more trainees of color, you are saying the current black PIs deserve their fate. Current example: NIGMS trying to transition more underrepresented individuals into faculty ranks, rather than funding the ones that already exist.

Well, we have some news. The Rescuing Biomedical Research blog has a new post up on Examining the distribution of K99/R00 awards by race authored by Chris Pickett.

It reviews success rates of K99 applicants from 2007-2017. Application PI demographics broke down to nearly 2/3 White, ~1/3 Asian, 2% multiracial and 2% black. Success rates: White, 31%, Multiracial, 30.7%, Asian, 26.7%, Black, 16.2%. Conversion to R00 phase rates: White, 80%, Multiracial, 77%, Asian, 76%, Black, 60%.

In terms of Hispanic ethnicity, the rates were 26.9% for K99 success and 77% for conversion, neither significantly different from the non-Hispanic rates.

Of course, seeing as how the RBR people are the VerySeriousPeople considering the future of biomedical careers (sorry Jeremy Berg but you hang with these people), the Discussion is the usual throwing up of hands and excuse making.

"The source of this bias is not clear...". " an analysis ...could address". "There are several potential explanations for these data".

and of course
"put the onus on universities"

No. Heeeeeeyyyyyuuullll no. The onus is on the NIH. They are the ones with the problem.

And, as per usual, the fix is extraordinarily simple. As I repeatedly observe in the context of the Ginther finding, the NIH responded to a perception of a disparity in the funding of new investigators with immediate heavy-handed, top-down, quota-based affirmative action for many applications from ESI investigators. And now we have Round 2, where they are inventing new quota-based affirmative action policies for the second round of funding for these self-same applicants. Note well: the statistical beneficiaries of ESI affirmative action policies are white investigators.

The number of K99 applications from black candidates was 154 over 10 years. 25 of these were funded. To bring this up to the success rate enjoyed by white applicants, the NIH need only have funded 23 more K99s. Across 28 Institutes and Centers. Across 10 years, aka 30 funding cycles. One more per IC per decade to fix the disparity. Fixing the Asian bias would be a little steeper: they'd need to fund another 97, call it 10 per year. Across all 28 ICs.
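The K99 numbers work out the same way. A sketch with the post's figures follows; the total Asian application count isn't given in the post, so only the black-applicant arithmetic is checked here.

```python
# Extra K99 awards needed to equalize the black/white success rates, 2007-2017.
black_apps, black_funded = 154, 25
white_rate = 0.31

at_white_rate = round(black_apps * white_rate)  # 154 * 0.31 = 47.7 -> 48 awards
extra_needed = at_white_rate - black_funded     # 23 awards across a decade

per_year = extra_needed / 10          # ~2.3 per year, NIH-wide
per_ic_per_decade = extra_needed / 28  # <1 per Institute/Center per decade
print(extra_needed, per_year, per_ic_per_decade)
```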

Now that they know about this, just as with Ginther, the fix is duck soup. The Director pulls each IC Director aside in a quiet moment and says 'fix this'. That's it. That's all that would be required. And the Directors just commit to pick up one more Asian application every year or so and one more black application every, checks notes, decade and this is fixed.

This is what makes the NIH response to all of this so damn disturbing. It's rounding error. They pick up grants all the time for reasons way more biased and disturbing than this. Saving a BSD lab that allegedly ran out of funding. Handing out under the table Administrative Supplements for gawd knows what random purpose. Prioritizing the F32 applications from some labs over others. Ditto the K99 apps.

They just need to apply their usual set of glad handing biases to redress this systematic problem with the review and funding of people of color.

And they steadfastly refuse to do so.

For this one specific area of declared Programmatic interest.

When they pick up many, many more grants out of order of review for all their other varied Programmatic interests.

You* have to wonder why.
__
h/t @biochembelle

*and those people you are trying to lure into the pipeline, NIH? They are also wondering why they should join a rigged game like this one.

13 responses so far

Startup Funds That Expire On Grant Award

Jul 11 2018 Published by under Academics, Ask DrugMonkey, NIH Careerism

From the email bag:

My question is: Should institutions pull back start-up funds from new PIs if R01s or equivalents are obtained before funds are burned? Should there be an expiration date for these funds?

Should? Well no, in the best of all possible worlds of course we would wish PIs to retain all possible sources of support to launch their program.

I can, however, see the institutional rationale that startup is for just that, starting. And once in the system by getting a grant award, the thinking goes, a PI should be self-sustaining. Like a primed pump.

And those funds would be better spent on starting up the next lab's pump.

The expiration date version is related, and I assume is viewed as an inducement for the PI to go big or go home. To try. Hard. Instead of eking it out forever to support a lab that is technically in operation but not vigorously enough to land additional extramural funding.

Practically speaking, the message here is to always check the details of a startup package. If it expires on grant award, or after three years, it becomes important to convert as much of that startup as possible into useful Preliminary Data. Let it prime many pumps.

Thoughts, folks? This person was wondering if this is common. How do your departments handle startup funds?

10 responses so far

Trophy collaborations

Jul 05 2018 Published by under Conduct of Science, NIH, NIH Careerism

Jason Rasgon noted a phenomenon where one is asked to collaborate on a grant proposal but is jettisoned after the award is funded.

I'm sure there are cases where both parties amicably terminate the collaboration but the interesting case is where the PI or PD sheds another investigator without their assent.

Is this common? I can't remember hearing many cases of this. It has happened to me in a fairly minor way once but then again I have not done a whole lot of subs on other people's grants.

17 responses so far

Your Grant in Review: Scientific Premise

Jul 03 2018 Published by under NIH Careerism, NIH funding

Scientific premise has become the latest headache of uncertainty in NIH grant crafting and review. You can tell because the NIH keeps having to issue clarifications about what it is, and is not. The latest is from Office of Extramural Research honcho Mike Lauer at his blog:

Clarifying what is meant by scientific premise
Scientific premise refers to the rigor of the prior research being cited as key support for the research question(s). For instance, a proposal might note prior studies had inadequate sample sizes. To help both applicants and reviewers describe and assess the rigor of the prior research cited as key support for the proposal, we plan to revise application instructions and review criteria to clarify the language.

Under Significance, the applicant will be asked to describe the strengths and weaknesses in the rigor of the prior research (both published and unpublished) that serves as the key support for the proposed project. Under Approach, the applicant will be asked to describe plans to address weaknesses in the rigor of the prior research that serves as the key support for the proposed project. These revisions are planned for research and mentored career development award applications that come in for the January 25, 2019 due date and beyond. Be on the lookout for guide notices.

My first thought was...great. Fan-friggin-tastic.

You are going to be asked to be more pointed about how the prior research all sucks. No more just saying things about too few studies, variances between different related findings, or a pablum offering that "it needs more research". Oh no. You are going to have to call papers out for inadequate sample size, poor design, bad interpretation, using the wrong parameters or reagents or, pertinent to a recent twitter discussion, running their behavioral studies in the inactive part of the rodent daily cycle.

Now I don't know about all of y'all, but the study sections that review my grants have a tendency to be populated with authors of papers that I cite. Or by their academic progeny or mentors. Or perhaps their tight science homies that they organize symposia and conferences with. Or at the very least their subfield collective peeps that all use the same flawed methods/approaches.

The SABV requirement has, quite frankly, been bad ENOUGH on this score. I really don't need this extra NIH requirement to be even more pointed about the limitations of prior literature that we propose to set about addressing with more studies.

2 responses so far

Repost- Your Grant in Review: Competing Continuation, aka Renewal, Apps

May 11 2018 Published by under Fixing the NIH, NIH, NIH Careerism

Two recent posts discuss the topic of stabilizing NIH funding within a PI's career, triggered by a blog post from Mike Lauer and Francis Collins. In the latter, the two NIH honchos claim to be losing sleep over the uncertainty of funding in the NIH extramural granting system, specifically in application to those PIs who received funding as an ESI and are now trying to secure the next round of funding.

One key part of this, in my view, is how they (the NIH) and we (extramural researchers, particularly those reviewing applications for the NIH) think about the proper review of Renewal (formerly known as competing continuation) applications. I'm reposting some thoughts I had on this topic for your consideration.

This post originally appeared Jan 28, 2016.
___
In the NIH extramural grant funding world the maximum duration for a project is 5 years. It is possible at the end of a 5 year interval of support to apply to continue that project for another interval. The application for the next interval is competitively reviewed alongside of new project proposals in the relevant study sections, in general.

Comradde PhysioProffe addressed the continuation application at his Ftb joint. NIAID has a FAQ page.

The NIH Success Rate data shows that RPG success rates were 16.8% in 2013 and 18.1% in 2014. Comparable rates for competing continuation RPG applications were 35% in 2013 and 39% in 2014. So you can see why this is important.

I visited these themes before in a prior post. I think I covered most of the issues but in a slightly different way.

Today I want to try to get you folks to talk about prescriptives. How should a competing continuation / renewal NIH grant application be reviewed?

Now in my experience, the continuation application hinges on past-productivity in a way that a new application does not. Reviewers are explicitly considering the work that has been conducted under the support of the prior award. The application is supposed to include a list of publications that have resulted from the prior award. The application is supposed to detail a Progress Report that overviews what has been accomplished. So today I will be focusing on review mostly as it pertains to productivity. For reference, Berg's old post on the number of papers per grant dollar is here and shows an average output of 6 papers (IQR about 4-11) per $250K full modular award*.

Quoted bits are from my prior post.

Did you knock our socks off? This could be amazing ELEVENTY type findings, GlamourPub record (whether “expected” for your lab or not), unbelievably revolutionary advances, etc. If you have a record of this, nobody is going to think twice about what your Aims may have been. Probably won’t even give a hoot whether your work is a close match to the funding IC, for that matter.

We should probably separate these for discussion because after all, how often is a panel going to recognize a Nobel Prize type of publication has been supported by the award in the past 5 years? So maybe we should consider Glamour publications and amazing advances as two different scenarios. Are these going to push any renewal application over the hurdle for you even if the remaining items below are lacking? Does GlamMag substitute for direct attention to the experiments that were proposed or the Aims that guided the plan? In the extreme case, should we care if the work bears very little on the mission of the IC that has funded it?

Were you productive? Even if you didn’t WOW the world, if you’ve pumped out a respectable number of papers that have some discernible impact on a scientific field, you are in good shape. The more, the merrier. If you look “fabulously productive” and have contributed all kinds of interesting new science on the strength of your award(s), this is going to go down like gangbusters with the review panels. At this level of accomplishment you’d probably be safest at least be doing stuff that is vaguely in line with the IC that has funded your work.

Assuming that Glam may not be in the control of most PIs but that pedestrian, workaday scientific output is, should this be a major credit for the continuation application? We don't necessarily have to turn this into a LPU sausage-slicing discussion. Let's assume a quality of paper commensurate with the kind of work that most PIs with competitive applications in that particular study section publish. Meets the subfield standard. How important should raw productivity be?

Were you productive in addressing your overall goals? This is an important distinction from the Specific Aims. It is not necessary, in my view, that you hew closely to Aims first dreamed up 7 years prior to the conclusion of the actual study. But if you have moderate, or disappointing, productivity it is probably next most-helpful that you have published work related to the overall theme of the project. What was the big idea? What was mentioned in the first three sentences of your Specific Aims page? If you have published work related to this broad picture, that’s good.

This one is tricky. The reviewers do not have the prior grant application in front of them. They have the prior Summary Statement and the Abstract as published on RePORTER. It is a decent bet the prior Aims can be determined, but broader themes may or may not come across. So for the most part, if the applicant expects the reviewers to see that productivity has aligned with overarching programmatic goals, she has to tell them what those were. Presumably in the Progress Report part of the continuation application. How would you approach this as a reviewer? If the project wasn't overwhelmingly productive and didn't obviously address all of the Aims but at least generated some solid work along the general themes, are you going to be satisfied? Or are you going to downgrade the failure to address each Aim? What if the project had to can an entire Aim or two? Would it matter? Is getting "stuck" in a single Aim a death knell when it comes time to review the next interval of support? As a related question, what if the same exact Aim has returned with the argument of "We didn't get to this in the past five years but it is still a good idea"? Neutral? Negative? AYFK?

Did you address your original Specific Aims? ...this can be a big obsession of certain reviewers. Not saying it isn’t a good idea to have papers that you can connect clearly to your prior Aims. ... A grant is not a contract. It is quite natural in the course of actual science that you will change your approaches and priorities for experiments. Maybe you’ve been beaten to the punch. Maybe your ongoing studies tell you that your original predictions were bad and you need to go in a whole new direction. Maybe the field as a whole has moved on. ... You might want to squeeze a drop out of a dry well to meet the “addressed Aims” criterion but maybe that money, effort and time would be better spent on a new direction which will lead to three pubs instead of one?

My original formulation of this isn't quite right for today's discussion. The last part is actually more relevant to the preceding point. For today, expand this to a continuation application that shows that the prior work essentially covers exactly what the application proposed. With data either published or included as ready-to-submit Preliminary Data in the renewal. Maybe this was accomplished with only a few papers in pedestrian journals (Lord knows just about every one of my manuscript reviews these days gets at least one critique that calls for anywhere from 2 to 5 Specific Aims worth of data) so we're not talking about Glam or fabulous productivity. But should addressing all of the Aims and most if not all of the proposed experiments be enough? Is this a credit to a competing continuation application?

It will be unsurprising to you that by this point of my career, I've had competing continuation applications to which just about all of these scenarios apply, save Glam. We've had projects where we absolutely nailed everything we proposed to do. We've had projects get distracted/sidelined off onto a subsection of the proposal that nevertheless generated about the same number and quality of publications that would have otherwise resulted. We've had low productivity intervals of support that addressed all the Aims and ones that merely covered a subset of key themes. We've had projects with reasonably high productivity that have....wandered....from the specifics of the awarded proposal due to things that are happening in the subfield (including getting scooped). We've never been completely blanked on a project with zero related publications to my recollection, but we've had some very low productivity ones (albeit with excellent excuses).

I doubt we've ever had a perfect storm of sky-high productivity, all Aims addressed and the overarching themes satisfied. Certainly I have the review comments to suggest this**.

I have also been present during review panel discussions of continuation applications where reviewers have argued bitterly over the various productivity attributes of a prior interval of support. The "hugely productive" arguments are frequently over an application from a PI who has more than one award and tends to acknowledge more than one of them on each paper. This can also involve debates about so called "real scientific progress" versus papers published. This can be the Aims, the overall theme or just about the sneer of "they don't really do any interesting science".

I have for sure heard from people who are obsessed during review with whether each proposed experiment has been conducted (this was back in the days when summary statements could be fairly exhaustive and revealed what was in the prior application to a broader extent). More generally from reviewers who want to match publications up to the scope of the general scientific terrain described by the prior application.

I've also seen arguments about suggested controls or key additional experiments which were mentioned in the summary statement of the prior review, never addressed in the resulting publications and may still be a criticism of the renewal application.

Final question: since the reviewers of the competing continuation see the prior summary statement, they see the score and percentile. Does this affect you as a reviewer? Should it? Especially if, in your view, this particular application should never have been funded at that score and is likely a Programmatic pickup? Do you start steaming under the collar about special ESI paylines or bluehair/graybeard insider PO backslapping?

DISCLAIMER: As per usual, I may have competing continuation applications under current or near-future review by NIH study sections. I am an interested party in how they are reviewed.
__
*This probably speaks to my point about how multi-award PIs attribute each paper to more than one grant. My experience has not been that people in my field view 5 papers published per interval of support (and remember, the renewal application is submitted with the final year of funded support yet to go, if the project is to continue uninterrupted) as the expected value. It is certainly not viewed as the kind of fabulous productivity that would of course justify continuing the project. It is more in line with the bare minimum***. Berg's data are per-grant-dollar, of course, and not exactly the same as per-grant, but it is a close estimate. This blog post estimates "between 0.6 and 5 published papers per $100k in funding," which works out to roughly one to 12 per year of a full-modular NIH R01. Big range, and that high number seems nigh on impossible to me without other funding (like free trainee labor or data parasitism).
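For the curious, the arithmetic behind that per-year conversion is a one-liner. This is a minimal sketch, assuming the full-modular R01 direct-cost cap of $250K per year; the "one to 12" in the quoted post rounds the resulting range a bit:

```python
# Scale Berg-style papers-per-$100K estimates to papers per year on
# one grant. The $250K/year full-modular direct-cost cap is assumed.
MODULAR_CAP = 250_000  # dollars of direct costs per year (assumption)

def papers_per_year(papers_per_100k):
    """Convert a papers-per-$100K rate to papers per full-modular year."""
    return papers_per_100k * MODULAR_CAP / 100_000

low = papers_per_year(0.6)   # low end of the quoted estimate
high = papers_per_year(5.0)  # high end of the quoted estimate
print(f"{low:.1f} to {high:.1f} papers per year")  # 1.5 to 12.5 papers per year
```

So the low estimate works out to about 1.5 papers per year and the high one to about 12.5, for a single fully modular award with no other support.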

**and also a pronounced lack of success renewing projects to go with it.

***I do not personally agree. At the point of submitting a competing continuation in year 4 a brand new research program (whether b/c noob PI or very new lab direction) may have really only been rocking for 2 years. And large integrated projects like a big human subjects effort may not even have enrolled all the subjects yet. Breeding, longitudinal development studies, etc - there are many models that can all take a long time to get to the point of publishing data. These considerations play....let us say variably, with reviewers. IME.

No responses yet

Stability of funding versus the project-based funding model of the NIH

May 09 2018 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

In response to a prior post, Morgan Price wonders about the apparent conflict between NIH's recently stated goal of stabilizing research funding and its supposed "project-based" funding model.

I don't see how stability based funding is consistent with project-based funding and "funding the best science". It would be a radical change...?

NIH grants are supposed to be selected and awarded on the basis of the specific project that is proposed. That is why there is such extensive detailing of a very specific area of science, well specified Specific (not General!) Aims and a listing of specific experiments.

They are not awarded on the basis of a general program of research that seems to be promising for continued funding.

Note that there are indeed funding mechanisms that operate at the program level to a much greater extent, HHMI being one of the more famous of these. In a program-based award, the emphasis is on what the investigating team (and generally this means specifically the PI) has accomplished and published in recent years. There may be some hints about what the person plans to work on next, but generally the emphasis is on past performance rather than on the specific nature of the future plan.

In the recent handwringing from NIH about the fate of investigators that they have launched with special consideration for their newcomer status (e.g., Early Stage Investigator PI applications can be funded at lower priority scores / percentile ranks than would be needed by an established investigator), Collins and Lauer note:

if we are going to nurture meritorious, productive mid-career investigators by stabilizing their funding streams, monies will have to come from somewhere.

Morgan Price assumes that "stabilizing" is the same thing as a radical change. It is not.

Here's the trick:

The NIH funding system has always been a hybrid that pays lip service to "project-based funding" as a model while blithely using substantial, but variable, input from the "program-based" logic. First off, the "Investigator" criterion of proposal review is one of 5 supposedly co-equal major criteria. The Biosketch (which details the past accomplishments and skills of the PI) is prominent in the application. The Biosketch lists both papers and prior research grant support*, which inevitably leads to some degree of assessment of how productive the PI was with her prior awards. This is then used to judge the merit of the proposal under current review. Sounds just a bit like HHMI, doesn't it?

The competing continuation application (now called a Renewal application) is another NIH beast that reveals the hybrid nature of the selection system. You are allowed to ask for no more than 5 years of support for a given project, but you can then ask for successive five year extensions via competitive application review. This type of proposal requires a "Progress Report" and a list of papers resulting from the project within the application. This, quite obviously, focuses the review in large part on past accomplishment. Now, sure, the application also has to have a detailed proposal for the next interval. Specific Aims. Experiments listed. But it also has all of the prior accomplishments pushed into the center of the review.

So what is the problem? Why are Collins and Lauer proposing to make the NIH grant selection even more based on the research program? Well, times have changed. The figure here is a bit dated by now but I like to keep refreshing your view of it because NIH has this nasty tendency to truncate their graphs to only the past decade or so. The NIH does this to obscure just how good investigators had things in the 80s. That was when established investigators enjoyed success rates north of 40%. For all applications, not just for competing renewals. Many of the people who started their careers in those wonderful days are still very much with us, by the way. This graph shows that within a few years of the end of the doubling, the success rates for established investigators had dropped to about where the new investigators were in the 1980s. Success rates have only continued to get worse but thanks to policies enacted by Zerhouni, the established and new investigator success rates have been almost identical since 2007.
Interestingly, one of the things Zerhouni had to do was to insist that Program change their exception pay behavior. (This graph was recreated from a GAO report [PDF], page down to Page 56, PDF page 60.) It is relevant because it points to yet another way that the NIH system used to prioritize program qualities over the project qualities. POs historically were much more interested in "saving" previously funded, now unfunded, labs than they were in saving not-yet-funded labs.

Now we get to Morgan Price's point about "the best science". Should the NIH system be purely project-based? Can we get the best science one 5 year plan at a time?

I say no. Five years is not enough time to spool up a project of any heft into a well-honed and highly productive gig. Successful intervals of 5 year grants depend to a very large extent on what has come before. Oftentimes, adding the next 5 years of funding via Renewal leads to an even more productive interval because it leverages what has come before. Stepping back a little bit, gaps in funding can be deadly for a project. A project that has been killed off just as it is getting good is not only not the "best" science, it is hindered science. A lack of stability across the NIH system makes all of its work more expensive, because something halted in Lab 1 (due to gaps in funding) can only be started up in Lab 2 at a handicap. Sure, Lab 2 can leverage the published results of Lab 1, but not the unpublished stuff, and not all of the various forms of expertise locked up in the heads of Lab 1's staff.

Of course if too much of the NIH allocation goes to sinecure program-based funding to continue long-running research programs, this leads to another kind of inefficiency. The inefficiencies of opportunity cost, stagnation, inflexibility and dead-woodery.

So there is a balance. Which no doubt fails to satisfy most everyone's preferences.

Collins and Lauer propose to do a bit of re-balancing of the program-based versus project-based relationship, particularly when it comes to younger investigators. This is not radical change. It might even be viewed in part as a selective restoration of past realities of grant funded science careers.

__
*In theory the PI's grants are listed on the Biosketch merely to show the PI is capable of leading a project something like the one under review. Correspondingly, it would in theory be okay to just list the most successful ones and leave out the grant awards with under-impressive outcomes. After all, do you have to put in every paper? no. Do you have to put every bit of bad data that you thought might be preliminary data into the app? no. So why do you have to** list all of your grants? This is the program-based aspects of the system at work.

**dude, you have to. this is one of those culture of review things. You will be looked up on RePORTER and woe be to you if you try to hide some project, successful or not, that has active funding within the past three years.

14 responses so far

Addressing the Insomnia of Francis Collins and Mike Lauer

The Director of the NIH and the Deputy Director in charge of the Office of Extramural Research have posted a blog post about The Issue that Keeps Us Awake at Night. Judging from what they have written, it is the plight of the young investigator.


The Working Group is also wrestling with the issue that keeps us awake at night – considering how to make well-informed strategic investment decisions to nurture and further diversify the biomedical research workforce in an environment filled with high-stakes opportunity costs. If we are going to support more promising early career investigators, and if we are going to nurture meritorious, productive mid-career investigators by stabilizing their funding streams, monies will have to come from somewhere. That will likely mean some belt-tightening in other quarters, which is rarely welcomed by those whose belts are being taken in by a notch or two.

They plan to address this by relying on data and reports that are currently being generated. I suspect this will not be enough to address their goal.

I recently posted a link to the NIH summary of their history of trying to address the smooth transition of newly minted PIs into NIH-grant funded laboratories, without much comment. Most of my Readers are probably aware by now that handwringing from the NIH about the fate of new investigators has been an occasional feature since at least the Johnson Administration. The historical website details the most well known attempts to fix the problem. From the R23 to the R29 FIRST to the New Investigator check box, to the "sudden realization"* they needed to invent a true Noob New Investigator (ESI) category, to the latest designation of the aforementioned ESIs as Early Established Investigators for continued breaks and affirmative action. It should be obvious from the ongoing reinvention of the wheel that the NIH periodically recognizes that the most recent fix isn't working (and may have unintended detrimental consequences).

One of the reasons these attempts never truly work and have to be adjusted or scrapped and replaced by the next fun new attempt was identified by Zerhouni (a prior NIH Director) in about 2007. This was right after the "sudden realization" and the invention of the ESI. Zerhouni was quoted in a Science news bit as saying that study sections were responding to the ESI special payline boost by handing out ever worsening scores to the ESI applications.

Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni.

Now, I would argue that viewing this trend of worsening scores as "punishing" is at best only partially correct. We can broaden this to incorporate a simple appreciation that study sections adapt their biases, preferences and evolved cultural ideas about grant review to the extant rules. One way to view worsening ESI scores may have to do with the pronounced tendency reviewers have to think in terms of fund it / don't fund it, despite the fact that SROs regularly exhort them not to do this. When I was on study section regularly, the scores tended to pile up around the perceived payline. I've seen the data for one section across multiple rounds. Reviewers were pretty sensitive to the scuttlebutt about what sort of score was going to be a fundable one. So it would be no surprise whatsoever to me if there was a bias driven by this tendency, once it was announced that ESI applications would get a special (higher) payline for funding.

This tendency might also be driven in part by a "Get in line, youngun, don't get too big for your britches" phenomenon. I've written about this tendency a time or two. I came up as a postdoc towards the end of the R29 / FIRST award era and got a very explicit understanding that some established PIs thought that newbies had to get the R29 award as their first award. Presumably there was a worsening bias against giving out an R01 to a newly minted assistant professor as their first award**, because hey, the R29 was literally the FIRST award, amirite?

sigh.

Then we come to hazing, which is the even nastier relative of "Don't get too big for your britches". Oh, nobody will admit that it is hazing, but there is definitely a subcurrent of this in the review behavior of some people who think that noob PIs have to prove their worth by battling the system. If they sustain the effort to keep coming back with improved versions, then hey, join the club kiddo! (Here's an ice pack for the bruising). If the PI can't sustain the effort to submit a bunch of revisions and new attempts, hey, she doesn't really have what it takes, right? Ugh.

Scientific gate-keeping. This tends to cover a multitude of sins of various severity but there are definitely reviewers that want newcomers to their field to prove that they belong. Is this person really an alcohol researcher? Or is she just going to take our*** money and run away to do whatever basic science amazeballs sounded super innovative to the panel?

Career gate-keeping. We've gone many rounds on this one within the science blog- and twittospheres. Who "deserves" a grant? Well, reviewers have opinions and biases and despite their best intentions and wounded protestations...these attitudes affect review. In no particular order we can run down the favorite targets of the "Do it to Julia, not me, JULIA!" sentiment. Soft money job categories. High overhead Universities. Well funded labs. Translational research taking all the money away from good honest basic researchers***. Elite coastal Universities. Big Universities. R1s. The post-normative-retirement crowd. Riff-raff plodders.

Layered over the top of this is favoritism. It interacts with all of the above, of course. If some category of PI is to be discriminated against, there is very likely someone getting the benefit. The category of which people approve. Our club. Our kind. People who we like who must be allowed to keep their funding first, before we let some newbie get any sniff of a grant.

This, btw, is a place where the focus must land squarely on Program Officers as well. The POs have all the same biases mentioned above, of course. And their versions of the biases have meaningful impact. But when it comes to thought of "we must save our long term investigators" they have a very special role to play in this debacle. If they are not on board with the ESI worries that keep Collins and Lauer awake at night, well, they are ideally situated to sabotage the effort. Consciously or not.

So, Director Collins and Deputy Director Lauer, you have to fix study section and you have to fix Program if you expect to have any sort of lasting change.

I have only a few suggestions and none of this is a silver bullet.

I remain convinced that the only tried and true method to minimize the effects of biases (covert and overt) is the competition of opposing biases. I've remarked frequently that study sections would be improved and fairer if less-experienced investigators had more power. I think the purge of Assistant Professors effected by the last head of the CSR (Scarpa) was a mistake. I note that CSR is charged with balancing study sections on geography, sex, ethnicity, university type and even scientific subdomains...while explicitly discriminating against younger investigators. Is it any wonder if there is a problem getting the newcomers funded?

I suggest you also pay attention to fairness. I know you won't, because administrators invariably respond to a situation of perceived past injustice with "ok, that was the past and we can't do anything about it, moving forward please!". But this is going to limit your ability to shift the needle. People may not agree on what represents fair treatment but they sure as heck are motivated by fairness. Their perception of whether a new initiative is fair or unfair will tend to shape their behavior when reviewing. This can get in the way of NIH's new agenda if reviewers perceive themselves as being mistreated by it.

Many of the above mentioned reviewer quirks are hardened by acculturation. PIs who are asked to serve on study section have been through the study section wringer as newbies. They are susceptible to the idea that it is fair if the next generation has it just about as hard as they did and that it is unfair if newbies these days are given a cake walk. Particularly, if said established investigators feel like they are still struggling. Ahem. It may not seem logical but it is simple psychology. I anticipate that the "Early Established Investigator" category is going to suffer the same fate as the ESI category. Scores will worsen, compared to pre-EEI days. Some of this will be the previously mentioned tracking of scores to the perceived payline. But some of this will be people**** who missed the ESI assistance who feel that it is unfair that the generation behind them gets yet another handout to go along with the K99/R00 and ESI plums. The intent to stabilize the careers of established investigators is a good one. But limiting this to "early" established investigators, i.e., those who already enjoyed the ESI era, is a serious mistake.

I think Lauer is either aware, or verging on awareness, of something that I've mentioned repeatedly on this blog: that a lot of the pressure on the grant system (increasing numbers of applications, PIs seemingly applying greedily for grants when already well funded, the revision queuing traffic pattern hold) comes from a vicious cycle of the attempt to maintain stable funding. When, as a VeryEstablished colleague put it to me surprisingly recently, "I just put in a grant when I need another one and it gets funded" is the expected value, PIs can be efficient with their grant behavior. If they need to put in eight proposals to have a decent chance of one landing, they do that. And if they need to start submitting apps 2 years before they "need" one, the randomness is going to mean they seem overfunded now and again. This applies to everyone all across the NIH system. Thinking that only those on their second round of funding have this stability problem is a huge mistake for Lauer and Collins to be making. And if you stabilize some at the expense of others, this will not be viewed as fair. It will not be viewed as shared pain.

If you can't get more people on board with a mission of shared sacrifice, or unshared sacrifice for that matter, then I believe NIH will continue to wring its hands about the fate of new investigators for another forty years. There are too many applicants for too few funds. It amps up the desperation and amps up the biases for and against. It decreases the resistance of peer reviewers to do anything to Julia that they expect might give a tiny boost to the applications of them and theirs. You cannot say "do better" and expect reviewers to change, when the power of the grant game contingencies is so overwhelming for most of us. You cannot expect program officers who still to this day appear entirely clueless about the way things really work in extramural grant-funded careers to suddenly do better because you are losing sleep. You need to delve into these psychologies and biases and cultures and actually address them.

I'll leave you with an exhortation to walk the earth, like Caine. I've had the opportunity to watch some administrative frustration, inability and nervousness verging on panic in the past couple of years that has brought me to a realization. Management needs to talk to the humblest of their workforce instead of the upper crust. In the case of the NIH, you need to stop convening preening symposia from the usual suspects, taking the calls of your GlamHound buddies and responding only to reps of learn-ed societies. Walk the earth. Talk to real applicants. Get CSR to identify some of your most frustrated applicants and see what is making them fail. Find out which of the apparently well-funded applicants have to work their tails off to maintain funding. Compare and contrast to prior eras. Ask everyone what it would take to Fix the NIH.

Of course this will make things harder for you in the short term. Everyone perceives the RealProblem as that guy, over there. And the solutions that will FixTheNIH are whatever makes their own situation easier.

But I think you need to hear this. You need to hear the desperation and the desire most of us have simply to do our jobs. You need to hear just how deeply broken the NIH award system is for everyone, not just the ESI and EEI category.

PS. How's it going solving the problem identified by Ginther? We haven't seen any data lately but at last check everything was as bad as ever so...

PPS. Are you just not approving comments on your blog? Or is this a third rail issue nobody wants to comment on?
__
*I make fun of the "sudden realization" because it took me about two hours of my very first study section meeting ever to realize that "New Investigator" checkbox applications from genuine newbies did very poorly and that all of these awards were being scooped up by very well established and accomplished investigators who simply hadn't been NIH funded. Perhaps they were from foreign institutions, now hired in the US. Or perhaps they lived on NSF or CDC or DOD awards. The idea that it took NIH something like 8-10 years to realize this is difficult to stomach.

**The R29 was crippled in terms of budget, btw. and had other interesting features.

***lolsob

****Yep, that would be my demographic.

12 responses so far
