Archive for the 'Fixing the NIH' category

Another day, another report on the postdocalypse

As mentioned in Science, a new report from the U.S. National Academies of Sciences, Engineering, and Medicine has concluded that we have a problem with too many PhDs and not enough of the jobs that they want.

The report responds to many years of warning signs that the U.S. biomedical enterprise may be calcifying in ways that create barriers for the incoming generation of researchers. One of the biggest challenges is the gulf between the growing number of young scientists who are qualified for and interested in becoming academic researchers and the limited number of tenure-track research positions available. Many new Ph.D.s spend long periods in postdoctoral positions with low salaries, inadequate training, and little opportunity for independent research. Many postdocs pursue training experiences expecting that they will later secure an academic position, rather than pursuing a training experience that helps them compete for the range of independent careers available outside of academia, where the majority will be employed. As of 2016, for those researchers who do transition into independent research positions, the average age for securing their first major NIH independent grant is 43 years old, compared to 36 years old in 1980.

No mention (in the executive summary / PR blurb) that the age of first R01 has been essentially unchanged for a decade despite the NIH ESI policy and the invention of the K99, which is limited by years-since-PhD.

No mention of the reason that we have so many postdocs, which is the uncontrolled production of ever more PhDs.

On to the actionable bullet points that interest me.

Work with the National Institutes of Health to increase the number of individuals in staff scientist positions to provide more stable, non-faculty research opportunities for the next generation of researchers. Individuals on a staff scientist track should receive a salary and benefits commensurate with their experience and responsibilities.

This is a recommendation for research institutions but we all need to think about this. The NCI launched the R50 mechanism in 2016 and they have 49 of them on the books at the moment. I had some thoughts on why this is a good idea here and here. The question now, especially for those in the know with cancer research, is whether this R50 is being used to gain stability and independence for the needy awardee or whether it is just further larding up the labs of Very Important Cancer PIs.

Expand existing awards or create new competitive awards for postdoctoral researchers to advance their own independent research and support professional development toward an independent research career. By July 1, 2023, there should be a fivefold increase in the number of individual research fellowship awards and career development awards for postdoctoral researchers granted by NIH.

As we know, the number of NIH fellowships has remained relatively fixed relative to the huge escalation of "postdocs" funded on research grant mechanisms. We really don't know the degree to which independent fellowships simply anoint the chosen (population-wise) versus help the most worthy and deserving candidates stand out. Will quintupling the F32s magically make more faculty slots available? I tend to think not.

As we know, if you really want to grease the skids to faculty appointment the route is the K99/R00 or basically anything that means the prospective hire "comes with money". Work on that, NIH. Quintuple the K99s, not the F32s. And hand out more R03 or R21 or invent up some other R-mechanism that prospective faculty can apply for in place of "mentored" K awards. I just had this brainstorm. R-mechs (any, really) that get some cutesy acronym (like B-START) and can be applied for by basically any non-faculty person from anywhere. Catch is, it works like the R00 part of the K99/R00. Only awarded upon successful competition for a faculty job and the offer of a competitive startup.

Ensure that the duration of all R01 research grants supporting early-stage investigators is no less than five years to enable the establishment of resilient independent research programs.

Sure. And invent up some "next award" special treatment for current ESIs. And then a third-award one. And so on.

Or, you know, fix the problem for everyone which is that too many mouths at the trough have ruined the cakewalk that experienced investigators had during the eighties.

Phase in a cap – three years suggested – on salary support for all postdoctoral researchers funded by NIH research project grants (RPGs). The phase-in should occur only after NIH undertakes a robust pilot study of sufficient size and duration to assess the feasibility of this policy and provide opportunities to revise it. The pilot study should be coupled to action on the previous recommendation for an increase in individual awards.

This one got the newbie faculty all het up on the twitters.



They are, of course, upset about two things.

First, "the person like me". Which of course is what drives all of our anger about this whole garbage fire of a career situation that has developed. You can call it survivor guilt, self-love, arrogance, whatever. But it is perfectly reasonable that we don't like the Man doing things that mean people just like us would have washed out. So people who were not super stars in 3 years of postdoc'ing are mad.

Second, there's a hint of "don't stop the gravy train just as I passed your damn insurmountable hurdle". If you are newb faculty and read this and get all angry and start telling me how terrible I am, you need to sit down and introspect a bit, friend. I can wait.

New faculty are almost universally against my suggestion that we all need to do our part and stop training graduate students. Less universally, but still frequently, they are against the idea that they should structure their career plans around a technician-heavy, trainee-light arrangement. With permanent career employees that do not get changed out for new ones every 3-5 years like leased Priuses, either.

Our last little stupid poll confirmed that everyone thinks 3-5 concurrent postdocs is just peachy for even the newest lab and gee whillikers, where are they to come from?

This new report will go nowhere, just like all the previous ones that reach essentially the same conclusion and make similar recommendations. Because it is all about the

1) Mouths at the trough.
2) Available slops.

We continue to breed more mouths (PhDs).

And the American taxpayers, via their duly appointed representatives in Congress, show no interest in radically increasing the budget for slops (science).

And even if Congress trebled or quintupled the NIH budget, all evidence suggests we'd just do the same thing all over again. Mint more PhDs like crazee and wonder in another 10-15 years why careers still suck.

60 responses so far

NIH to crack down on violations of confidential peer review

Mar 30 2018 Published by under Fixing the NIH, NIH, NIH funding

Nakamura is quoted in a recent bit in Science by Jeffrey Brainard.

I'll get back to this later but for now consider it an open thread on your experiences. (Please leave off the specific naming unless the event got published somewhere.)

I have twice had other PIs tell me they reviewed my grant. I did not take it as any sort of quid pro quo beyond *maybe* a sort of "I wasn't the dick reviewer" sort of thing. In both cases I barely acknowledged it and tried to move along. These were both scientists that I like both professionally and personally so I assume I already have some pro-them bias. Obviously the fact that these people appeared on the review roster, and that they have certain expertise, made them top suspects in my mind anyway.


“We hope that in the next few months we will have several cases” of violations that can be shared publicly, Nakamura told ScienceInsider. He said these cases are “rare, but it is very important that we make it even more rare.”

Naturally we wish to know how "rare" and what severity of violation he means.

Nakamura said. “There was an attempt to influence the outcome of the review,” he said. The effect on the outcome “was sufficiently ambiguous that we felt it was necessary to redo the reviews.”

Hmmm. "Ambiguous". I mean, if there is ever *any* contact from an applicant PI to a reviewer on the relevant panel it could be viewed as an attempt to influence outcome. Even an invitation to give a seminar or invitation to join a symposium panel proposal could be viewed as currying favor. Since one never knows how an implicit or explicit bias is formed, how would it ever be anything other than ambiguous? But if this is something clearly actionable by the NIH doesn't it imply some harder evidence? A clearer quid pro quo?

Nakamura also described the types of violations of confidentiality NIH has detected. They included “reciprocal favors,” he said, using a term that is generally understood to mean a favor offered by a grant applicant to a reviewer in exchange for a favorable evaluation of their proposal.

I have definitely heard a few third hand reports of this in the past. Backed up by a forwarded email* in at least one case. Wonder if it was one of these type of cases?

Applicants also learned the “initial scores” they received on a proposal, Nakamura said, and the names of the reviewers who had been assigned to their proposal before a review meeting took place.

I can imagine this happening** and it is so obviously wrong, even if it doesn't directly influence the outcome for that given grant. I can, however, see the latter rationale being used as self-excuse. Don't.

Nakamura said. “In the past year there has been an internal decision to pursue more cases and publicize them more.” He would not say what triggered the increased oversight, nor when NIH might release more details.

This is almost, but not quite, an admission that NIH is vaguely aware of an undercurrent of violations of the confidentiality of review. And that they also are aware that they have not pursued such cases as deeply as they should. So if any of you have ever notified an SRO of a violation and seen no apparent result, perhaps you should be heartened.

oh and one last thing:

In one case, Nakamura said, a scientific review officer—an NIH staff member who helps run a review panel—inappropriately changed the score that peer reviewers had given a proposal.

SROs and Program Officers may also have dirt on their hands. Terrifying prospect for any applicant. And I rush to say that I have always found the SROs and POs that I have dealt with directly to be upstanding people trying to do their best to ensure fair treatment of grant applications. I may disagree with their approaches and priorities now and again but I've never had reason to suspect real venality. However. Let us not be too naive, eh?

*anyone bold enough to put this in email....well I would suspect this is chronic behavior from this person?

**we all want to bench race the process and demystify it for our friends. I can see many entirely well-intentioned reasons someone would want to tell their friend about the score ranges. Maybe even a sentiment that someone should be warned to request certain reviewers be excluded from reviewing their proposals in the future. But..... no. No, no, no. Do not do this.

29 responses so far

Delay, delay, delay

I'm not in favor of policies that extend the training intervals. Publication requirements for grad students are a prime example. The "need" to do two 3-5 year postdocs to be competitive is another. These are mostly problems made by the Professortariat directly.

But NIH has slipped into this game. Postdocs "have" to get evidence of funding, with F32 NRSAs and above all else the K99 featuring as top plums.

Unsurprisingly the competition has become fierce for these awards. And as with R-mechs this turns into the traffic pattern queue of revision rounds. Eighteen months from first submission to award if you are lucky.

Then we have the occasional NIH Institute which adds additional delaying tactics. "Well, we might fund your training award next round, kid. Give it another six months of fingernail biting."

We had a recent case on the twttrs where a hugely promising young researcher gave up on this waiting game and took a job in their home country, only to then get notice that the K99 would fund. Too late! We (MAGA) lost them.

I want NIH to adopt a "one and done" policy for all training mechanisms. If you get out-competed for one, move along to the next stage.

This will decrease the inhumane waiting game. It will hopefully open up other opportunities (transition to quasi-faculty positions that allow R-mech or foundation applications) faster. And overall speed progress through the stages, yes even to the realization that an alternate path is the right path.

29 responses so far

Variability in NIH Grant review is a good feature, not a bug, not a terrible indictment of the system

Mar 13 2018 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding, Peer Review

You may see more dead horse flogging than usual, folks. The commentariat is not yet as vigorous as I might like.

This emphasizes something I had to say about the Pier monstrosity purporting to study the reliability of NIH grant review.

Absolutely. We do not want 100% fidelity in the evaluation of grant "merit". If we did that, and review was approximately statistically representative of the funded population, we would all end up working on cancer in the end.

Instead, we have 28 I or Cs. These are broken into Divisions that have fairly distinct missions. There are Branches within the Divisions and multiple POs who may have differing viewpoints. CSR fields a plethora of study sections, many of which have partially overlapping missions. Meaning a grant could be reviewed in one of several different sections. A standing section might easily have 20-30 reviewers per meeting and your grant might reasonably be assigned to several different permutations of three for primary assessment. Add to this the fact that reviewers change over time within a study section, even across rounds to which you are submitting approximately the same proposal. There should be no wonder whatsoever that review outcome for a given grant might vary a bit under differing review panels.
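Just on the assignment step alone, the numbers get large fast. A quick combinatorial check (illustrative only; order of the three assigned reviewers set aside, these are combinations, and the 20-30 panel sizes are the rough figures from above):

```python
# How many distinct three-reviewer assignments could a single standing
# study section produce for one application?
from math import comb

print(comb(20, 3))  # 1140 possible reviewer trios on a 20-person panel
print(comb(30, 3))  # 4060 possible trios on a 30-person panel
```

And that is before layering in reviewer turnover across rounds or the choice of study section in the first place.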

Do you really want perfect fidelity?

Do you really want that 50% triage and another 30-40% scored-outside-the-payline to be your unchangeable fate?

Of course not.

You want the variability in NIH Grant review to work in your favor.

If a set of reviewers finds your proposal unmeritorious do you give up* and start a whole 'nother research program? Eventually to quit your job and do something else when you don't get funded after the first 5 or 10 tries?

Of course not. You conclude that the variability in the system went against you this time, and come back for another try. Hoping that the variability in the system swings your way.

Anyway, I'd like to see more chit chat on the implicit question from the last post.

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process.

Well? How much reliability in the system do you want, Dear Reader?

*ok, maybe sometimes. but always?

13 responses so far

CSR and critics need to get together on their grant review studies

Mar 12 2018 Published by under Fixing the NIH, NIH, Peer Review

I was critical of a recent study purporting to show that NIH grant review is totally random, a study with structural flaws that could not have been more precisely designed to reach a foregone conclusion.

I am also critical of CSR/NIH self-studies. These are harder to track because they are not always published or well promoted. We often only get wind of them when people we know are invited to participate as reviewers. Often the results are not returned to the participants or are returned with an explicit swearing to secrecy.

I've done a couple of these self-study reviews for CSR.

I am not impressed by their designs either. Believe me.

As far as I've heard or experienced, most (all) of these CSR studies have the same honking flaw of restricted range. Funded applications only.

Along with other obscure design choices that seem to miss the main point*. One review pitted apps funded from closely-related sections against each other. ....except "closely-related" did not appear that close to me. It was more a test of whatever historical accident made CSR cluster those study sections or perhaps a test of mission drift. A better way would have been to cluster study sections to which the same PI submits. Or by assigned PO maybe? By a better key word cluster analysis?

Anyway, the CSR designs are usually weird when I hear about them. They never want to convene multiple panels of very similar reviewers to review the exact same pile of apps in real time. Reporting on their self-studies is spotty at best.

This appears to my eye to be an attempt to service a single political goal. I.e. "Maintain the ability to pretend to Congress that grant review selects only the most meritorious applications for funding with perfect fidelity".

The critics, as we've seen, do the opposite. Their designs are manipulated to provide a high probability of showing that NIH grant review is utterly unreliable and needs to be dismantled and replaced.

Maybe the truth lies somewhere in the middle? And if these forces would combine to perform some better research we could perhaps better trust jointly proposed solutions.


*I include the "productivity" data mining. NIH also pulls some sketchy stuff with these studies. Juking it carefully to support their a priori plans, rather than doing the study first and changing policy after.

3 responses so far

Agreement among NIH grant reviewers

Pier and colleagues recently published a study purporting to address the reliability of the NIH peer review process. From the summary:

We replicated the NIH peer-review process to examine the qualitative and quantitative judgments of different reviewers examining the same grant application. We found no agreement among reviewers in evaluating the same application. These findings highlight the subjectivity in reviewers’ evaluations of grant applications and underscore the difficulty in comparing the evaluations of different applications from different reviewers—which is how peer review actually unfolds.

emphasis added.

This thing is a crock and yet it has been bandied about on the Twitts as if it is the most awesome thing ever. "Aha!" cry the disgruntled applicants, "This proves that NIH peer review is horrible, terrible, no good, very bad and needs to be torn down entirely. Oh, and it also proves that it is a super criminal crime that some of my applications have gone unfunded, wah."

A smaller set of voices expressed perplexed confusion. "Weird", we say, "but probably our greatest impression from serving on panels is that there is great agreement of review, when you consider the process as a whole."

So, why is the study irretrievably flawed? In broad strokes it is quite simple: restriction of the range.

Take a look at the first figure. Does it show any correlation of scores? Any fair view would say no. Aha! Whatever is being represented on the x-axis about these points does not predict anything about what is being represented on the y-axis.

This is the mistake being made by Pier and colleagues. They constructed four peer-review panels and had them review the same population of 25 grants. The trick is that 16 of these were already funded by the NCI and the remaining 9 were prior unfunded versions of grants that were later funded by the NCI.

In short, the study selects proposals from a very limited range of the applications being reviewed by the NIH. This figure shows the rest of the data from the above example. When you look at it like this, any fair eye concludes that whatever is being represented by the x value about these points predicts something about the y value. Anyone with the barest of understanding of distributions and correlations gets this. Anyone with the most basic understanding grasps that a distribution does not have to have perfect correspondence for there to be a predictive relationship between two variables.

So. The authors' claims are bogus. Ridiculously so. They did not "replicate" the peer review because they did not include a full range of scores/outcomes but instead picked the narrowest slice of the funded awards. I don't have time to dig up historical data but the current funding plan for NCI calls for a 10%ile payline. You can amuse yourself with the NIH success rate data here; the very first spreadsheet I clicked on gave a success rate of 12.5% for NCI R01s.
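The restriction-of-range point is easy to demonstrate with a toy simulation. The sketch below is my own illustration with invented numbers, not data from Pier and colleagues: two noisy "panels" score the same pool of applications, agreement is clear over the full pool, and then it collapses when you look only at the top slice, which is essentially what the study design did.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent "merit" plus independent reviewer noise for two hypothetical panels.
merit = rng.normal(size=n)
panel_a = merit + rng.normal(scale=0.6, size=n)
panel_b = merit + rng.normal(scale=0.6, size=n)

# Correlation across the full range of applications: substantial.
r_full = np.corrcoef(panel_a, panel_b)[0, 1]

# Now keep only the slice panel A scored in its top ~12%
# (roughly an NCI payline) and recompute: markedly attenuated.
cut = np.quantile(panel_a, 0.88)
keep = panel_a >= cut
r_top = np.corrcoef(panel_a[keep], panel_b[keep])[0, 1]

print(f"full-range correlation: {r_full:.2f}")
print(f"top-slice correlation:  {r_top:.2f}")
```

Same underlying review process in both calculations; only the sampling changed. That is why "no agreement among funded applications" tells you almost nothing about agreement across the whole applicant pool.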

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process.

The most fervent defenders of the general reliability of the NIH grant peer review process almost invariably will acknowledge that the precision of the system is not high. That the "top-[insert favored value of 2-3 times the current paylines]" scoring grants are all worthy of funding and have very little objective space between them.

Yet we still seem to see this disgruntled applicant phenotype, responding with raucous applause to a crock of crap conclusion like that of Pier and colleagues, who seem to feel that somehow it is possible to have a grant evaluation system that is perfect. One that returns the exact same score for a given proposal each and every time*. I just don't understand these people.

Elizabeth L. Pier, Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, Mitchell J. Nathan, Cecilia E. Ford and Molly Carnes. Low agreement among reviewers evaluating the same NIH grant applications. PNAS, published ahead of print March 5, 2018.

*And we're not even getting into the fact that science moves forward and that what is cool today is not necessarily anywhere near as cool tomorrow

21 responses so far

Undue influence of frequent NIH grant reviewers

Feb 07 2018 Published by under Fixing the NIH, Grant Review, NIH, NIH Careerism, NIH funding

A quotation

Currently 20% of researchers perform 75-90% of reviews, which is an unreasonable and unsustainable burden.

referencing this paper on peer review appeared in a blog post by Gary McDowell. It caught my eye when referenced on the twitts.

The stat is referencing manuscript / journal peer review and not the NIH grant review system but I started thinking about NIH grant review anyway. Part of this is because I recently had to re-explain one of my key beliefs about a major limitation of the NIH grant review system to someone who should know better.

NIH Grant review is an inherently conservative process.

The reason is that the vast majority of reviews of the merit of grant applications are provided by individuals who already have been chosen to serve as Principal Investigators of one or more NIH grant awards. They have had grant proposals selected as meritorious by the prior bunch of reviewers and are now are contributing strongly to the decision about the next set of proposals that will be funded.

The system is biased to select for grant applications written in a way that looks promising to people who have either been selected for writing grants in the same old way or who have been beaten into writing grants that look the same old way.

Like tends to beget like in this system. What is seen as meritorious today is likely to be very similar to what has been viewed as meritorious in the past.

This is further amplified by the social dynamics of a person who is newly asked to review grants. Most of us are very sensitive to being inexperienced, very sensitive to wanting to do a good job and feel almost entirely at sea about the process when first asked to review NIH grants. Even if we have managed to stack up 5 or 10 reviews of our proposals from that exact same study section prior to being asked to serve. This means that new reviewers are shaped even more by the culture, expectations and processes of the existing panel, which is staffed with many experienced reviewers.

So what about those experienced reviewers? And what about the number of grant applications that they review during their assigned term of 4 years (serving 3 cycles per year, please) or 6 years (serving 2 of 3 cycles per year)? With about 6-10 applications to review per round, this could easily mean highly influential (read: one of the three primary assigned reviewers) review of 100 applications. The person has additional general influence in the panel as well, both through direct input on grants under discussion and on the general tenor and tone of the panel.
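For what it's worth, the ~100 figure checks out on a napkin. A sketch of the arithmetic (my own, using the ranges above; note that both term structures come out to the same 12 rounds of service):

```python
# Applications primarily reviewed over a term of empaneled service:
# assigned apps per round, times rounds per year, times years.
apps_per_round = range(6, 11)                     # 6-10 assigned per round
four_year = [n * 3 * 4 for n in apps_per_round]   # 3 cycles/yr for 4 yrs
six_year = [n * 2 * 6 for n in apps_per_round]    # 2 of 3 cycles/yr for 6 yrs

print(four_year)  # [72, 84, 96, 108, 120]
print(six_year)   # [72, 84, 96, 108, 120]
```

Either way, one reviewer gets primary-assignment influence over on the order of a hundred applications, before counting any ad hoc or SEP service on top.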

When I was placed on a study section panel for a term of service I thought the SRO told us that empaneled reviewers were not supposed to be asked for extra review duties on SEPs or as ad hoc on other panels by the rest of the SRO pool. My colleagues over the years have disabused me of the idea that this was anything more than aspirational talk from this SRO. So many empaneled reviewers are also contributing to review beyond their home review panel.

My question of the day is whether this is a good idea and whether there are ethical implications for those of us who are asked* to review NIH grants.

We all think we are great evaluators of science proposals, of course. We know best. So of course it is all right, fair and good when we choose to accept a request to review. We are virtuously helping out the system!

At what point are we contributing unduly to the inherent conservativeness of the system? We all have biases. Some are about irrelevant characteristics like the ethnicity** of the PI. Some are considered more acceptable and are about our preferences for certain areas of research, models, approaches, styles, etc. Regardless these biases are influencing our review. Our review. And one of the best ways to counter bias is the competition of competing biases. I.e., let someone else's bias into the mix for a change, eh buddy?

I don't have a real position on this yet. After my term of empaneled service, I accepted or rejected requests to review based on my willingness to do the work and my interest in a topic or mechanism (read: SEPs FTW). I've mostly kept it pretty minimal. However, I recently messed up because I had a cascade of requests last fall that sucked me in: a "normal" panel (ok, ok, I haven't done my duty in a while), followed by a topic SEP (ok, ok, I am one of a limited pool of experts, I'll do it) and then a RequestThatYouDon'tRefuse. So I've been doing more grant review lately than I usually have in recent years. And I'm thinking about scope of influence on the grants that get funded.

At some point is it even ethical to keep reviewing so damn much***? Should anyone agree to serve successive 4 or 6 year terms as an empaneled reviewer? Should one say yes to every SRO request that comes along? They are going to keep asking so it is up to us to say no. And maybe to recommend the SRO ask some other person who is not on their radar?

*There are factors which enhance the SRO pool picking on the same old reviewers, btw. There's a sort of expectation that if you have review experience you might be okay at it. I don't know how much SROs talk to each other about prospective reviewers and their experience with the same but there must be some chit chat. "Hey, try Dr. Schmoo, she's a great reviewer" versus "Oh, no, do not ever ask Dr. Schnortwax, he's toxic". There are the diversity rules that they have to follow as well- There must be diversity with respect to the geographic distribution, gender, race and ethnicity of the membership. So people that help the SROs diversity stats might be picked more often than some other people who are straight white males from the most densely packed research areas in the country working on the most common research topics using the most usual models and approaches.

**[cough]Ginther[cough, cough]

***No idea what this threshold should be, btw. But I think there is one.

18 responses so far

Grant Supplements and Diversity Efforts

The NIH announced an "encouragement" for NIMH BRAINI PIs to apply for Research Supplements to Promote Diversity in Health-Related Research (Admin Supp).

Administrative supplements, for those who are unaware, are extra amounts of money awarded to an existing NIH grant. These are not reviewed by peer reviewers in a competitive manner; the decision lies entirely with Program Staff*. The Diversity supplement program, in my experience and understanding, amounts to a fellowship (i.e., mostly just salary support) for a qualifying trainee. (Blog note: Federal rules on underrepresentation apply....this thread will not be a place to argue about who is properly considered an underrepresented individual, btw.) The BRAINI-directed encouragement lays out the intent:

The NIH diversity supplement program offers an opportunity for existing BRAIN awardees to request additional funds to train and mentor the next generation of researchers from underrepresented groups who will contribute to advancing the goals of the BRAIN Initiative. Program Directors/Principal Investigators (PDs/PIs) of active BRAIN Initiative research program grants are thus encouraged to identify individuals from groups nationally underrepresented to support and mentor under the auspices of the administrative supplement program to promote diversity. Individuals from the identified groups are eligible throughout the continuum from high school to the faculty level. The activities proposed in the supplement application must fall within the scope of the parent grant, and both advance the objectives of the parent grant and support the research training and professional development of the supplement candidate. BRAIN Initiative PDs/PIs are strongly encouraged to incorporate research education activities that will help prepare the supplement candidate to conduct rigorous research relevant to the goals of the BRAIN Initiative

I'll let you read PA-16-288 for the details but we're going to talk generally about the Administrative Supplement process so it is worth reprinting this bit:

Administrative supplement, the funding mechanism being used to support this program, can be used to cover cost increases that are associated with achieving certain new research objectives, as long as the research objectives are within the original scope of the peer reviewed and approved project, or the cost increases are for unanticipated expenses within the original scope of the project. Any cost increases need to result from making modifications to the project that would increase or preserve the overall impact of the project consistent with its originally approved objectives and purposes.

Administrative supplements come in at least three varieties, in my limited experience. [N.b. You can troll RePORTER for supplements using "S1" or "S2" in the right hand field for the Project Number / Activity Code search limiter. Unfortunately I don't think you get much info on what the supplement itself is for.] The support for underrepresented trainees is but one category. There are also topic-directed FOAs that are issued now and again because a given I or C wishes to quickly spin up research on some topic or other. Sex differences. Emerging health threats. Etc. Finally, there are those one might categorize within the "unanticipated expenses" and "increase or preserve the overall impact of the project" clauses in the block I've quoted above.

I first became aware of the Administrative Supplement in this last context. I was OUTRAGED, let me tell you. It seemed to be a way by which the well-connected and highly-established use their pet POs to enrich their programs beyond what they already had via competition. Some certain big labs seemed to be constantly supplemented on one award or other. Me, I sure had "unanticipated expenses" when I was just getting started. I had plenty of things that I could have used a few extra modules of cash to pay for to enhance the impact of my projects. I did not have any POs looking to hand me any supplements unasked and when I hinted very strongly** about my woes there was no help to be had***. I did not like administrative supplements as practiced one bit. Nevertheless, I was young and still believed in the process. I believed that I needn't pursue the supplement avenue too hard because I was going to survive into the mid career stretch and just write competing apps for what I needed. God, I was naive.

Perhaps. Perhaps if I'd fought harder for supplements they would have been awarded. Or maybe not.

When I became aware of the diversity supplements, I became an instant fan. This was much more palatable. It meant that at any time a funded PI found a likely URM recruit to science, they could get the support within about 6 weeks. Great for summer research experiences for undergrads, great for unanticipated postdocs. This still seems like a very good thing to me. Good for the prospective trainees. Good for diversity-in-science goals.

The trouble is that from the perspective of the PIs in the audience, this is just another rich-get-richer scheme whereby free labor is added to the laboratory accounts of the already advantaged "haves" of the NIH game. Salary is freed up on the research grants to spend on more toys, reagents or yet another postdoc. This mechanism is only available to a PI who has research grant funding with a year or more left to run. Since it remains an administrative decision, it is also subject to buddy-buddy PI/PO relationship bias. Now, do note that I have always heard from POs in my ICs of closest concern that they "don't expend all the funds allocated" for these URM supplements. I don't know what to make of that, but I wouldn't be surprised in the least if any PI with a qualifying award who asks for support of a qualified individual gets one. That would take the buddy-buddy part out of the equation for this particular type of administrative supplement.

It took a while for me to become aware of the FOA version of the administrative supplement, whereby Program was basically issuing a cut-rate RFA. The rich still get richer, but at least there is a call for open competition. Not like the first variety I discussed, whereby it seems like only some PIs, but not others, are even told by the PO that a supplement might be available. This seems slightly fairer to me, although again, you have to be in the funded-PI club already to take advantage.

There are sometimes competing versions of the FOA for a topic-based supplement issued as well. In one case I am familiar with, both types were issued simultaneously. I happen to know quite a bit about that particular scenario and it was interesting to see that the competing applications were actually quite weak. I wished I'd gone in for the competing ones instead of the administrative variety****, let me tell you.

The primary advantage of the administrative supplement to Program, in my view, is that it is fast. No need to wait for the grant review cycle. These and the competing supplements are also cheap and can be efficient, because they leverage the activities and capabilities of the already funded award.

As per usual, I have three main goals with this post. First, if you are an underrepresented minority trainee it is good to be aware of this. Not all PIs are and not all think about it. Not to mention they don't necessarily know if you qualify for one of these. I'd suggest bringing it up in conversations with a prospective lab you wish to join. Second, if you are a noob PI I encourage you to be aware of the supplement process and to take advantage of it as you might.

Finally, DearReader, I turn to you and your views on Administrative Supplements. Good? Bad? OUTRAGE?

COI DISCLAIMER: I've benefited from administrative supplements under each of the three main categories I've outlined and I would certainly not turn up my nose at any additional ones in the future.

*I suppose it is not impossible that in some cases outside input is solicited.

**complained vociferously

***I have had a few enraging conversations long after the fact with POs who said things like "Why didn't you ask for help?" in the wake of some medium sized disaster with my research program. I keep to myself the fact that I did, and nobody was willing to go to bat for me until it was too late but...whatevs.

****I managed to get all the way to here without emphasizing that even for the administrative supplements you have to prepare an application. It might not be as extensive as your typical competing application, but it is much more onerous than a Progress Report. Research supplements look like research grants. Fellowship-like supplements look like fellowships, complete with a training plan.

20 responses so far

What if it were about deserve?

Oct 26 2016 | Published under Fixing the NIH

Imagine that the New Investigator status (no prior service as PI of major NIH grant) required an extra timeline document? This would be a chronology of the PI's program to date with emphasis on funding (startup, institutional grants, foundation), how publications were generated, and the PI's scrambling. Another part would focus on grants submitted, score outcomes, revisions, how preliminary data was generated, etc.

Would this improve the way the NIH awards grants?

(Keep in mind the NIH has wrung its hands about the dismal fate of the not-yet-funded for many decades and created numerous "fixes" over the years.)

32 responses so far

NINDS tweaks their approach to the F32 / NRSA

NOT-NS-17-002 indicates that NINDS will no longer participate in the NIH-wide parent F32/NRSA funding opportunity because they will be customizing their approach.


As previously described in NOT-NS-16-012 and NOT-NS-16-013, NINDS is restructuring its funding support for postdoctoral researchers.  Beginning with the December 8, 2016 due date, research training support for postdoctoral fellows under the F32 activity code will be available through NINDS using PAR-16-458 "NINDS Ruth L. Kirschstein National Research Service Award (NRSA) for Training of Postdoctoral Fellows (F32)."  This NINDS F32 will support postdocs who are within the first 3 years of research training in the sponsor's laboratory, and includes several other key differences from the parent F32. Most notably, applicants are only eligible for the NINDS F32 prior to starting, or within the first 12 months of starting, their postdoctoral training in the sponsor's laboratory or research environment. Because of the very early application, no preliminary data are expected.  It is anticipated that another Funding Opportunity Announcement for postdocs, which utilizes the K01 activity code, will be published in time for the February 12, 2017 initial receipt date. This will be available to applicants in their second through fourth year of cumulative postdoctoral research experience (see NOT-NS-16-013). 

I remember the initial troll on this but managed to overlook the part where they were going to have a new K01 announcement focused on later-stage postdocs.

I like this, actually. Lately we've gotten into a situation where F32s are stuck in an escalating-expectations holding pattern of endless revisions and resubmissions. I just don't see the point of a 3rd year postdoc writing for "training" support that will only arrive in year 4 or 5, particularly when at that point the postdocs who are gunning hard for a faculty research type job should be focusing on the K99/R00. This has been a waste of time that lets awardees languish for extra years just so they get at least a year or two of support, and it makes a mockery of the idea of the F32.

I am likewise encouraged that instead of leaving the 2+ year postdocs at the tender mercies of the K99/R00 process, NINDS has a fill-in with a K01. I note that their warning notice on this looks good.

The NINDS K01 is intended for candidates with a Ph.D. or equivalent research doctoral degree. Candidates will be eligible to apply for the K01 anytime within the second through fourth year of cumulative mentored, postdoctoral research experience, and may be supported by the NINDS K01 within the first 6 years of cumulative postdoctoral research experience. Successful K01 applications will be designed to facilitate the continuation of outstanding, innovative projects, combined with career development activities that will prepare outstanding postdoctoral, mentored investigators for an independent research career. The K01 application will describe a project that, as demonstrated by preliminary data collected by the applicant, holds promise to result in highly significant results and future discoveries. The K01 candidate will continue to be guided by a postdoctoral mentor, but will be primarily responsible for oversight and conduct of the research project. By the end of the proposed K01 award period, the candidate will be poised to begin an independent research career and will have a well-developed, highly significant project that he/she can take with him/her to an independent research position.
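The eligibility windows quoted above can be captured in a few lines. This is one reading of those rules, assuming "second through fourth year" means between one and four full years of cumulative postdoctoral experience and "within the first 6 years" means under six full years; the notice itself, not this sketch, is authoritative.

```python
# One interpretation of the NINDS K01 windows quoted above:
# apply during years 2-4 of cumulative postdoc experience;
# support is allowed only within the first 6 years.

def k01_apply_eligible(postdoc_years: float) -> bool:
    """True if within the second through fourth year of postdoc experience."""
    return 1.0 <= postdoc_years < 4.0

def k01_support_allowed(postdoc_years: float) -> bool:
    """True if still within the first six years of postdoc experience."""
    return postdoc_years < 6.0
```

On this reading, someone 2.5 years into a postdoc could apply, while a first-year or fifth-year postdoc could not; a fifth-year postdoc could still be supported on an award won earlier.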

The devil, of course, is in the details. In my most frequent experience, the K01 tends to be won by people already in quasi-faculty positions. People who have been promoted to "Instructor" or "Assistant Research Project Quasi-faculty but not really Scientist" or whatever word salad title your University prefers. I do not see this being favored for award to any old run of the mill year 2 postdoc. Maybe your frame of reference differs, DearReader?

It will be interesting to see how this is used in practice. Will it only be for the people who just-miss on the K99/R00? Or will it occupy the place currently occupied by the F32 with successful applicants having 2-3 years of postdoc work under their belt before applying? [Mayhap these are the same thing these days?]

But I digress.

The most pressing issue of the day is whether the NINDS will succeed in funding 1) a substantial number of F32s for applicants who are finishing their graduate studies and 2) a substantial number for first-year postdocs without much Preliminary Data in the application.

In my estimation if they don't get to at least 50% of awards on #1, this isn't working.

I also predict that the #2 scenario is going to produce a lot of applications with lots of Preliminary Data, just stuff that wasn't completed directly by the applicant herself.

Thoughts folks? Would you like to see this extended to your favorite ICs?

29 responses so far
