Archive for the 'Fixing the NIH' category

Delay, delay, delay

I'm not in favor of policies that extend the training intervals. Pub requirements for grad students are a prime example. So is the "need" to do two 3-5 year postdocs to be competitive. These are mostly problems made by the Professortariat directly.

But NIH has slipped into this game. Postdocs "have" to get evidence of funding, with F32 NRSAs and above all else the K99 featuring as top plums.

Unsurprisingly the competition has become fierce for these awards. And as with R-mechs this turns into the traffic pattern queue of revision rounds. Eighteen months from first submission to award if you are lucky.

Then we have the occasional NIH Institute which adds additional delaying tactics. "Well, we might fund your training award next round, kid. Give it another six months of fingernail biting."

We had a recent case on the twitts where a hugely promising young researcher gave up on this waiting game and took a job in their home country, only to get notice that the K99 would fund. Too late! We (MAGA) lost them.

I want NIH to adopt a "one and done" policy for all training mechanisms. If you get out-competed for one, move along to the next stage.

This will decrease the inhumane waiting game. It will hopefully open up other opportunities (transition to quasi-faculty positions that allow R-mech or foundation applications) faster. And it will speed progress through the stages overall, yes, even to the realization that an alternate path is the right path.

29 responses so far

Variability in NIH Grant review is a good feature, not a bug, not a terrible indictment of the system

Mar 13 2018 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding, Peer Review

You may see more dead horse flogging than usual, folks. The commentariat is not yet as vigorous as I might like.

This emphasizes something I had to say about the Pier monstrosity purporting to study the reliability of NIH grant review.

Absolutely. We do not want 100% fidelity in the evaluation of grant "merit". If we had that, and review were approximately statistically representative of the funded population, we would all end up working on cancer in the end.

Instead, we have 28 I or Cs. These are broken into Divisions that have fairly distinct missions. There are Branches within the Divisions and multiple POs who may have differing viewpoints. CSR fields a plethora of study sections, many of which have partially overlapping missions, meaning a given grant could be reviewed in any of several different sections. A standing section might easily have 20-30 reviewers per meeting, and your grant might reasonably be assigned to several different permutations of three for primary assessment. Add to this the fact that reviewers change over time within a study section, even across rounds to which you are submitting approximately the same proposal. There should be no wonder whatsoever that the review outcome for a given grant might vary a bit under differing review panels.
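Just to put a number on that permutations point: the count of distinct three-reviewer assignments from a panel is a simple combination, and it gets large fast. A minimal sketch (panel sizes from the paragraph above; Python only for the arithmetic):

```python
from math import comb

# A standing section of 20-30 reviewers, three assigned per application.
for panel_size in (20, 25, 30):
    print(panel_size, "reviewers ->", comb(panel_size, 3), "possible trios")
# 20 -> 1140, 25 -> 2300, 30 -> 4060
```

Over a thousand possible primary-reviewer trios from even a smallish panel, before you ever get to the choice of section or reviewer turnover.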

Do you really want perfect fidelity?

Do you really want that 50% triage and another 30-40% scored-outside-the-payline to be your unchangeable fate?

Of course not.

You want the variability in NIH Grant review to work in your favor.

If a set of reviewers finds your proposal unmeritorious do you give up* and start a whole 'nother research program? Eventually to quit your job and do something else when you don't get funded after the first 5 or 10 tries?

Of course not. You conclude that the variability in the system went against you this time, and come back for another try. Hoping that the variability in the system swings your way.
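The arithmetic behind not giving up, as a cartoon: if each resubmission really were an independent draw from the variability (it isn't, but humor me), persistence pays off fast. The 15% payline here is my illustrative stand-in:

```python
# Chance of at least one award in k tries if each round were an
# independent draw at payline p -- a cartoon, since real rounds are
# correlated (same app, overlapping reviewers, revision effects).
p = 0.15
for k in (1, 5, 10):
    print(k, "tries ->", round(1 - (1 - p) ** k, 2))
# 1 -> 0.15, 5 -> 0.56, 10 -> 0.8
```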

Anyway, I'd like to see more chit chat on the implicit question from the last post.

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process.

Well? How much reliability in the system do you want, Dear Reader?

__
*ok, maybe sometimes. but always?

13 responses so far

CSR and critics need to get together on their grant review studies

Mar 12 2018 Published by under Fixing the NIH, NIH, Peer Review

I was critical of a recent study purporting to show that NIH grant review is totally random, a study with structural flaws that could not have been designed more precisely to reach a foregone conclusion.

I am also critical of CSR/NIH self-studies. These are harder to track because they are not always published or well promoted. We often only get wind of them when people we know are invited to participate as reviewers. Often the results are not returned to the participants or are returned with an explicit swearing to secrecy.

I've done a couple of these self-study reviews for CSR.

I am not impressed by their designs either. Believe me.

As far as I've heard or experienced, most (all) of these CSR studies have the same honking flaw of restricted range. Funded applications only.

Along with other obscure design choices that seem to miss the main point*. One review pitted apps funded from closely-related sections against each other. ....except "closely-related" did not appear that close to me. It was more a test of whatever historical accident made CSR cluster those study sections or perhaps a test of mission drift. A better way would have been to cluster study sections to which the same PI submits. Or by assigned PO maybe? By a better key word cluster analysis?

Anyway, the CSR designs are usually weird when I hear about them. They never want to convene multiple panels of very similar reviewers to review the exact same pile of apps in real time. Reporting on their self-studies is spotty at best.

This appears to my eye to be an attempt to service a single political goal. I.e. "Maintain the ability to pretend to Congress that grant review selects only the most meritorious applications for funding with perfect fidelity".

The critics, as we've seen, do the opposite. Their designs are manipulated to provide a high probability of showing NIH grant review is utterly unreliable and needs to be dismantled and replaced.

Maybe the truth lies somewhere in the middle? And if these forces would combine to perform some better research we could perhaps better trust jointly proposed solutions.

__

*I include the "productivity" data mining. NIH also pulls some sketchy stuff with these studies. Juking it carefully to support their a priori plans, rather than doing the study first and changing policy after.

3 responses so far

Agreement among NIH grant reviewers

Pier and colleagues recently published a study purporting to address the reliability of the NIH peer review process. From the summary:

We replicated the NIH peer-review process to examine the qualitative and quantitative judgments of different reviewers examining the same grant application. We found no agreement among reviewers in evaluating the same application. These findings highlight the subjectivity in reviewers’ evaluations of grant applications and underscore the difficulty in comparing the evaluations of different applications from different reviewers—which is how peer review actually unfolds.

emphasis added.

This thing is a crock and yet it has been bandied about on the Twitts as if it is the most awesome thing ever. "Aha!" cry the disgruntled applicants, "This proves that NIH peer review is horrible, terrible, no good, very bad and needs to be torn down entirely. Oh, and it also proves that it is a super criminal crime that some of my applications have gone unfunded, wah."

A smaller set of voices expressed perplexed confusion. "Weird", we say, "but probably our greatest impression from serving on panels is that there is great agreement of review, when you consider the process as a whole."

So, why is the study irretrievably flawed? In broad strokes it is quite simple.

Restriction of the range. Take a look at the first figure. Does it show any correlation of scores? Any fair view would say no. Aha! Whatever is being represented on the x-axis about these points does not predict anything about what is being represented on the y-axis.

This is the mistake being made by Pier and colleagues. They constructed four peer-review panels and had them review the same population of 25 grants. The trick is that, of these, 16 were already funded by the NCI and the remaining 9 were prior unfunded versions of grants that were eventually funded by the NCI.

In short, the study selects proposals from a very limited range of the applications being reviewed by the NIH. This figure shows the rest of the data from the above example. When you look at it like this, any fair eye concludes that whatever is represented by the x value of these points predicts something about the y value. Anyone with the barest understanding of distributions and correlations gets this. Anyone with the most basic understanding grasps that a distribution does not have to have perfect correspondence for there to be a predictive relationship between two variables.

So. The authors' claims are bogus. Ridiculously so. They did not "replicate" the peer review because they did not include a full range of scores/outcomes but instead picked the narrowest slice of the funded awards. I don't have time to dig up historical data but the current funding plan for NCI calls for a 10%ile payline. You can amuse yourself with the NIH success rate data here; the very first spreadsheet I clicked on gave a success rate of 12.5% for NCI R01s.
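If you want to see how hard restriction of range attenuates an observed correlation, here is a minimal toy simulation. The latent merit-plus-noise model and the resulting 0.5 full-range reliability are my illustrative assumptions; the ~12.5% funded slice is the only number taken from the NCI figures above.

```python
import random
import statistics  # statistics.correlation needs Python 3.10+

random.seed(1)

# Toy model (not the Pier et al. design): every application has a latent
# "merit", and each panel scores it as merit plus independent noise.
N = 10_000
merit = [random.gauss(0, 1) for _ in range(N)]
panel_a = [m + random.gauss(0, 1) for m in merit]
panel_b = [m + random.gauss(0, 1) for m in merit]

# Across the full range of applications the two panels agree substantially.
print(round(statistics.correlation(panel_a, panel_b), 2))  # ~0.5

# Keep only the applications panel A scored in roughly its top 12.5%,
# mimicking a sample built almost entirely from funded awards.
cut = sorted(panel_a)[int(0.875 * N)]
kept = [(a, b) for a, b in zip(panel_a, panel_b) if a >= cut]
xs, ys = zip(*kept)
print(round(statistics.correlation(xs, ys), 2))  # far lower: "no agreement"
```

Same reviewers, same noise, same underlying merit; throw away the bottom 87.5% of the range and the apparent agreement evaporates.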

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process.

The most fervent defenders of the general reliability of the NIH grant peer review process almost invariably will acknowledge that the precision of the system is not high. That the "top-[insert favored value of 2-3 times the current paylines]" scoring grants are all worthy of funding and have very little objective space between them.

Yet we still see this disgruntled applicant phenotype, responding with raucous applause to a crock of crap conclusion like that of Pier and colleagues, and seeming to feel that somehow it is possible to have a grant evaluation system that is perfect. One that returns the exact same score for a given proposal each and every time*. I just don't understand these people.
__
Elizabeth L. Pier, Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, Mitchell J. Nathan, Cecilia E. Ford and Molly Carnes, Low agreement among reviewers evaluating the same NIH grant applications. 2018, PNAS: published ahead of print March 5, 2018, https://doi.org/10.1073/pnas.1714379115

*And we're not even getting into the fact that science moves forward and that what is cool today is not necessarily anywhere near as cool tomorrow

22 responses so far

Undue influence of frequent NIH grant reviewers

Feb 07 2018 Published by under Fixing the NIH, Grant Review, NIH, NIH Careerism, NIH funding

A quotation

Currently 20% of researchers perform 75-90% of reviews, which is an unreasonable and unsustainable burden.

referencing this paper on peer review appeared in a blog post by Gary McDowell. It caught my eye when referenced on the twitts.

The stat is referencing manuscript / journal peer review and not the NIH grant review system but I started thinking about NIH grant review anyway. Part of this is because I recently had to re-explain one of my key beliefs about a major limitation of the NIH grant review system to someone who should know better.

NIH Grant review is an inherently conservative process.

The reason is that the vast majority of reviews of the merit of grant applications are provided by individuals who already have been chosen to serve as Principal Investigators of one or more NIH grant awards. They have had grant proposals selected as meritorious by the prior bunch of reviewers and are now are contributing strongly to the decision about the next set of proposals that will be funded.

The system is biased to select for grant applications written in a way that looks promising to people who have either been selected for writing grants in the same old way or who have been beaten into writing grants that look the same old way.

Like tends to beget like in this system. What is seen as meritorious today is likely to be very similar to what has been viewed as meritorious in the past.

This is further amplified by the social dynamics of a person who is newly asked to review grants. Most of us are very sensitive to being inexperienced, very sensitive to wanting to do a good job and feel almost entirely at sea about the process when first asked to review NIH grants. Even if we have managed to stack up 5 or 10 reviews of our proposals from that exact same study section prior to being asked to serve. This means that new reviewers are shaped even more by the culture, expectations and processes of the existing panel, which is staffed with many experienced reviewers.

So what about those experienced reviewers? And what about the number of grant applications that they review during an assigned term of 4 years (3 cycles per year, please) or 6 years (2 of 3 cycles per year) of service? Either way that works out to about a dozen meetings; with 6-10 applications to review per round, this could easily amount to highly influential (read: one of the three primary assigned reviewers) review of 100 applications. The person has additional general influence in the panel as well, both through direct input on grants under discussion and on the general tenor and tone of the panel.

When I was placed on a study section panel for a term of service I thought the SRO told us that empaneled reviewers were not supposed to be asked for extra review duties on SEPs or as ad hoc on other panels by the rest of the SRO pool. My colleagues over the years have disabused me of the idea that this was anything more than aspirational talk from this SRO. So many empaneled reviewers are also contributing to review beyond their home review panel.

My question of the day is whether this is a good idea and whether there are ethical implications for those of us who are asked* to review NIH grants.

We all think we are great evaluators of science proposals, of course. We know best. So of course it is all right, fair and good when we choose to accept a request to review. We are virtuously helping out the system!

At what point are we contributing unduly to the inherent conservativeness of the system? We all have biases. Some are about irrelevant characteristics like the ethnicity** of the PI. Some are considered more acceptable and are about our preferences for certain areas of research, models, approaches, styles, etc. Regardless these biases are influencing our review. Our review. And one of the best ways to counter bias is the competition of competing biases. I.e., let someone else's bias into the mix for a change, eh buddy?

I don't have a real position on this yet. After my term of empaneled service, I accepted or rejected requests to review based on my willingness to do the work and my interest in a topic or mechanism (read: SEPs FTW). I've mostly kept it pretty minimal. However, I recently messed up because I had a cascade of requests last fall that sucked me in- a "normal" panel (ok, ok, I haven't done my duty in a while), followed by a topic SEP (ok, ok I am one of a limited pool of experts I'll do it) and then a RequestThatYouDon'tRefuse. So I've been doing more grant review lately than I have usually done in recent years. And I'm thinking about scope of influence on the grants that get funded.

At some point is it even ethical to keep reviewing so damn much***? Should anyone agree to serve successive 4 or 6 year terms as an empaneled reviewer? Should one say yes to every SRO request that comes along? They are going to keep asking so it is up to us to say no. And maybe to recommend the SRO ask some other person who is not on their radar?

___
*There are factors which enhance the SRO pool picking on the same old reviewers, btw. There's a sort of expectation that if you have review experience you might be okay at it. I don't know how much SROs talk to each other about prospective reviewers and their experience with the same but there must be some chit chat. "Hey, try Dr. Schmoo, she's a great reviewer" versus "Oh, no, do not ever ask Dr. Schnortwax, he's toxic". There are the diversity rules that they have to follow as well- There must be diversity with respect to the geographic distribution, gender, race and ethnicity of the membership. So people that help the SROs diversity stats might be picked more often than some other people who are straight white males from the most densely packed research areas in the country working on the most common research topics using the most usual models and approaches.

**[cough]Ginther[cough, cough]

***No idea what this threshold should be, btw. But I think there is one.

18 responses so far

Grant Supplements and Diversity Efforts

The NIH announced an "encouragement" for NIMH BRAINI PIs to apply for Research Supplements to Promote Diversity in Health-Related Research (Admin Supp).

Administrative supplements, for those who are unaware, are extra amounts of money awarded to an existing NIH grant. These are not reviewed by peer reviewers in a competitive manner. The decision lies entirely with Program Staff*. The Diversity supplement program, in my experience and understanding, amounts to a fellowship- i.e., mostly just salary support - for a qualifying trainee. (Blog note: Federal rules on underrepresentation apply....this thread will not be a place to argue about who is properly considered an underrepresented individual, btw.) The BRAINI-directed encouragement lays out the intent:

The NIH diversity supplement program offers an opportunity for existing BRAIN awardees to request additional funds to train and mentor the next generation of researchers from underrepresented groups who will contribute to advancing the goals of the BRAIN Initiative. Program Directors/Principal Investigators (PDs/PIs) of active BRAIN Initiative research program grants are thus encouraged to identify individuals from groups nationally underrepresented to support and mentor under the auspices of the administrative supplement program to promote diversity. Individuals from the identified groups are eligible throughout the continuum from high school to the faculty level. The activities proposed in the supplement application must fall within the scope of the parent grant, and both advance the objectives of the parent grant and support the research training and professional development of the supplement candidate. BRAIN Initiative PDs/PIs are strongly encouraged to incorporate research education activities that will help prepare the supplement candidate to conduct rigorous research relevant to the goals of the BRAIN Initiative

I'll let you read PA-16-288 for the details but we're going to talk generally about the Administrative Supplement process so it is worth reprinting this bit:

Administrative supplement, the funding mechanism being used to support this program, can be used to cover cost increases that are associated with achieving certain new research objectives, as long as the research objectives are within the original scope of the peer reviewed and approved project, or the cost increases are for unanticipated expenses within the original scope of the project. Any cost increases need to result from making modifications to the project that would increase or preserve the overall impact of the project consistent with its originally approved objectives and purposes.

Administrative supplements come in at least three varieties, in my limited experience. [N.b. You can troll RePORTER for supplements using "S1" or "S2" in the right hand field for the Project Number / Activity Code search limiter. Unfortunately I don't think you get much info on what the supplement itself is for.] The support for underrepresented trainees is but one category. There are also topic-directed FOAs that are issued now and again because a given I or C wishes to quickly spin up research on some topic or other. Sex differences. Emerging health threats. Etc. Finally, there are those one might categorize within the "unanticipated expenses" and "increase or preserve the overall impact of the project" clauses in the block I've quoted above.
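If you'd rather troll RePORTER programmatically than through the web form, there is a JSON search API. Here's a sketch of how I'd query it; the endpoint is real, but the criteria and field names below are from my reading of the v2 API docs, and the wildcard-on-project-number trick standing in for the web UI's Activity Code box is my assumption, so verify before trusting any of it:

```python
import requests  # third-party: pip install requests

# NIH RePORTER v2 search endpoint (https://api.reporter.nih.gov/).
URL = "https://api.reporter.nih.gov/v2/projects/search"

payload = {
    # Assumption: supplement records carry an S1/S2 suffix in the
    # project number, so a wildcard match can find them.
    "criteria": {"project_nums": ["*S1*", "*S2*"], "fiscal_years": [2018]},
    "include_fields": ["ProjectNum", "ProjectTitle", "ContactPiName"],
    "offset": 0,
    "limit": 25,
}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()
# Result keys come back snake_case, as far as I can tell; adjust if not.
for hit in resp.json().get("results", []):
    print(hit.get("project_num"), "-", hit.get("project_title"))
```

As noted above, even when this works you don't get much detail on what the supplement itself was for.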

I first became aware of the Administrative Supplement in the last of these contexts. I was OUTRAGED, let me tell you. It seemed to be a way by which the well-connected and highly-established use their pet POs to enrich their programs beyond what they already had via competition. Some certain big labs seemed to be constantly supplemented on one award or other. Me, I sure had "unanticipated expenses" when I was just getting started. I had plenty of things that I could have used a few extra modules of cash to pay for to enhance the impact of my projects. I did not have any POs looking to hand me any supplements unasked, and when I hinted very strongly** about my woes there was no help to be had***. I did not like administrative supplements as practiced one bit. Nevertheless, I was young and still believed in the process. I believed that I needn't pursue the supplement avenue too hard because I was going to survive into the mid career stretch and just write competing apps for what I needed. God, I was naive.

Perhaps. Perhaps if I'd fought harder for supplements they would have been awarded. Or maybe not.

When I became aware of the diversity supplements, I became an instant fan. This was much more palatable. It meant that at any time a funded PI found a likely URM recruit to science, they could get the support within about 6 weeks. Great for summer research experiences for undergrads, great for unanticipated postdocs. This still seems like a very good thing to me. Good for the prospective trainees. Good for diversity-in-science goals.

The trouble is that from the perspective of the PIs in the audience, this is just another rich-get-richer scheme whereby free labor is added to the laboratory accounts of the already advantaged "haves" of the NIH game. Salary is freed up on the research grants to spend on more toys, reagents or yet another postdoc. This mechanism is only available to a PI who has research grant funding that has a year or more left to run. Since it remains an administrative decision it is also subject to buddy-buddy PI/PO relationship bias. Now, do note that I have always heard from POs in my ICs of closest concern that they "don't expend all the funds allocated" for these URM supplements. I don't know what to make of that but I wouldn't be surprised in the least if any PI with a qualified award, who asks for support of a qualified individual gets one. That would take the buddy/buddy part out of the equation for this particular type of administrative supplement.

It took awhile for me to become aware of the FOA version of the administrative supplement whereby Program was basically issuing a cut-rate RFA. The rich still get richer but at least there is a call for open competition. Not like the first variety I discussed, whereby it seems like only some PIs, but not others, are even told by the PO that a supplement might be available. This seems slightly fairer to me although, again, you have to be in the funded-PI club already to take advantage.

There are sometimes competing versions of the FOA for a topic-based supplement issued as well. In one case I am familiar with, both types were issued simultaneously. I happen to know quite a bit about that particular scenario and it was interesting to see that the competing applications actually were quite bad. I wished I'd gone in for the competing ones instead of the administrative variety****, let me tell you.

The primary advantage of the administrative supplement to Program, in my viewing, is that it is fast. No need to wait for the grant review cycle. These and the competing supplements are also cheap and can be efficient, because of leverage from the activities and capabilities under the already funded award.

As per usual, I have three main goals with this post. First, if you are an underrepresented minority trainee it is good to be aware of this. Not all PIs are and not all think about it. Not to mention they don't necessarily know if you qualify for one of these. I'd suggest bringing it up in conversations with a prospective lab you wish to join. Second, if you are a noob PI I encourage you to be aware of the supplement process and to take advantage of it as you might.

Finally, DearReader, I turn to you and your views on Administrative Supplements. Good? Bad? OUTRAGE?

__
COI DISCLAIMER: I've benefited from administrative supplements under each of the three main categories I've outlined and I would certainly not turn up my nose at any additional ones in the future.

*I suppose it is not impossible that in some cases outside input is solicited.

**complained vociferously

***I have had a few enraging conversations long after the fact with POs who said things like "Why didn't you ask for help?" in the wake of some medium sized disaster with my research program. I keep to myself the fact that I did, and nobody was willing to go to bat for me until it was too late but...whatevs.

****I managed to get all the way to here without emphasizing that even for the administrative supplements you have to prepare an application. It might not be as extensive as your typical competing application but it is much more onerous than a Progress Report. Research supplements look like research grants. Fellowship-like supplements look like fellowships, complete with training plan.

20 responses so far

What if it were about deserve?

Oct 26 2016 Published by under Fixing the NIH

Imagine that New Investigator status (no prior service as PI of a major NIH grant) required an extra timeline document. This would be a chronology of the PI's program to date with emphasis on funding (startup, institutional grants, foundation), how publications were generated, and the PI's scrambling. Another part would focus on grants submitted, score outcomes, revisions, how preliminary data was generated, etc.

Would this improve the way the NIH awards grants?

__
(Keep in mind the NIH has wrung its hands about the dismal fate of the not-yet-funded for many decades and created numerous "fixes" over the years.)

32 responses so far

NINDS tweaks their approach to the F32 / NRSA

NOT-NS-17-002 indicates that NINDS will no longer participate in the NIH-wide parent F32/NRSA funding opportunity because they will be customizing their approach.

 

As previously described in NOT-NS-16-012 and NOT-NS-16-013, NINDS is restructuring its funding support for postdoctoral researchers.  Beginning with the December 8, 2016 due date, research training support for postdoctoral fellows under the F32 activity code will be available through NINDS using PAR-16-458 "NINDS Ruth L. Kirschstein National Research Service Award (NRSA) for Training of Postdoctoral Fellows (F32)."  This NINDS F32 will support postdocs who are within the first 3 years of research training in the sponsor's laboratory, and includes several other key differences from the parent F32. Most notably, applicants are only eligible for the NINDS F32 prior to starting, or within the first 12 months of starting, their postdoctoral training in the sponsor's laboratory or research environment. Because of the very early application, no preliminary data are expected.  It is anticipated that another Funding Opportunity Announcement for postdocs, which utilizes the K01 activity code, will be published in time for the February 12, 2017 initial receipt date. This will be available to applicants in their second through fourth year of cumulative postdoctoral research experience (see NOT-NS-16-013). 

I remember the initial troll on this but managed to overlook the part where they were going to have a new K01 announcement focused on later-stage postdocs.

I like this, actually. We've gotten into a situation lately where F32s are stuck in the escalating-expectations holding pattern of endless revisions and resubmissions. I just don't see the point of a 3rd year postdoc writing for "training" support that will only arrive in year 4 or 5. Particularly when, at that point, the postdocs who are gunning hard for a faculty research type job should be focusing on the K99/R00. The status quo has been a waste of time, letting awardees languish for extra years just so they get at least a year or two on the F32, and making a mockery of the idea of the F32.

I am likewise encouraged that instead of leaving the 2+ year postdocs at the tender mercies of the K99/R00 process, NINDS has a fill-in with a K01. I note that their warning notice on this looks good.

The NINDS K01 is intended for candidates with a Ph.D. or equivalent research doctoral degree. Candidates will be eligible to apply for the K01 anytime within the second through fourth year of cumulative mentored, postdoctoral research experience, and may be supported by the NINDS K01 within the first 6 years of cumulative postdoctoral research experience. Successful K01 applications will be designed to facilitate the continuation of outstanding, innovative projects, combined with career development activities that will prepare outstanding postdoctoral, mentored investigators for an independent research career. The K01 application will describe a project that, as demonstrated by preliminary data collected by the applicant, holds promise to result in highly significant results and future discoveries. The K01 candidate will continue to be guided by a postdoctoral mentor, but will be primarily responsible for oversight and conduct of the research project. By the end of the proposed K01 award period, the candidate will be poised to begin an independent research career and will have a well-developed, highly significant project that he/she can take with him/her to an independent research position.

The devil, of course, is in the details. In my most frequent experience, the K01 tends to be won by people already in quasi-faculty positions. People who have been promoted to "Instructor" or "Assistant Research Project Quasi-faculty but not really Scientist" or whatever word salad title your University prefers. I do not see this being favored for award to any old run of the mill year 2 postdoc. Maybe your frame of reference differs, DearReader?

It will be interesting to see how this is used in practice. Will it only be for the people who just-miss on the K99/R00? Or will it occupy the place currently occupied by the F32 with successful applicants having 2-3 years of postdoc work under their belt before applying? [Mayhap these are the same thing these days?]

But I digress.

The most pressing issue of the day is whether the NINDS will succeed in funding 1) a substantial number of F32s from applicants who are finishing their graduate studies and 2) a substantial number from first-year postdocs without much Preliminary Data in the application.

In my estimation if they don't get to at least 50% of awards on #1, this isn't working.

I also predict that the #2 scenario is going to produce a lot of applications with lots of Preliminary Data, just stuff that wasn't completed directly by the applicant herself.

Thoughts folks? Would you like to see this extended to your favorite ICs?

29 responses so far

Bring back the 2-3 year Developmental R01

Sep 19 2016 Published by under Fixing the NIH, NIH, NIH funding

The R21 Mechanism is called the Exploratory/Developmental mechanism. Says so right in the title.

NIH Exploratory/Developmental Research Grant Program (Parent R21)

In the real world of NIH grant review, however, the "Developmental" part is entirely ignored in most cases. If you want a more accurate title, it should be:

NIH High Risk / High Reward Research Grant Program (Parent R21)

This is what reviewers favor, in my experience sitting on panels and occasionally submitting an R21 app. Mine are usually more along the lines of developing a new line of research that I think is important rather than being truly "high risk/high reward".

And, as we all know, the R01 application (5 years, full modular at $250K per annum direct costs if you please) absolutely requires a ton of highly specific Preliminary Data.

So how are you supposed to Develop an idea into this highly specific Preliminary Data? Well, there's the R21, right? Says right in the title that it is Developmental.

But....it doesn't work in practice.

So the R01 is an alternative. After all it is the most flexible mechanism. You could submit an R01 for $25K direct costs for one year. You'd be nuts, but you could. Actually you could submit an R03 or R21 for one $25K module too, but with the R01 you would then have the option to put in a competitive renewal to continue the project along.

The only thing stopping this from being a thing is the study section culture that won't accept it. Me, I see a lot of advantages to using shorter (and likely smaller) R01 proposals to develop a new line of work. It is less risky than a 5 year R01, for those that focus on risk/$. It has an obvious path of continuation as a genuinely Developmental attempt. It is more flexible in scope and timing- perhaps what you really need is $100K per year for 3 years (like the old R21) for your particular type of research or job type. It doesn't come laden with quite the same "high risk, high reward" approach to R21 review that biases for flash over solid workmanlike substance.

The only way I see this working is to try it. Repeatedly. Settle in for the long haul. Craft your Specific Aims opening to explain why you are taking this approach. Take the Future Directions blurb and make it really sparkle. Think about using milestones and decision points to convince the reviewers you will cut this off at the end if it isn't turning out to be that productive. Show why your particular science, job category, institute or resources match up to this idea.

Or you could always just shout aimlessly into the ether of social media.

41 responses so far

Where the NIGMS argument doesn't add up

Jul 08 2016 Published by under Fixing the NIH, NIH Budgets and Economics

The NIGMS continues its ongoing argument for funding more labs with ever decreasing amounts of grant funding in a new Feedback Loop post.

This one focuses, yet again, on "productivity" as assessed by publication counts and (this time) citations of those publications. It is, as always, significantly flawed by ignoring the effects of Glamour publications. I've done that before and it is starting to bore me. In short, you cannot compare apples to oranges because of the immense difference in the cost of generating your average Nature paper versus a Brain Research paper. And citations don't help because getting into a Glam journal does not mean your paper will get any particular number of citations. Furthermore, there is very little chance that papers that cost 10 or 20 times more will generate ten or twenty times the citations, on average, given the skew in citation distributions and the fact that Glam journals are only hitting means in the 30-40 range. Finally, their "efficiency" measures completely ignore the tremendous inefficiencies of interrupted funding, which is a reality under the current system and also not necessarily fixed with their spread-the-wealth schemes.
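To make the apples-to-oranges arithmetic concrete, here's a back-of-the-envelope comparison. The ~35 mean citations for a Glam paper comes from the 30-40 range above; the dollar costs and the modest-journal citation count are made-up illustrations of the 10-20x cost gap:

```python
# Illustrative numbers only; see the caveats in the text above.
glam_cost, glam_cites = 500_000, 35      # ~10x the cost of a modest paper
modest_cost, modest_cites = 50_000, 12   # assumed Brain-Research-tier paper

print(glam_cites / glam_cost * 100_000)      # 7.0  citations per $100K
print(modest_cites / modest_cost * 100_000)  # 24.0 citations per $100K
```

On these assumptions the Glam paper would need 120 citations, not 35, just to match the modest paper per dollar, which is exactly where the skew of citation distributions bites.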

The real issue of the day is the opinion of the fans of NIGMS's "conclusion*", which reads:

Overall, however, the data suggest that supporting a greater number of investigators at moderate funding levels is a better investment strategy than concentrating high amounts of funding in a smaller number of researchers.

The Sally Rockey blog entry on "mythbusting" is relevant here. As of FY2009 about 72% of NIH funded investigators had one RPG. Another 20% had two and maybe 5% had three.

That's all.

The NIGMS data analyses are big on fitting productivity lines to about the single R01 level of direct costs (~$200K per year) and showing how the productivity/cost drops off as the grant funding increases. Take a good look at the most recent analysis. Linear productivity up to $300K direct costs with the 75%ile sustained all the way to $500K. The famous original 2010 analysis by Jeremy Berg at NIGMS is pretty similar in the sense that you don't get much change in the line fit to mean publications until you get to the $600-$700K direct costs range.

There is a critical point in lining up these two bits of information, which is that the NIGMS policy intent is not supported by their analysis and can't be. The one or two RPG levels from Rockey's post should be interpreted in full modular R01 terms ($250K direct, usually cut to $200K or $225K direct, and in NIGMS' case to 4 years by default) with a little bit of float upwards for the rare cases. Consequently, it is obvious that most NIH awardees operate in the ~$200-250K part of NIGMS' dataset. Another 20% operate in the $400-$500K direct range. In other words, well within the linear part of the productivity/cost curve.

Mean publications as represented by the 2010 Berg analysis are increasing linearly well up to the three to four grant level of $750K direct costs.

In either case, the "inefficient" grant levels are being obtained by a vanishingly small number of investigators.

Fine, screw them, right?

Sure....but this does nothing to address either the stated goal of NIGMS in hedging their bets across many labs or the goal of the unfunded, i.e., to increase their chances substantially.

A recent Mike Lauer blog post showed that about a third of PIs who seek RPG funding over a rolling 5 year interval achieve it. Obviously, if you took all the multi-grant PIs and cut them down to one grant tomorrow, you'd be able to bump funded investigators up by 15-20%, assuming the FY2009 numbers still hold**. It isn't precise because if you limit the big guys to one award then these are going to drift up to $499K direct at a minimum, and a lot more will have special permission to crest the $500K threshold.
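One way to get numbers in that ballpark from the FY2009 distribution quoted above (72% one RPG, 20% two, ~5% three; I treat the leftover few percent as single-grant holders, and the factor-of-two budget drift at the end is my stand-in for the $499K problem):

```python
# Per 100 currently funded PIs, from the FY2009 distribution above.
pis = 100
grants = 72 * 1 + 20 * 2 + 5 * 3 + 3 * 1   # = 130 awards in total
freed = grants - pis                        # 30 awards freed by a one-per-PI cap

print(freed / pis)       # 0.30 -> ~30% more funded PIs, naively
print(freed / 2 / pis)   # 0.15 -> ~15% once capped awards double in size
```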

There will be a temporary sigh of relief and some folks will get funded at 26%ile. Sure. And then there will be even more PIs in the game seeking funding and it will continue to be a dogfight to retain that single grant award. And the next round of newbies will face the same steep odds of entry. Maybe even steeper.

So the ONLY way for NIGMS' plan to work is to cut per-PI awards way, way down into the front part of their productivity curves. Well below the point of inflection ($300-500K or even $750K depending on measure) where papers-per-grant-dollar drops off the linear trend. Even the lowest estimate of $300K direct is more than one full-modular grant. It will take a limit substantially below this level*** to improve perceptions of funding ease or to significantly increase the number of funded labs.

Which makes their argument based on those trends a lie, if they truly intend it to support their "better investment strategy". Changing the number of investigators they support in any fundamental way means limiting per-PI awards to the current full modular limit (with typical reductions) at the least, and very likely substantially below this level to produce anything like a phase change.

That's fine if they want to just assert "we think everyone should only have X amount of direct costs" but it is not so fine if they argue that they have some objective, productivity-based data analysis to support their plans. Because it does not.

__
*This is actually their long standing assertion that all of these seemingly objective analyses are designed to support.

**should be ballpark, given the way Program has been preserving unfunded labs at the expense of extra awards to funded labs these days.

***I think many people arguing in favor of the NIGMS type of "small grants for all" strategy operate from the position that they personally deserve funding. Furthermore that some grant award of full modular level or slightly below is sufficient for them. Any dishonest throwaway nod to other types of research that are more expensive (as NIGMS did "We recognize that some science is inherently more expensive, for example because of the costs associated with human and animal subjects.") is not really meant or considered. This is somewhat narrow and self-involved. Try assuming that all of the two-granters in Rockey's distribution really need that amount of funding (remember the erosion of purchasing power?) and that puts it at more like 92% of awardees that enjoy basic funding at present. Therefore the squeeze should be proportional. Maybe the bench jockeys should be limited to $100K or even $50K in this scenario? Doesn't seem so attractive if you consider taking the same proportional hit, does it?

22 responses so far
