Archive for the 'NIH' category

Entitled to a Grant: What is fair?

May 02 2016 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

I am genuinely curious as to how you people see this. Is there any particular difference between the people arguing that acquisition of the first major grant award should be protected over multiple awards and the people arguing that acquisition of the first and third concurrent awards should be on an equal footing?

If we agree that NIH (or NSF or CIHR or whatever) grants are competitively awarded, it follows that nobody is actually entitled to a grant. And as far as I am aware, all major funding agencies operate in a way that states and demonstrates the truth of this statement.

Specifically in the NIH system, it is possible for the NIH officials to choose not to fund a grant proposal that gets the best possible score and glowing reviews during peer review. Heck, this could happen repeatedly for approximately the same project and the NIH could still choose not to fund it.

Nobody is entitled to a grant from the NIH. Nobody.

It is also the case that the NIH works very hard to ensure a certain amount of equal representation in their awarded grants. By geography (State and Congressional district), by PI characteristics of sex and prior NIH PIness, by topic domain (see the 28 ICs) or subdomain (see the Divisions and Branches of the ICs; also RFAs), etc.

Does a lean toward prioritizing the award of a grant to those with no other major NIH support (and we're not just talking about the newcomers- plenty of well-experienced folks are getting special treatment because they have run out of other NIH grant support) have a justification?

Does the following graph, posted by Sally Rockey, the previous head of Extramural Research at the NIH, make a difference?

This shows the percentage of all PIs in the NIH system for Fiscal Years 1986, 1998, 2004 (end of doubling) and 2009 who serve as PI on 1-8 Research Project Grants. In the latest data, 72.3% had only one R01 and 93% had 1 or 2 concurrent RPGs. There were 5.4% of the PIs that held 3 grants and 1.2% that held 4 grants. I just don't see where shifting the 7% of 3+ concurrent awards into the 1-2 grant population is going to budge the needle on the perceived grant chances of those without any major NIH award.

Yes, obviously there will be some folks funded who would otherwise not have been. Obviously. But if this is put through in a systematic way*, the first thing the current 3+ grant holders are going to do is stop putting in modular grants and max out their allowable 2 at $499,999 direct costs. Maybe some will even get Program permission to breach the $500,000 DC / y threshold. So there won't be a direct shift of 7% of grants back into the 1-2 grant PI population.
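As a back-of-the-envelope check on that intuition, here is a sketch using only the percentages quoted above (the small residual fraction of PIs holding 5+ grants is ignored, which is an assumption on my part):

```python
# Distribution of concurrent RPGs per PI, from the FY2009 figures quoted above.
# The 2-grant share is inferred as 93% minus the 72.3% one-grant share.
pct_pis = {1: 72.3, 2: 93.0 - 72.3, 3: 5.4, 4: 1.2}  # percent of PIs holding n grants

total_grants = sum(n * pct for n, pct in pct_pis.items())          # grants per 100 PIs
freed = sum((n - 2) * pct for n, pct in pct_pis.items() if n > 2)  # excess over a 2-grant cap

print(f"Grants per 100 PIs: {total_grants:.1f}")
print(f"Freed by a hard 2-grant cap: {freed:.1f} per 100 PIs "
      f"({100 * freed / total_grants:.1f}% of all awards)")
```

Under these (rounded) numbers a hard cap frees fewer than 8 grants per 100 PIs, under 6% of all awards, and that is before the budget-maxing behavior described above eats into it.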

There has been a small trend toward PIs holding more grants concurrently from 1986 to the late noughties, but this is undoubtedly down to the decreasing purchasing power of the modular-budget grant.

I've taken their table of yearly adjustments and used those to calculate the increase necessary to keep pace with inflation (black bars) and the decrement in purchasing power (red bars). The starting point was the 2001 fiscal year (and the BRDPI spreadsheet is older so the 2011 BRDPI adjustment is predicted, rather than actual). As you can see, a full modular $250,000 year in 2011 has 69% of the purchasing power of that same award in 2001.
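That 69% figure is straightforward to reproduce. The actual year-by-year BRDPI adjustments aren't reproduced here, so this sketch assumes a constant 3.8%/yr rate, roughly the 2001-2011 average implied by the 69% endpoint:

```python
# Purchasing power of a flat $250,000/yr modular budget, 2001-2011.
# Real BRDPI adjustments vary year to year; a constant 3.8%/yr is assumed
# here purely as an illustration that reproduces the ~69% figure.
BRDPI_RATE = 0.038     # assumed average yearly biomedical inflation
MODULAR_CAP = 250_000  # full modular direct costs, unchanged since FY1999

deflator = 1.0
for year in range(2002, 2012):
    deflator *= 1 + BRDPI_RATE
    real_value = MODULAR_CAP / deflator
    print(f"{year}: nominal $250,000 is worth ${real_value:,.0f} in 2001 dollars")

print(f"2011 purchasing power: {100 / deflator:.0f}% of 2001")
```

The same loop run in reverse (multiplying instead of dividing) gives the black bars: the roughly $363,000 a "full modular" year would have to be in 2011 to match its 2001 value.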

Without that factor, I'd say the relative proportions of PIs holding 1, 2, 3, etc. grants would be even more similar across time than they already are.

So I come back to my original question. What is fair? What policies should the NIH or any broad governmental funding body adopt when it comes to distributing the grant wealth across laboratories? On what basis should they do this?

Fairness? Diversity of grant effort? PR/optics?

*and let us face it, it is hugely unlikely that the entire NIH will put through a 2-grant cap without any exceptions. Even with considerable force and authority behind it, any such initiative is likely to be only partially successful in preventing PIs from holding 3+ grants.

DISCLAIMER: As always, I am an interested party in these discussions. My lab's grant fortunes are affected by broad sweeping policies that the NIH might choose to adopt or fail to adopt. You should always read my comments about the NIH grant game with this in mind.

95 responses so far

Open Grantsmanship

Apr 27 2016 Published by under Careerism, NIH, NIH Careerism

The Ramirez Group is practicing open grantsmanship by posting "R01 Style" documents on a website. This is certainly a courageous move and one that is unusual for scientists. It was not so long ago that mid-to-senior level Principal Investigator types were absolutely dismayed to learn that CRISP, the forerunner to RePORTER, would hand over their funded grants' abstracts to anyone who wished to see them.

There are a number of interesting things here to consider. On the face of it, this responds to a plea that I've heard now and again for real actual sample grant materials. Those who are less well-surrounded by grant-writing types can obviously benefit from seeing how the rather dry instructions from NIH translate into actual working documents. Good stuff.

As we move through certain changes put in place by the NIH, even the well experienced folks can benefit from seeing how one person chooses to deal with the Authentication of Resources requirement or some such. Budgeting may be helpful for others. Ditto the Vertebrate Animals section.

There is the chance that this will work as Open Pre-Submission Peer Review for the Ramirez group as well. For example, I might observe that referring to Santa Cruz as the authoritative proof of authentic antibodies may not have the desired effect with all reviewers. This might then allow them to take a different approach to this section of the grant, avoiding the dangers of a reviewer who "heard SC antibodies are crap".

But there are also drawbacks to this type of Open Science. In this case I might note that posting a Vertebrate Animals statement (or certain types of research protocol description) is just begging the AR wackaloons to make your life hell.

But there is another issue here that I think the Readers of this blog might want to dig into.

Priority claiming.

As I am wont to observe, the chances are high in the empirical sciences that if you have a good idea, someone else has had it as well. And if the ideas are good enough to shape into a grant proposal, someone else might think these thoughts too. And if the resulting application is a plan that will be competitive, well, it will have been shaped into a certain format space by the acquired wisdom that is poured into a grant proposal. So again, you are likely to have company.

Finally, we all know that the current NIH game means that each PI is submitting a LOT of proposals for research to the NIH.

All of this means that it is likely that if you have proposed a 5 year plan of research to the NIH someone else has already, or will soon, propose something that is a lot like it.

This is known.

It is also known that your chances of bringing your ideas to fruition (published papers) are a lot higher if you have grant support than if you do not. The other way to say this is that if you do not happen to get funded for this grant application, the chances that someone else will publish papers related to your shared ideas are higher.

In the broader sense this means that if you do not get the grant, the record will be less likely to credit you for having those ideas and brilliant insights that were key to the proposal.

So what to do? Well, you could always write Medical Hypotheses pieces and review papers, sure. But these can be imprecise. They describe general hypotheses and predictions but... that's about all.

It would be of more credit to you to lay out the way that you would actually test those hypotheses, would it not? In all of the brilliant elegance of experimental design, the key controls and the fancy scientific approaches that are respected after the fact as amazing work. Maybe even with a little bit of preliminary evidence that you are on the right track, even if that evidence is far too limited to ever be published.

Enter the Open Grantsmanship ploy.

It is genius.

For two reasons.

First, of course, is pure priority claiming. If someone else gets "your" grant and publishes papers, you get to go around whining that you had the idea first. Sure, many people do this but you will have evidence.

Second, there is the subtle attempt to poison the waters for those other competitors' applications. If you can get enough people in your subfield reading your Open Grant proposals then just maaaaaybe someone on a grant panel will remember this. And when a competing proposal is under review just maaaaaaybe they will say "hey, didn't Ramirez Group propose this? maybe it isn't so unique.". Or maybe they will be predisposed to see that your approach is better and downgrade the proposal that is actually under review* accordingly. Perhaps your thin skin of preliminary data will be helpful in making that other proposal look bad. Etc.

*oh, it happens. I have had review comments on my proposals that seemed weird until I became aware of other grant proposals that I know for certain sure couldn't have been in the same round of review. It becomes clear in some cases that "why didn't you do things this way" comments are because that other proposal did indeed do things that way.

21 responses so far

NIA paylines and anti-ESI bias of review

Apr 20 2016 Published by under NIH, NIH funding

MillerLab noted on the twitters that the NIA has released its new paylines for FY2016. If your grant proposal scores within the 9%ile zone, congrats! Unless you happen to be an Early Stage Investigator, in which case you only have to score within the top 19% of applications, woot!

I was just discussing the continuing nature of the ESI bias in a comment exchange with Ferric Fang on another thread. He thinks

The problem that new investigators have in obtaining funding is not necessarily a result of bias but rather that it is more challenging for new investigators to write applications that are competitive with those of established investigators because as newcomers, they have less data and fewer accomplishments to cite.

and I disagree, viewing this as assuredly a bias in review. The push to equalize success rates of ESI applicants with those of established investigators (generational screw-job that it is) started back in 2007 with prior NIH Director Elias Zerhouni. The mechanism to accomplish this goal was, and continues to be, naked quota-based affirmative action. NIH will fund ESI applications out of the order of review until they reach approximately the same success percentages as are enjoyed by the established investigator applications. Some ICs are able to game this out predictively by using different paylines, i.e. the percentile ranks within which almost all grants will be funded.

As mentioned, NIA has to use a 19%ile cutoff for ESI applications to equal a 9%ile cutoff for established investigator applications. This got me thinking about the origin of the ESI policies in 2007 and the ensuing trends. Luckily, the NIA publishes its funding policy on the website here. The formal ESI policy at NIA apparently didn't kick in until 2009, from what I can tell. What I am graphing here are the paylines used by NIA by fiscal year to select Exp(erienced), ESI and New Investigator (NI) applications for funding.

It's pretty obvious that the review bias against ESI applications continues essentially unabated*. All the talk about "eating our seed corn", the hand wringing about a lost generation, the clear signal that NIH wanted to fund the noobs at equivalent rates as the older folks....all fell on deaf ears as far as the reviewers are concerned. The quotas for the ESI affirmative action are still needed to accomplish the goal of equalizing success rates.

I find this interesting.

*Zerhouni noted right away [PDF] that study sections were fighting back against the affirmative action policy for ESI applications.

Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni.

Note: It is probably only a coincidence that CSR reduced the number of first time reviewers in FY2014, FY2015 relative to the three prior FYs.

16 responses so far

On removing grant deadlines

Apr 20 2016 Published by under NIH, NIH Careerism, NIH funding

Eric Hand reported in Science that one NSF pilot program found that allowing any-time submission reduced application numbers.

Assistant Director for Geosciences Roger Wakimoto revealed the preliminary results from a pilot program that got rid of grant proposal deadlines in favor of an anytime submission. The numbers were staggering. Across four grant programs, proposals dropped by 59% after deadlines were eliminated.

I have been bombarded with links to this article/finding and queries as to what I think.

Pretty much nothing.

I do know that NIH has been increasingly liberal with allowing past-deadline submissions from PIs who have served on study section. So there is probably a data source to draw upon inside CSR if they care to examine it.

I do not know if this would do anything similar if applied to the NIH.

The NSF pilot was for

geobiology and low-temperature geochemistry, geomorphology and land-use dynamics, hydrological sciences, and sedimentary geology and paleobiology.

According to the article these are fields in which

"many scientists do field work, having no deadline makes it easier for collaborators to schedule time when they can work on a proposal".

This field-work bit is not generally true of the NIH extramural community. I think it obvious that continual submission helps with scheduling time, but I would note that it also eliminates a stick for the more proactive members of a collaboration to beat the laggards into line. As a guy who hits his deadlines for grant submission, it's probably in my interest to further lower the encouragements the lower-energy folks require.

According to a geologist familiar with reviewing these grants

The switch is “going to filter for the most highly motivated people, and the ideas for which you feel the most passion,” he predicts. When he sits on merit review panels, he finds that he can usually reject half of the proposals right away as being hasty or ill-considered. “My hope is that this has taken off the bottom 50%,” he says. “Those are the ones you read and say, ‘Did they have their heart in this?’”

Personally, I see very few NIH grant proposals that appear to me to be "hasty or ill-considered" or that cause me to doubt the PI has her heart in it. And you know how I feel about the proposition that the RealProblem with NIH grant success hinges on whether or not PIs refine and hone and polish their applications into some shining gem of a document. Applications are down, so success rates go up; that is the only thing we need to take away from this pilot, if you ask me. Any method by which you could decrease NIH applications would likewise seem to improve success rates.

Would it work for NIH types? I tend to doubt it. That program at NSF started with only two submission rounds per year. NIH has three rounds for funding per year, but this results from a multitude of deadlines including new R01, new R21/R03, two more for the revised apps, special ones for AIDS-related, RFAs and assorted other mechanisms. As I mentioned above, if you review for the NIH (including Advisory Council service) you get an extra extension to submit for a given decision round.

The pressure for most of us to hit any specific NIH deadline during the year is, I would argue, much lower at baseline. So if the theory is that NSF types were pressured to submit junky applications because their next opportunity was so far away....this doesn't apply to NIH folks.

6 responses so far

NIH Grant Lottery

Apr 18 2016 Published by under Fixing the NIH, NIH

Fang and Casadevall have a new piece up that advocates turning NIH grant selection into a modified lottery. There is a lot of the usual dissection of the flaws of the NIH grant funding system here, dressed up as "the Case for", but really they don't make any specific or compelling argument beyond "it's broken, here's our RealSolution".

we instead suggest a two-stage system in which (i) meritorious applications are identified by peer review and (ii) funding decisions are made on the basis of a computer-generated lottery. The size of the meritorious pool could be adjusted according to the payline. For example, if the payline is 10%, then the size of the meritorious pool might be expected to include the top 20 to 30% of applications identified by peer review.
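Mechanically, the two-stage scheme they describe is simple. A sketch (the score scale, pool fraction and payline here are illustrative parameters, not anything specified in their piece):

```python
import random

def modified_lottery(scores, payline=0.10, pool_frac=0.25, seed=None):
    """Two-stage selection: peer review ranks applications, then awards
    are drawn at random from the meritorious pool.

    scores: dict mapping application ID -> review score (lower = better).
    payline: fraction of all applications that can be funded.
    pool_frac: fraction of applications deemed meritorious by review.
    """
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)                  # best scores first
    pool = ranked[:max(1, int(len(ranked) * pool_frac))]     # stage 1: peer review
    n_funded = max(1, int(len(ranked) * payline))
    return rng.sample(pool, min(n_funded, len(pool)))        # stage 2: lottery

# 100 applications with arbitrary review scores
apps = {f"app{i:03d}": random.random() for i in range(100)}
funded = modified_lottery(apps, payline=0.10, pool_frac=0.25, seed=1)
print(f"{len(funded)} of {len(apps)} funded, drawn from a 25-application pool")
```

Note that everything interesting about the proposal lives in the `scores` input, i.e. in peer review, which is exactly where the biases discussed below come in.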

They envision eliminating the face to face discussion to arrive at the qualified pool of applications:

Critiques would be issued only for grants that are considered nonmeritorious, eliminating the need for face-to-face study section meetings to argue over rankings,

Whoa, back up. Under current NIH review, critiques are not a result of the face-to-face meeting. That is not the "need" served by meeting to discuss the applications. They are misguided in a very severe and fundamental way about this. Discussion serves, ideally, to calibrate individual review, to catch errors, to harmonize disparate opinions, to refine the scoring... but in the majority of cases the written critiques are not changed a whole lot by the process, and the resume of the discussion is a minor outcome.

Still, this is a minor point of my concern with their argument.

Let us turn to the juxtaposition of

New investigators could compete in a separate lottery with a higher payline to ensure that a specific portion of funding is dedicated to this group or could be given increased representation in the regular lottery to improve their chances of funding.

with

we emphasize that the primary advantage of a modified lottery would be to make the system fairer by eliminating sources of bias. The proposed system should improve research workforce diversity, as any female or underrepresented minority applicant who submits a meritorious application will have an equal chance of being awarded funding.

Huh? If this lottery is going to magically eliminate bias against female or URM applicants, why is it going to fail to eliminate bias against new investigators? I smell a disingenuous appeal to fairness for the traditionally disadvantaged as a cynical ploy to get people on board with their lottery plan. The comment about new investigators shows that they know full well it will not actually address review bias.

Their plan uses a cutoff. 20%, 30%... something. No matter where that cutoff line is, reviewers will know something about where it lies. And they will review/score grants accordingly. Just as Zerhouni noted, when news of special ESI paylines got around, study sections immediately started giving ESI applications even worse scores. If there is a bias today that pushes new investigator, woman or URM PIs' applications outside of the funding range, there will be a bias tomorrow that keeps them disproportionately outside of the Fang/Casadevall lottery pool.

There is a part of their plan that I am really unclear on and it is critical to the intended outcome.

Applications that are not chosen would become eligible for the next drawing in 4 months, but individual researchers would be permitted to enter only one application per drawing, which would reduce the need to revise currently meritorious applications that are not funded and free scientists to do more research instead of rewriting grant applications.

This sounds suspiciously similar to a plan that I advanced some time ago. This post from 2008 was mostly responding to the revision-queuing behavior of study sections.

So this brings me back to my usual proposal of which I am increasingly fond. The ICs should set a "desired" funding target consistent with their historical performance, say 24% of applications, for each Council round. When they do not have enough budget to cover this many applications in a given round, they should roll the applications that missed the cut into the next round. Then starting the next Council round they should apportion some fraction of their grant pickups to the applications from the prior rounds that were sufficiently meritorious from a historical perspective. Perhaps half roll-over and half from the current round of submissions. That way, there would still be some room for really outstanding -01 apps to shoulder their way into funding.

The great part is that essentially nothing would change. The A2 app that is funded is not going to result in scientific conduct that differs in any substantial way from the science that would have resulted from the A1/15%ile app being funded. New apps will not be any more disadvantaged by sharing the funding pie with prior rounds than they currently are facing revision-status-bias at the point of study section review.
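The roll-over idea above is easy to express as a queue. This is a sketch of my proposal, not an NIH mechanism; in a fuller version only applications above the historical merit cut would enter the queue, whereas here every near-miss does:

```python
from collections import deque

def council_round(new_apps, queue, n_fundable):
    """Fund one Council round: up to half the slots go to meritorious apps
    rolled over from prior rounds (FIFO), the rest to current submissions.
    new_apps must be ordered best-first by review score."""
    funded = []
    n_rollover = min(len(queue), n_fundable // 2)
    for _ in range(n_rollover):
        funded.append(queue.popleft())   # prior-round apps funded first
    for app in new_apps:
        if len(funded) < n_fundable:
            funded.append(app)
        else:
            queue.append(app)            # unfunded apps wait for the next round
    return funded

queue = deque()
round1 = council_round([f"r1-{i}" for i in range(10)], queue, n_fundable=6)
round2 = council_round([f"r2-{i}" for i in range(10)], queue, n_fundable=6)
print(round1)  # all six slots go to round-1 apps
print(round2)  # three rolled-over round-1 apps plus three round-2 apps
```

The half-and-half split is the knob: it is what leaves room for outstanding -01 apps to shoulder their way into funding while the queue drains.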

What I am unclear on in the Fang/Casadevall proposal is the limit to one application "per drawing". Is this per Council round per IC? Per study section per Council round per IC? NIH-wide? Would the PI be able to stack up potentially-meritorious apps that go unfunded so that they get considered in series across many successive rounds of lotteries?

These questions address their underlying assumption that a lottery is "fair". It boils down to the question of whether everyone is equally able to buy the same number of lottery tickets.

The authors also have to let in quite reasonable exceptions:

Furthermore, we note that program officers could still use selective pay mechanisms to fund individuals who consistently make the lottery but fail to receive funding or in the unlikely instance that important fields become underfunded due to the vagaries of luck.

So how is this any different from what we have now? Program Officers are already trusted to right the wrongs of the tyranny of peer review. Arguing for this lottery system implies that you think that PO flexibility on exception funding is either insufficient or part of the problem. So why let it back into the scheme?

Next, the authors stumble with a naked assertion

The proposed system would treat new and competing renewal applications in the same manner. Historically, competing applications have enjoyed higher success rates than new applications, for reasons including that these applications are from established investigators with a track record of productivity. However, we find no compelling reason to justify supporting established programs over new programs.

that is highly personal. I find many compelling reasons to justify supporting established programs. And many compelling reasons not to do so preferentially. And many compelling reasons to demand a higher standard, or to ban them entirely. I suspect many of the participants in the NIH system also favor one or the other of the different viewpoints on this issue. What I find to be unconvincing is nakedly asserting this "we find no compelling reason" as if there is not any reasonable discussion space on the issue. There most assuredly is.

Finally, the authors appeal to a historical example which is laughably bad for their argument:

we note that lotteries are already used by society to make difficult decisions. Historically, a lottery was used in the draft for service in the armed forces...If lotteries could be used to select those who served in Vietnam, they can certainly be used to choose proposals for funding.

As anyone who pays even the slightest attention realizes, the Vietnam era selective service lottery in the US was hugely biased and subverted by the better-off and more-powerful to keep their offspring safe. A higher burden was borne by the children of the lower classes, the unconnected and, as if we need to say it, ethnic minorities. Referring to this example may not be the best argument for your case, guys.

45 responses so far

Bias at work

A piece in Vox summarizes a study from Nextions showing that lawyers are more critical of a brief written by an African-American. 

I immediately thought of scientific manuscript review and the not-unusual request to have a revision "thoroughly edited by a native English speaker". My confirmation bias suggests that this is way more common when the first author has an apparently Asian surname.

It would be interesting to see a similar balanced test for scientific writing and review, wouldn't it?

My second thought was.... Ginther. Is this not another one of the thousand cuts contributing to African-American PIs' lower success rates and their need to revise proposals extra times? Seems as though it might be.

22 responses so far

Thought of the Day

Apr 09 2016 Published by under Fixing the NIH, NIH, NIH Careerism

"Uppity" is a fascinating concept when it comes to NIH Grant award. 

We know the sentiment applies to newer and younger investigators. I've heard countless reviewer and Program Officer comments which amount to "let's not get too big for your britches, young 'un!" in my day.

I wonder how much of the Ginther effect is related to sentiments similar to "damn uppity [insert subvocalization]"? 

17 responses so far

MIRA Moaners

Mar 31 2016 Published by under Fixing the NIH, NIH, NIH Careerism

Jocelyn Kaiser reports that some people who applied for MIRA person-not-project support from NIGMS are now complaining. 

I have no* comment.


25 responses so far

Reviewer mindset 

Mar 22 2016 Published by under NIH Careerism, Peer Review

I was just observing that I'd far rather have my grants reviewed by someone who had just received a new grant (or fundable score) than by someone who had been denied a few times recently.

It strikes me that this may not be universal logic.


Is the disgruntled-applicant reviewer going to be sympathetic? Or will he do unto you as he has been done to?

Will the recently-awarded reviewer be in a generous mood? Or will she pull up the ladder? 

25 responses so far

How you know NIH officialdom is not being honest with you

Mar 10 2016 Published by under Anger, Fixing the NIH

Continue Reading »

18 responses so far
