Archive for the 'NIH' category

NIA paylines and anti-ESI bias of review

Apr 20 2016 Published under NIH, NIH funding

MillerLab noted on the twitters that the NIA has released its new paylines for FY2016. If your grant proposal scores within the 9%ile zone, congrats! Unless you happen to be an Early Stage Investigator, in which case you only have to score within the top 19% of applications, woot!

I was just discussing the continuing nature of the ESI bias in a comment exchange with Ferric Fang on another thread. He thinks

The problem that new investigators have in obtaining funding is not necessarily a result of bias but rather that it is more challenging for new investigators to write applications that are competitive with those of established investigators because as newcomers, they have less data and fewer accomplishments to cite.

and I disagree, viewing this as assuredly a bias in review. The push to equalize success rates of ESI applicants with those of established investigators (generational screw-job that it is) started back in 2007 with prior NIH Director Elias Zerhouni. The mechanism to accomplish this goal was, and continues to be, naked quota-based affirmative action. NIH will fund ESI applications out of the order of review until they reach approximately the same success percentages as are enjoyed by the established investigator applications. Some ICs are able to game this out predictively by using different paylines, that is, the percentile ranks within which almost all grants will be funded.

[Figure: NIA funding policy paylines by fiscal year.] As mentioned, NIA has to use a 19%ile cutoff for ESI applications to equal a 9%ile cutoff for established investigator applications. This got me thinking about the origin of the ESI policies in 2007 and the ensuing trends. Luckily, the NIA publishes its funding policy on the website here. The formal ESI policy at NIA apparently didn't kick in until 2009, from what I can tell. What I am graphing here are the paylines used by NIA by fiscal year to select Exp(erienced), ESI and New Investigator (NI) applications for funding.
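For the mechanics-minded, the differential payline boils down to a simple rule: an application is picked up if its percentile rank falls at or under the cutoff for its PI category. A minimal sketch, using only the FY2016 NIA numbers quoted above (the code and the example percentile are illustrative, not NIA's actual selection process):

```python
# Differential paylines: fund an application if its percentile rank falls
# at or under the cutoff for its PI category. The 9/19 cutoffs are the FY2016
# NIA numbers discussed above; the NI payline is published too but not quoted here.
PAYLINES = {"Experienced": 9, "ESI": 19}

def fundable(percentile: float, category: str) -> bool:
    """True if the application falls within its category's payline."""
    return percentile <= PAYLINES[category]

# A 15th-percentile application misses the Experienced cutoff but makes the ESI one.
print(fundable(15, "Experienced"))  # False
print(fundable(15, "ESI"))          # True
```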

It's pretty obvious that the review bias against ESI applications continues essentially unabated*. All the talk about "eating our seed corn", the hand wringing about a lost generation, the clear signal that NIH wanted to fund the noobs at the same rates as the older folks....all fell on deaf ears as far as the reviewers are concerned. The quotas for the ESI affirmative action are still needed to accomplish the goal of equalizing success rates.

I find this interesting.

__
*Zerhouni noted right away [PDF] that study sections were fighting back against the affirmative action policy for ESI applications.

Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni.

Note: It is probably only a coincidence that CSR reduced the number of first-time reviewers in FY2014 and FY2015 relative to the three prior FYs.

16 responses so far

On removing grant deadlines

Apr 20 2016 Published under NIH, NIH Careerism, NIH funding

Eric Hand reported in Science that one NSF pilot program found that allowing any-time submission reduced application numbers.

Assistant Director for Geosciences Roger Wakimoto revealed the preliminary results from a pilot program that got rid of grant proposal deadlines in favor of an anytime submission. The numbers were staggering. Across four grant programs, proposals dropped by 59% after deadlines were eliminated.

I have been bombarded with links to this article/finding and queries as to what I think.

Pretty much nothing.

I do know that NIH has been increasingly liberal with allowing past-deadline submissions from PIs who have served on study section. So there is probably a data source to draw upon inside CSR if they care to examine it.

I do not know whether something similar would happen if this were applied to the NIH.

The NSF pilot was for

geobiology and low-temperature geochemistry, geomorphology and land-use dynamics, hydrological sciences, and sedimentary geology and paleobiology.

According to the article these are fields in which

"many scientists do field work, having no deadline makes it easier for collaborators to schedule time when they can work on a proposal".

This field work bit is not generally true of the NIH extramural community. I think it obvious that continual submission helps with scheduling time, but I would note that it also eliminates a stick with which the more proactive members of a collaboration can beat the laggards into line. As a guy who hits his deadlines for grant submission, it's probably in my interest to see those encouragements for the lower-energy folks taken away.

According to a geologist familiar with reviewing these grants

The switch is “going to filter for the most highly motivated people, and the ideas for which you feel the most passion,” he predicts. When he sits on merit review panels, he finds that he can usually reject half of the proposals right away as being hasty or ill-considered. “My hope is that this has taken off the bottom 50%,” he says. “Those are the ones you read and say, ‘Did they have their heart in this?’”

Personally I see very few NIH grant proposals that appear to me to be "hasty or ill-considered" or cause me to doubt the PI has her heart in it. And you know how I feel about the proposition that the RealProblem with NIH grant success hinges on whether or not PIs refine and hone and polish their applications into some shining gem of a document. "Applications are down, therefore success rates go up" is the only thing we need to take away from this pilot, if you ask me. Any method by which you could decrease NIH applications would likewise seem to improve success rates.
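To make the arithmetic explicit: with the number of awards held constant, any drop in applications mechanically raises the success rate. A toy calculation, using the 59% drop reported for the NSF pilot and an invented award count and starting application pool:

```python
# Success rate = awards / applications. Holding awards fixed, a 59% drop in
# applications (the figure from the NSF pilot) raises the rate mechanically.
# The award count and starting application pool are invented for illustration.
awards = 100
apps_before = 1000
apps_after = apps_before * (1 - 0.59)

print(f"before: {awards / apps_before:.1%}")  # 10.0%
print(f"after:  {awards / apps_after:.1%}")   # 24.4%
```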

Would it work for NIH types? I tend to doubt it. That program at NSF started with only two submission rounds per year. NIH has three funding rounds per year, but these are fed by a multitude of deadlines: new R01s, new R21/R03s, two more for the revised apps, special ones for AIDS-related applications, RFAs and assorted other mechanisms. As I mentioned above, if you review for the NIH (including Advisory Council service) you get an extension to submit for a given decision round.

The pressure for most of us to hit any specific NIH deadline during the year is, I would argue, much lower at baseline. So if the theory is that NSF types were pressured to submit junky applications because their next opportunity was so far away....this doesn't apply to NIH folks.

6 responses so far

NIH Grant Lottery

Apr 18 2016 Published under Fixing the NIH, NIH

Fang and Casadevall have a new piece up that advocates turning NIH grant selection into a modified lottery. There is a lot of the usual dissection of the flaws of the NIH grant funding system here, dressed up as "the Case for," but they don't actually make any specific or compelling argument beyond "it's broken, here's our RealSolution".

we instead suggest a two-stage system in which (i) meritorious applications are identified by peer review and (ii) funding decisions are made on the basis of a computer-generated lottery. The size of the meritorious pool could be adjusted according to the payline. For example, if the payline is 10%, then the size of the meritorious pool might be expected to include the top 20 to 30% of applications identified by peer review.
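In concrete terms, the mechanics they describe amount to this: peer review ranks the applications, the top 20-30% form a meritorious pool, and awards up to the payline are drawn from that pool at random. A minimal sketch of that two-stage selection (the function, the pool fraction and the example numbers are mine, not the authors'):

```python
import random

def modified_lottery(applications, payline_frac=0.10, pool_frac=0.25, seed=None):
    """Two-stage selection: peer review ranks applications, the top pool_frac
    form the meritorious pool, and awards up to the payline are drawn at random."""
    rng = random.Random(seed)
    ranked = sorted(applications, key=lambda app: app["percentile"])
    pool = ranked[: int(len(ranked) * pool_frac)]      # stage 1: peer review
    n_awards = int(len(applications) * payline_frac)   # e.g. a 10% payline
    return rng.sample(pool, n_awards)                  # stage 2: the lottery

# 200 hypothetical applications: 20 awards drawn at random from the top 50.
apps = [{"id": i, "percentile": (i + 1) * 0.5} for i in range(200)]
print(len(modified_lottery(apps, seed=1)))  # 20
```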

They envision eliminating the face-to-face discussion to arrive at the qualified pool of applications:

Critiques would be issued only for grants that are considered nonmeritorious, eliminating the need for face-to-face study section meetings to argue over rankings,

Whoa, back up. Under current NIH review, the critiques are not a product of the face-to-face meeting, and producing them is not the reason study sections need to meet to discuss the applications. They are misguided in a very severe and fundamental way about this. Discussion serves, ideally, to calibrate individual review, to catch errors, to harmonize disparate opinions, to refine the scoring....but in the majority of cases the written critiques are not changed a whole lot by the process and the resume of discussion is a minor outcome.

Still, this is a minor point of my concern with their argument.

Let us turn to the juxtaposition of

New investigators could compete in a separate lottery with a higher payline to ensure that a specific portion of funding is dedicated to this group or could be given increased representation in the regular lottery to improve their chances of funding.

with

we emphasize that the primary advantage of a modified lottery would be to make the system fairer by eliminating sources of bias. The proposed system should improve research workforce diversity, as any female or underrepresented minority applicant who submits a meritorious application will have an equal chance of being awarded funding.

Huh? If this lottery is going to magically eliminate bias against female or URM applicants, why is it going to fail to eliminate bias against new investigators? I smell a disingenuous appeal to fairness for the traditionally disadvantaged as a cynical ploy to get people on board with their lottery plan. The comment about new investigators shows that they know full well it will not actually address review bias.

Their plan uses a cutoff. 20%, 30%...something. No matter what that cutoff line is, reviewers will know something about where it lies. And they will review/score grants accordingly. Just as Zerhouni noted, when news of special ESI paylines got around, study sections immediately started giving ESI applications even worse scores. If there is a bias today that pushes new investigator, woman or URM PIs' applications outside of the payline, there will be a bias tomorrow that keeps them disproportionately outside of the Fang/Casadevall lottery pool.

There is a part of their plan that I am really unclear on and it is critical to the intended outcome.

Applications that are not chosen would become eligible for the next drawing in 4 months, but individual researchers would be permitted to enter only one application per drawing, which would reduce the need to revise currently meritorious applications that are not funded and free scientists to do more research instead of rewriting grant applications.

This sounds suspiciously similar to a plan that I advanced some time ago. This post from 2008 was mostly responding to the revision-queuing behavior of study sections.

So this brings me back to my usual proposal of which I am increasingly fond. The ICs should set a "desired" funding target consistent with their historical performance, say 24% of applications, for each Council round. When they do not have enough budget to cover this many applications in a given round, they should roll the applications that missed the cut into the next round. Then starting the next Council round they should apportion some fraction of their grant pickups to the applications from the prior rounds that were sufficiently meritorious from a historical perspective. Perhaps half roll-over and half from the current round of submissions. That way, there would still be some room for really outstanding -01 apps to shoulder their way into funding.

The great part is that essentially nothing would change. The A2 app that is funded is not going to result in science that differs in any substantial way from the science that would have resulted from the A1/15%ile app being funded. New apps will not be any more disadvantaged by sharing the funding pie with prior rounds than they currently are, facing revision-status bias at the point of study section review.
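A minimal sketch of how that roll-over queue might operate, with all numbers, names and the half-and-half split invented for illustration:

```python
from collections import deque

def run_round(new_apps, queue, n_fundable, desired_frac=0.24):
    """One Council round of the roll-over scheme: apps within the historical
    'desired' cutoff that miss the budget are queued; pickups are split roughly
    half-and-half between the carried-over queue and the current round."""
    new_apps = sorted(new_apps)                        # percentile scores, best first
    desired_cut = int(len(new_apps) * desired_frac)    # apps "worth" funding this round
    from_queue = min(len(queue), n_fundable // 2)      # ~half the pickups to the queue
    funded = [queue.popleft() for _ in range(from_queue)]
    funded += new_apps[: n_fundable - from_queue]      # the rest to current submissions
    queue.extend(new_apps[n_fundable - from_queue : desired_cut])  # roll the remainder
    return funded, queue

# 100 hypothetical apps, budget for 18: 18 funded now, 6 roll into the next round.
funded, queue = run_round(list(range(1, 101)), deque(), n_fundable=18)
print(len(funded), len(queue))  # 18 6
```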

What I am unclear on in the Fang/Casadevall proposal is the limit to one application "per drawing". Is this per Council round per IC? Per study section per Council round per IC? NIH-wide? Would the PI be able to stack up potentially-meritorious apps that go unfunded so that they get considered in series across many successive rounds of lotteries?

These questions address their underlying assumption that a lottery is "fair". It boils down to the question of whether everyone is equally able to buy the same number of lottery tickets.
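To put rough numbers on that: if some fraction p of the meritorious pool is drawn each round, a PI who can keep an application in the hat for k successive drawings has a cumulative chance of 1 - (1 - p)^k of being funded. A toy calculation (the 40% per-drawing figure just reflects a 10% payline against a 25% pool, as in their example; everything here is illustrative):

```python
# Cumulative odds of at least one win across k drawings: 1 - (1 - p)**k.
# p = 0.4 corresponds to a 10% payline drawn from a 25% meritorious pool;
# the numbers are illustrative only.
p = 0.4
for k in (1, 2, 3):
    print(k, round(1 - (1 - p) ** k, 2))
# 1 0.4
# 2 0.64
# 3 0.78
```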

The authors also have to let in quite reasonable exceptions:

Furthermore, we note that program officers could still use selective pay mechanisms to fund individuals who consistently make the lottery but fail to receive funding or in the unlikely instance that important fields become underfunded due to the vagaries of luck.

So how is this any different from what we have now? Program Officers are already trusted to right the wrongs of the tyranny of peer review. Arguing for this lottery system implies that you think that PO flexibility on exception funding is either insufficient or part of the problem. So why let it back into the scheme?

Next, the authors stumble with a naked assertion

The proposed system would treat new and competing renewal applications in the same manner. Historically, competing applications have enjoyed higher success rates than new applications, for reasons including that these applications are from established investigators with a track record of productivity. However, we find no compelling reason to justify supporting established programs over new programs.

that is highly personal. I find many compelling reasons to justify supporting established programs. And many compelling reasons not to do so preferentially. And many compelling reasons to demand a higher standard, or to ban them entirely. I suspect many of the participants in the NIH system also favor one or the other of the different viewpoints on this issue. What I find to be unconvincing is nakedly asserting this "we find no compelling reason" as if there is not any reasonable discussion space on the issue. There most assuredly is.

Finally, the authors appeal to a historical example which is laughably bad for their argument:

we note that lotteries are already used by society to make difficult decisions. Historically, a lottery was used in the draft for service in the armed forces...If lotteries could be used to select those who served in Vietnam, they can certainly be used to choose proposals for funding.

As anyone who pays even the slightest attention realizes, the Vietnam era selective service lottery in the US was hugely biased and subverted by the better-off and more-powerful to keep their offspring safe. A higher burden was borne by the children of the lower classes, the unconnected and, as if we need to say it, ethnic minorities. Referring to this example may not be the best argument for your case, guys.

45 responses so far

Bias at work

A piece in Vox summarizes a study from Nextions showing that lawyers were more critical of the very same legal brief when they believed it had been written by an African-American.

I immediately thought of scientific manuscript review and the not-unusual request to have a revision "thoroughly edited by a native English speaker". My confirmation bias suggests that this is way more common when the first author has an apparently Asian surname.

It would be interesting to see a similar balanced test for scientific writing and review, wouldn't it?

My second thought was.... Ginther. Is this not another one of the thousand cuts contributing to African-American PIs' lower success rates and need to revise the proposal extra times? Seems as though it might be. 

22 responses so far

Thought of the Day

Apr 09 2016 Published under Fixing the NIH, NIH, NIH Careerism

"Uppity" is a fascinating concept when it comes to NIH Grant award. 

We know the sentiment applies to newer and younger investigators. I've heard countless reviewer and Program Officer comments in my day that amount to "let's not get too big for your britches, young un!"

I wonder how much of the Ginther effect is related to sentiments similar to "damn uppity [insert subvocalization]"? 

17 responses so far

MIRA Moaners

Mar 31 2016 Published under Fixing the NIH, NIH, NIH Careerism

Jocelyn Kaiser reports that some people who applied for MIRA person-not-project support from NIGMS are now complaining. 

I have no* comment.

____
*printable

31 responses so far

Reviewer mindset 

Mar 22 2016 Published under NIH Careerism, Peer Review

I was just observing that I'd far rather have my grants reviewed by someone who had just received a new grant (or fundable score) than by someone who had been denied a few times recently.

It strikes me that this may not be universal logic.

Thoughts? 

 Is the disgruntled-applicant reviewer going to be sympathetic? Or will he do unto you as he has been done to?

Will the recently-awarded reviewer be in a generous mood? Or will she pull up the ladder? 

25 responses so far

How you know NIH officialdom is not being honest with you

Mar 10 2016 Published under Anger, Fixing the NIH


18 responses so far

It's the pig-dog field scientists that are the problem

Mar 10 2016 Published under Anger, Fixing the NIH, NIH, NIH Careerism

But clearly the laboratory-based male scientists would never harass their female subordinates.

Field science is bad.

Lab science is good.

This is what the head of the Office of Extramural Research at the NIH seems to think.

8 responses so far

Never Ever Trust a Dec 1 NIH Grant Start Date: The Sickening

Mar 09 2016 Published under NIH, NIH Budgets and Economics, NIH Careerism

As I noted in a prior post, the Cycle I NIH grant awards (submitted in Feb-Mar, reviewed Jun-Jul, Council in Aug), with a first possible funding date of December 1, are hardly ever funded on time. This is due to Congress never passing a budget on time for the Fiscal Year that starts in October. Congress sometimes goes into a stop-gap measure, like a Continuing Resolution, which theoretically permits Federal agencies to spend along the parameters of the past year's budget. I find that the NIH ICs of my greatest interest are highly conservative and never* fund new grants in December. The ICs that I follow almost inevitably wait until late Jan, when Congress returns from its winter recess, to see if it will do something more permanent.

New Cycle I grants then start trickling out in Feb, again, typically.

This year one of my favorite ICs, namely NIDA, has only just issued new Cycle I grants** this week; they hit RePORTER today.

March friggin 9th.

Six new R01 awards. Three K01s, three K99s, one R15, one "planning grant" and three SBIRs.

Even this is just a trickle, compared to what they should be funding for one of their major Cycles. I anticipate there will be a lot more coming out over the next couple of weeks so that they can (hopefully?) clear the decks for the Cycle II awards that are supposed to fund April 1.

I pity all those poor PIs out there waiting, just waiting, for their awards to fund. I cannot imagine why NIDA chooses to do this instead of at least trickling out the best-scoring awards and the stuff they KNOW they are going to fund, way back in December***.
__
*Statistically indistinguishable from never

**You can tell by clicking on the individual awards: you'll see that they (the R01s, anyway) end Nov 30 or Dec 31 for the initial round of funding. These are Cycle I, not upjumped Cycle II.

***Some ICs do tend to fund a few new awards in December, no matter what the status of Congress' activity on a budget.

7 responses so far
