Do you always get about the same score on your NIH grant?

Feb 11 2014 | Published under Grantsmanship, NIH, NIH Careerism, NIH funding

This question is mostly for the more experienced of the PItariat in my audience. I'm curious as to whether you see your grant scores as being very similar over the long haul?

That is, do you believe that a given PI and research program is going to be mostly an "X %ile" grant proposer? Do your good ones always seem to be right around 15%ile? Or, for that matter, in the same relative position vis-à-vis the presumed payline at a given time?

Or do you move around? Sometimes getting 1-2%ile, sometimes midway to the payline, sometimes at the payline, etc?

This latter describes my funded grants better. A lot of relative score (i.e., percentile) diversity.

It strikes me today that this very experience may be what reinforces much of my belief about the random nature of grant review. Naturally, I think I put up more or less the same strength of proposal each time. And naturally, I think each and every one should be funded.

So I wonder how many people experience more similarity in their scores, particularly for their funded or near-miss applications. Are you *always* coming in right at the payline? Or are you *always* at X %ile?

In a way this goes to the question of whether certain types of grant applications are under greater stress when the paylines tighten. The hypothesis being that perhaps a certain type of proposal is never going to do better than about 15%ile. So in times past, no problem, these would be funded right along with the 1%ile AMAZING proposals. But in the current environment, a change in payline makes certain types of grants struggle more.
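
To make the arithmetic of that hypothesis concrete, here is a minimal simulation sketch. Every number in it is an assumption for illustration (a hypothetical floor around 15%ile, Gaussian review noise, paylines of 20 and 10), not anything derived from NIH data:

```python
import random

# Toy model of the "score floor" hypothesis: one type of proposal draws
# noisy percentiles centered on a floor of ~15%ile. The floor, noise,
# and paylines are all invented for illustration.
random.seed(1)

def fund_rate(center, noise_sd, payline, n=100_000):
    """Fraction of noisy percentile draws landing at or under the payline."""
    funded = sum(
        1 for _ in range(n)
        if max(1.0, random.gauss(center, noise_sd)) <= payline
    )
    return funded / n

for payline in (20, 10):
    print(f"payline {payline}%ile: funded {fund_rate(15, 5, payline):.0%} of the time")
```

Under these toy numbers, the same class of proposal goes from funded about five times in six at a 20%ile payline to about one time in six at 10%ile, with no change in the proposals themselves.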

44 responses so far

  • halcyon says:

    My first ever funded NIH grant as a junior PI got a perfect score. My most recent grant and the best I've ever written (IMHO) was not discussed.

  • Joe says:

    Mine are always different. Sometimes I'm right at the payline (wrong side or right side), sometimes I'm in the low single digits.

  • Anonymouse says:

    My applications at NIH get anything from triage to near-miss (my present funding comes from NSF and from a foreign peer-reviewed grant). Recently, I even got a near miss -A0 and a triaged -A1, after having duly addressed the reviewers' concerns!

    IMO the peer review at NIH is totally chaotic and geared toward distributing academic welfare rather than promoting innovation.

  • Wowchem says:

    It's all over the place, though I go to different study sections and have two projects. On one similar project I got triaged, then 1%. There's no way I can tell beforehand, so I just submit as much as possible.

  • anonymous postdoc says:

    Ok but honest noob question: when is a triage a sign that an idea is just really stupid, and when is it just chance? Did a triage on my A0 K99 mean that I shat the bed, or was it the idiocy of reviewers failing to recognize my young and innovative genius? Because I had been operating under the assumption that I shat the bed, and my resubmission reflects that. But you folks alleging that scoring is just random... well, it gives one pause.

    I accept that I am entering a career where I have to ignore feedback and be self-reinforcing to the point of obstinacy. But how obstinate should one be? Should I have been raised by people/a culture that made me think that I deserve things simply because I want them? Because that ship has sailed.

  • The Other Dave says:

    Like others are saying, my scores are all over the place. I can't predict at all how things will turn out. Proposals that I think are great, and which colleagues have read and think are great, get triaged. Proposals full of half-ass shit thrown together just to get something submitted do fine. It's like backwards world most of the time. Even scores for the same proposal can be remarkably inconsistent. The same proposal gets ones from some people and fives from others, for the same criteria. Even shit that doesn't change, like me and the institution, gets different scores from proposal to proposal. It's like they're rolling dice half the time.

    As a reviewer, I get it. Everything is relative. Your great proposal will suffer if the other couple in my stack happen to be better. Or your crappy proposal might seem fine if the other ones hugely suck. There is no external standard, especially among ad hoc reviewers.

    I have learned to just not obsess. It's a lottery ticket. And most importantly, I suck up to every potential reviewer that I meet. I am now mister 'everyone is awesome!', and it's helped more than any grantsmanship or real science.

  • The Other Dave says:

    @Anonymous: I have never ever gotten a significantly better score from following the advice of reviewers. Just fuckin close your eyes and throw the dart again.

  • Pinko Punko says:

    Scores are all over the place. I think the grants that can be great, but whose perceived impact is just not as high as that of work from major powerhouse or super-technical labs, are the ones in the biggest trouble in the current climate.

  • Joanne says:

    This is kind of a silly question. I mean, what do you think the answer is going to be? No one is ever going to get the same score every single time. Particularly when the percentile system is as borked as it currently is. A bunch of folks get a 2.0 (from the 1,2,3 or 2,2,2 "spread") and all cluster around the same percentile, and then a bunch of folks get closer to a 3.0 because they got a 2,2,3 or a 2,3,3, and they all cluster around the next percentile. There's no way you're going to get the same percentile every time or anywhere near it.
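
    To see the granularity concretely - a minimal sketch, assuming the preliminary score is simply the mean of three reviewers' integer scores from 1 (best) to 9, which is a simplification of the real impact-score and percentile-base calculation:

    ```python
    from itertools import combinations_with_replacement

    # Toy illustration: three assigned reviewers each give an integer
    # score from 1 (best) to 9; take the preliminary score as their mean.
    # Only a coarse set of means is possible, so applications pile up
    # on the same few values - and hence the same percentile bands.
    means = sorted({round(sum(s) / 3, 2)
                    for s in combinations_with_replacement(range(1, 10), 3)})
    print(means[:8])  # -> [1.0, 1.33, 1.67, 2.0, 2.33, 2.67, 3.0, 3.33]
    ```

    Only 25 distinct means cover the whole 1-9 range, so many applications necessarily share a value near the top.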

  • Evelyn says:

    Grant writer perspective: younger PIs have more random scores; more established PIs have more uniform scores. I think a lot of the review is affected by name recognition. Also, significance matters a whole lot. I have seen some mediocre grants get close to funding even though their approach was terrible, just because the reviewers thought they were asking an important question.
    Resubmitting triaged grants is a bad idea. The reviewers see where the grant was scored before, and I almost think they get blinders on when they see the ND on the first submission. I have had a handful of grants go from triage to funding, but none in the last two years.

  • lurker says:

    @ Pinko Punko. So, are you saying then that NIs/ESIs are largely fuckked (despite the supposed extra percentile pickups)? I can't get a straight answer from any of several POs as to whether ESIs are reviewed in a separate pile or the same pile as the rest of the "established" PI grants, but my guess is the latter, since my scores are completely scattershot and it's complete crap trying to "address" the reviewers' critiques.

  • Comradde PhysioProffe says:

    ESI R01s are definitely discussed in a separate pile at study section.

  • drugmonkey says:

    There's no way you're going to get the same percentile every time or anywhere near it.

    Then assertions* that "the NIH won't fund [basic, clinical, human, this, that, t'other] type of research" seem flawed. Such assertions seem to be saying that no matter how awesome a grant you write, there will be a floor on how good your score can be. I am exploring the idea, stipulating that this is true, that a moving payline over time could leave a particular score floor suddenly out of the money.

    *Note: I think the assertion is stupid on its face, and a few minutes of keyword searching on RePORTER can quickly dissolve any such claims.

  • dsks says:

    Well, mildly related... My first and only R01 went in when I was a research a$$ prof (yeah, what was I thinking?) and got 30%ile, with kudos for significance and the main criticisms being subtle experimental-design issues. I got a real job at a non-R1 institution and reassembled the grant as an AREA R15, thinking the assembled special emphasis R15 study section might dig the idea that I was giving good undergrads some experience at patch-clamp electrophys (hell, one actually got some recordings!). It bought me a ticket to triagesville on significance 🙁

  • Dr Becca says:

    I've gotten 3 scores so far, all have been within 8 pts of each other (overall priority score), so I've been fairly consistent. The one with the best score was funded, but I daresay it was a generous "pickup" during council.

  • Comradde PhysioProffe says:

    My grants have ranged from very low single digit percentiles to triage. My worst score ever was for a drug discovery R01.

  • Joanne Williams says:

    > Such assertions seem to be saying that no matter how awesome a grant you write, there will be a floor on how good your score can be.

    Ah, I see. I think this is true - the variance is largely in the "worse" direction. I feel like the biggest problem is how your research aligns with NIH's study section structure. Unless your research squarely hits the center of a study section's interests, you will struggle. To get funded now you have to excite the study section, and there is a lot of science that, while solid, is not exciting. And if your research is somewhat tangential to the main interests of the study section, you're also going to struggle to excite them. This is particularly the case these days, when you need three cheerleaders rather than one.

  • The Other Dave says:

    dsks: You wanna take that stuff to NSF. Undergrad research accomplishments give them boners.

    I think, based on my observations/experience, that NSF (at least the BIO directorate) values undergrad research more than NIH, mostly because many NSF reviewers are also from small undergrad research institutions. NIH doesn't tend to have reviewers like this because NIH reviewers are virtually all NIH-funded. NIH-funded people are often not in undergrad-oriented departments, and/or they don't have time to fart around teaching unproductive undergrads how to collect data without breaking shit. NSF, in contrast, will take pretty much any warm body as a reviewer. Which further increases the concentration of non-high-power people on review panels. Because high-power people have better things to do.

  • Chris says:

    dsks: The NIH panel I'm on always sees some R15s in the pile, although we are not specially assembled to review R15s (we have a vast majority of R01s and a few random R21s each time). We make a special effort not to judge R15s against the R01s; the scoring criteria are different, and we are looking specifically for that training aspect. BUT, the R15 still has to be answering some interesting question and making some contribution to the field. I know that this is HARD in the context of an undergraduate institution. And our R15s often get dinged for this ("this project will produce well-trained students that I'd like to take in my lab, but no one is going to care about the results"). But one round, an R15 got the highest score on the entire panel (1's all around) because it was a really innovative project that carved out its own niche, made some real contributions to the field (modest, of course, but real nonetheless), and had a fantastic training component. If you can strike this balance with an R15, you're golden (in our study section). The R21s, on the other hand, are pretty much screwed. I don't think I've seen one received well yet. Which is depressing. But that mechanism seems to be dying (if it's not already dead).

  • erickttr says:

    Chris - I spent a lot of time analyzing the R21/R01 success rates from my potential study sections & IC, and I came to exactly the same conclusion. If only I could convince the oldies at my institution that... yeah, writing six pages instead of twelve will free me up to work on your crap, but the R21 is still a waste of time. The risk plus effort outweighs the payout; it just isn't worthwhile. I had a higher success rate during a weekend playing blackjack.

  • halcyon says:

    ouch - just got triaged again. Two in a row now. Maybe I am getting consistent after all...

  • anonymous postdoc says:

    But but but but but......

    A spambot posted a comment on one of DM's posts from last year (I clicked the link to see what was being said, which is how I know it was a bot), and therein I reread the following comment from The Other Dave:

    "A few years ago I reviewed a proposal for NSF, and it was bad, but feeling unusually helpful I decided to lay out a detailed description of what I think would have been ideal. About a year later, I got the revised proposal to review. They had adopted *every single one of my suggestions*. It was if I had written the proposal. I gave it the highest score. How could I not? They got funded."
    http://scientopia.org/blogs/drugmonkey/2013/05/10/your-response-to-prior-review-of-your-nih-grant-application-in-one-easy-sentence/

    Contrasting this with his comment above, I remain confused. Is the general sense that revising a grant to appease the reviewers is pointless purely because we cannot guarantee the same people will review it? Is there any room for a contribution of confirmation bias - that we expect to receive unfair reviews more than fair ones, and so remember the unfair ones more?

    I do appreciate TOD's reply to my question above, and am not trying to set up a "Gotcha!" Serendipity simply led me to see that older comment while thinking of this thread.

  • Dr Becca says:

    AP, I think there's a little bit of a balancing act in knowing when to revise according to suggestions and when to leave things alone. When you go through your summary statement, you'll find that sometimes you say "OK, fair enough. That's a real flaw that I agree could be repaired," and so you do those things; other times you'll be like "WTF are they talking about?" and in general you can chalk those up to idiosyncrasies of the reviewer - but talk to your colleagues about it. Even those you can sometimes address without changing anything; just add a line: "we have chosen to take x approach instead of y because ..."

    And to address your question above, a triaged K99 does not mean the project sucks. K99s can be triaged for bad training plans, a poor record of productivity as a post-doc (SO I HEAR), lots of things that have nothing to do with the science itself. My K99 was triaged and I just published my first paper on Aim 1.

  • The Other Dave says:

    Anonymous...

    If only more reviewers were like me!

  • imager says:

    My first R01 as a new PI got a 15%ile, just on the borderline. I resubmitted it as an A1 to be safe in case it didn't make it - and the resubmission, answering all the (few) concerns from the prior submission, wasn't discussed. The A0 ultimately got funded, and since then I am pretty convinced that randomness - and luck of the draw - is a large factor.

  • dsks says:

    TOD,
    "dsks: You wann take that stuff to NSF. Undergrad research accomplishments give them boners."

    Trying that route as we speak, so we'll see how it fares.

    Chris,
    Mine went to a special panel for Neurosci R15s, and I believe a good fraction of the reviewers may have been primarily funded through that mechanism. I'm trying a different project at a different institute this time around, so we'll see what happens with that.

    Imager
    "My first R01 as new PI got a 15%, just on the borderline. I resubmitted it as an A1 to be safe if it would not make it - and the resubmission, answering to all (few) concerns of the prior submission wasn't discussed. The A0 ultimately got funded and since I am pretty convinced that randomness - and luck of the draw - is a large factor."

    These anecdotes are just killing me.

  • imager says:

    How about the scoring of a friend of mine (an R01): the A0 was at 32% (or in that range), the A1 at 8% - not funded.

  • DrugMonkey says:

    Can you tell us which IC is at a payline south of 8%ile? That sounds brutal.

  • Ass(istant) Prof says:

    RE: dsks and taking it to NSF

    I have one project that could go either way--NSF or NIH. It was originally a K01 with good reviews, but I think it suffered by a dubious institutional commitment. As an R01 it was triaged then discussed and not funded. The same project with NSF gets Excellent and Very Good ratings, yet languishes at 'medium priority.' The PO at NSF affirmed that reviews were great, but NIH 'has a big footprint in that [very broad] area' so we aren't that interested, even if it is a basic cellular mechanism.

    From what I hear from colleagues who are NSF panelists, paylines are even tighter over there. Still, in my experience, the reviews are pretty insightful and well thought out.

  • imager says:

    That was NCI. 7%, and everything above that only at NCI's discretion on a case-by-case basis. It's a bit better now; it's at 9% currently. This was 2012 or early 2013, I don't remember exactly (it wasn't my grant; I was a co-PI as a collaborator).

  • imager says:

    And it also has a lot to do with the quality of the PO and their interest in the topic.

  • […] A1 score, unfundable, after massive efforts to respond to A1 reviews.  (@drugmonkeyblog recently wrote about this phenomenon)  It’s deeply ironic to me that the “deal-breaker” criticism of each grant was neutered by […]

  • […] this appears to confirm much of what was discussed in the comments section of a recent post by FunkyDrugMonkey where priority scores seemed to be all over the place for most investigators […]

  • Susan says:

    My first R01 got about the same score as the A0 version of the last grant I ghost-wrote as a postdoc. Since that was funded on revision, I will hope I can be consistent. In other news, my first R01 was not triaged!

  • DrugMonkey says:

    Congratulations!

  • Anonymouse says:

    So, I guess we have a consensus: the review process is chaotic. This provokes a fundamental question: why do we need resubmissions at all? Why should we waste our time on the garbage reviewers write, and clog the NIH with regurgitated applications?

    1. Get rid of the -A1 AND triage at the same time. Every application needs to be discussed, to increase the chance of eliminating absurdities from individual critiques and to provide the applicant some meaningful feedback. I can live with a negative but meritorious opinion. But if my summary statements read like Alice in Wonderland (example: viral entry inhibitors modulating an extracellular target. Major weakness? "The inhibitors may not be able to penetrate the cell membrane"), then I am forced to resubmit ad nauseam.

    2. Create a meaningful appeals process.

    3. Perhaps, just perhaps, get rid of peer review altogether. Hire competent program directors and let them decide, instead of allowing the members of the club to play funding politics within the club and turn the peer review process into a circus.

  • drugmonkey says:

    Every application needs to be discussed, to increase the chance of eliminating absurdities from individual critiques and to provide the applicant some meaningful feedback.

    1) In point of fact, the written critiques are rarely changed significantly after discussion. It happens, but it is rare and mostly of a minor nature. So I'm not sure how "meaningful feedback" would be helped by this.

    2) When all three assigned reviewers more or less agree, the panel discussion is not going to "eliminate absurdities". It is only when there is significant disparity among the three that the discussion is really important and has a chance of making a meaningful change in the "absurd" critique. And on good panels this happens already: significantly disparate scores trigger extra scrutiny and often get apps dragged up out of triage for discussion.

    Admittedly, this has seemingly been happening less lately. I blame the usual, i.e., the shrinking payline, in large part. There is no point in discussing something so far from the funding/not-funding zone; the time is better spent on those circling the hot line. There is also a structural change which I object to...but it is going strong. Applications are now discussed in the order of their preliminary scores. I think this has a quelling effect on discussion. In the old days, they were grouped more or less without regard to score, so that, for example, those assigned to a single Program Officer were reviewed in close succession, or by other procedural efficiencies. I liked that review order much better for the way it facilitated meaningful discussion across the entire meeting.

  • drugmonkey says:

    Create a meaningful appeals process.

    What is that supposed to fix? Everyone who thinks they got screwed is re-reviewed, and they end up in the same damn order anyway. Except now the "absurd" comments have been rounded off so the applicant feels better, I guess?

    Hire competent program directors and let them decide,

    The propensity of Program Officers to be clubby with "their" established investigators would shock you, I surmise? The fact is, we need balance. We need a multipart system of evaluation to keep any one subset of it from becoming too powerful. Peers and professional program staff each have their strengths and weaknesses. Together, perhaps they minimize each other's weaknesses.

  • JohnM says:

    1999: hired
    2003: R01: 18%ile; funded, just (study section #1)
    2008: R01 renewal #1, first try: 1.2% (study section #1)
    2012: F33: perfect score (study section #2)
    2012: R01 renewal #2, first try: unscored/triaged (study section #3)
    2013: R01 renewal #2, second try: unscored/triaged (study section #3)
    The project currently has no NIH support.

  • Anonymouse says:

    "What is that [a viable appeal process] supposed to fix?"

    I will give you an example. I submit an application concerned with viral entry inhibitors, and the review panel expresses concerns that my compounds may have problems with... crossing the cell membrane (a bit of information for non-life-scientists: viral entry inhibitors act outside the cell, so the concern is as intelligent as bashing an airplane project on the premise that it is not a submarine). This is not even Kafka. This is Lewis Carroll and some fucking Wonderland, by no means representing any even half-legitimate "difference of scientific opinions".

    Now back to your question what a meaningful appeal process is supposed to fix. As a result of the appeal, reviewers who manufacture such nonsense should be eliminated from the panels. Kicked out, blacklisted and pilloried on the NIH web page. Naturally, only if you are concerned about the integrity of the peer review process at NIH.

  • drugmonkey says:

    JohnM: and were you pursuing other R01 grants at the same time? Or depending only on renewing?

    Anonymouse: everyone has a story of ERRorZ of FACT on review. I find it rare that this is the difference between funding and not when all is said and done. I AM curious about your seeming assertion that there are bad reviewers that are uniquely responsible for these situations. For banning to work, they would have to be the sole source of FlAWeD ReVIEwz and never contribute any good ones.

  • Anonymouse says:

    1. Banning - bad and good deeds are not supposed to balance out (at least not in science). If they did, having published 20 good papers would be an affirmative defense in case the 21st is fraudulent. The first fraudulent review should earn a warning; the second, a ban, blacklisting and pillorying.

    2. Uniquely responsible - no, I am not saying that ignorant/dishonest reviewers are uniquely responsible. The low funding levels are responsible too; it is only in combination with low ethical standards on the panels and indifference/timidity on the part of POs that these factors create an explosive mix in which the fraction of dishonest reviewers routinely makes the difference between funding and not funding. Well, when taxpayer funds are scarce, what is available should be distributed with particular attention to the integrity of the process, IMO. So - kicking out, blacklisting and pillorying.

    3. "everyone has a story of ERRorZ of FACT on review" - so, IMO, the true "improving the peer review" at NIH should focus on eliminating these stories, rather than on the traditional antics about the criteria, how points are calculated etc.

  • jdmuuc says:

    I've only submitted an A0 and A1 for a K-award so far; score dropped 5 points.

    What's the read here? The established PIs/my mentors, who thought my resubmit was a big improvement, just shrug and say c'est la vie - sometimes this shit happens and it doesn't have much bearing on my grantsmanship. Me? I'm trying to swallow that pill but it's really disheartening to be staring at that new, lower score every time I check to see if the summary statement got posted.

  • La Rochefoucauld says:

    blinded reviews would fix many problems
