Thought of the Day

Feb 11 2015 Published under Grant Review, Grantsmanship, NIH Careerism

I hate when I review grant proposals that are good, but clearly have been made pedestrian and conservative through the school of hard knocks. There is so much awesome that could be done by these people. It is so clear to me what the really high impact version of this grant should look like. (Not having any illusions about my own unique brilliance, I assume they could see it too.)

But the review realities batter PIs down into a defensive crouch, worried that if they step too far past their Preliminary Data or established expertise they will get crushed.

Because, of course, they would get crushed.

Sometimes I wish I were the Boss of Science more than other times.


  • dr.potnia.theron says:

    It is a sign of the decay of civilization when being safe is more important (to getting funded) than being clever, or good, or interesting, or bright, or able to move the field.

    Everyone (reviewers, PO's, etc.) gives huge lip service to innovation and transformative science. But in the sad reality (see previous arguments about funding Greybeards), people who do think that way can't get funded for it, and people who can't think that way do get funded.

    Fuck, I'm depressed now.

  • dr_mho says:

    That's an interesting point that, as a relatively junior (4 years) PI, I have gone back and forth about. Most of the Foundation Awards that are common for juniors to apply for require much less preliminary data and want fairly grandiose predictions. In contrast, the NIH is much more conservative. Learning to balance one's reach against the funding source is certainly a steep learning curve in my experience.

  • Pinko Punko says:

    Yes. The review process gets into heads, and instead of injecting reality coupled to vision there is this artificial structure imposed. I think strong SROs and strong study section chairs could set the tone here in terms of reviewing; pushing back against stock critiques is one way to throw off this yoke. Again, reviewers might not feel like they need to go to the tenth tiebreaker on grants if paylines were better. It introduces a pernicious mentality.

  • Morgan Price says:

    DrugMonkey -- if you were science dictator, what changes would you make?

  • DrugMonkey says:

    I've been blogging eight years this week dude. Start reading....

  • DrugMonkey says:

    "Going to the tenth tiebreaker" is an excellent way to think about why review gets mired in the minutia and Stock Criticisms. Thanks for coining that.

  • Grumble says:

    "I hate when I review grant proposals that are good, but clearly have been made pedestrian and conservative through the school of hard knocks. There is so much awesome that could be done by these people. "

    What you are saying is that the proposal sucks because the system sucks, and you pretty much know that the scientist submitting the proposal DOESN'T suck. As a reviewer, you face the same conundrum as that raised in the other thread: do you give a crappy application from a stellar scientist a pass? No, because "fairness"? Or yes, because you're convinced that this scientist will do better science than anyone else represented in your stack of grants?

  • drugmonkey says:

    "no, because fairness" and "Yes because you're convinced that this scientist will do better science than anyone else represented in your stack of grants" are essentially the same thing. Grant review under the NIH system explicitly pits proposals against the other ones that happen to be under review in that round/study section. There is a little bit of tension with regard to their stupid callibration charts that attempt some sort of universal standard but the facts are simple. Program expects reviewers to advise them on grant quality to make funding decisions. They have a set of applications under consideration at each Council round and at best try to even things out across an entire Fiscal Year. The money doesn't roll over into the next FY and I see very little in the way of, say, funding nothing from Cycle I because it all sucked compared with the Cycle III apps.

    Correspondingly, any reviewer should only be telling them how the present application is likely to stack up within this round and certainly not beyond the current Fiscal Year. And since the only comparisons you have to go on* are the applications submitted this round, that is how you should be reviewing.

    *true, there are some other considerations for an appointed member of a standing study section that modify this orthodoxy. I say they shouldn't but it is an argument to be had.

  • Pinko Punko says:

    But how to deal with the fact that percentiles come from the last three meetings? I think that if an application is not as good, you can't just give it a 2 because of a bad batch; there are always previously scored grants in the mix setting the bar, so that the next section gets an accurate percentile for grants that maybe are viewed as better.

    I really get upset that people in the room might reflexively go to a stock critique, yet in their daily lives they understand how science really works. Many or most proposals could be better with more preliminary data, but there is a bar past which experiments become worth doing and justified beyond whatever level of prelim data is there. The prelim data argument is part of the churning, in that applications that have bounced are likely to just keep coming back with more (if those labs still have funding), and new apps will then seem increasingly data-light in comparison. I think of some demands for preliminary data as unfunded mandates: if you want to know the answer to that, then maybe that justifies some funding. This is another reason why I favor smaller grants designed for supporting generation of those critical data; otherwise study sections are asking for essential components that maybe only some mythical funding entity is paying for. I will always support judicious and critical application of preliminary data, but an understanding of where the line is might enable review support for ideas beyond dotting "i"s and crossing "t"s.

  • Busy says:

    Daring papers get rejected; incremental, safe papers get accepted. This is well documented and matches my experience. My most cited, most daring papers have a higher rejection rate than the next tier of incremental, meh results.

  • Grumble says:

    "any reviewer should only be telling them how the present application is likely to stack up within this round and certainly not beyond the current Fiscal Year. And since the only comparisons you have to go on* are the applications submitted this round, that is how you should be reviewing."

    That's fine, but it doesn't help answer the question, "how do I score a sucky application from an awesome scientist?" That awesome scientist could be one of the ones you think will do the best, most impactful work of all the PIs with applications in this round, no matter what is in her application. So I'd be tempted to give her "Investigator" score a lot of weight and forgive the crappy application. But I've seen huge arguments about this sort of thing at study section.

  • DrugMonkey says:

    Why should that person get the benefit of the doubt? Maybe a hungry Young Gun would be the better bet.

  • DrugMonkey says:

    Point being that "I know who will do good work, the application is meaningless" *is not the system* and you are being asked to review the document and the project. Not the history of that investigator in an HHMI-style system.

  • Davis Sharp says:

    If you state why you're giving a BSD's grant an overall impact of 2 when the other two reviewers are giving the grant a 6, then the rest of the panel can decide whom they agree with. That's allowable, right? The overall impact score is not an average of the five criterion scores. But you risk your own reputation as a reviewer (of course, that may be moot, as the SRO will not ask you to review again).

    OTOH, a grant with those scores is likely to be triaged, and if you rescue it because BSD, then you will look like a clown.

    Score a sucky grant as a sucky grant.

  • Grumble says:

    "I know who will do good work, the application is meaningless" *is not the system* and you are being asked to review the document and the project. Not the history of that investigator"

    Sure, it's the system. The guidelines for the Investigator score specifically state, "If [the PIs are] established, have they demonstrated an ongoing record of accomplishments that have advanced their field(s)?" As a reviewer, I get to use my own judgment as to whether to pay more attention to this criterion vs. the others in my final score.

    And Davis Sharp, I'm not even talking about BSDs. I'm talking about scientists whose work I very much respect and who consistently put out solid and/or provocative stuff. Not all of them are BSDs - in fact, most of them aren't. The few times I've seen reviewers obstinately insist on giving these sorts of people good overall scores despite grants that could have been better, the study section doesn't laugh at them. Most of them respect these sorts of applicants a lot and are pained by having to decide within the wide range of scores that the three reviewers usually end up with. Not everyone - there's usually an argument, typically continuing past the meeting and into drinks...

  • drugmonkey says:

    I certainly agree that panel members agonize over these scoring issues more than may be apparent to the applicants.

  • Comradde PhysioProffe says:

    "As a reviewer, I get to use my own judgment as to whether to pay more attention to this criteria vs the other ones in my final score."

    Yes, you do get to use your own judgment even if it violates the norms of the study section. But if you are an ad hoc, you won't be asked to review again, and if you are a regular member, your judgments will be ignored by the rest of the panel. Unless you are persuasive enough to move the needle on the norms of the study section. And don't forget, at the end of the day, only 10-15 percent of the grants in front of you are going to be funded, but probably 30-40 percent "fairly deserve" awesome scores for one reason or another.

    If you give awesome scores to all of those applications, you may as well stay home, because you are wasting your time. To see why, consider this extreme example: if a study section gives 1s to 20 percent of its applications, then the best percentile any grant can get in that study section is roughly the 20th, which is outside the payline.

    So as a conscientious study section member, you have no choice but to distinguish between the top five percent, top ten percent, top fifteen percent, top twenty percent, and so forth. The *only* question is how to do this. Since the top third of grants are generally proposals for awesome science by outstanding PIs, reviewers fall back on "grantsmanship" and other superficially "unfair" criteria. But it would truly be unfair *not* to use these criteria.

    Anyone who has served on a study section even once in the last decade would understand all of this and refrain from "it's an OUTRAGEOUS BREACH that I received an unfundable score on my BRILLIANT APPLICATION and a clear sign that the PEER REVIEW SYSTEM IS COMPLETELY BROKEN".
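CPP's percentile arithmetic is easy to check. Here is a minimal sketch, assuming the commonly cited NIH conversion percentile = 100 × (rank − 0.5) / N and assuming tied applications share the worst rank in their tie group; real percentiles are also computed against a study section's last three meetings, which is simplified to a single round here:

```python
# Hypothetical check of the "everyone gives 1s" example. Assumes the
# commonly cited NIH conversion percentile = 100 * (rank - 0.5) / N,
# with tied applications sharing the worst rank in their tie group
# (the tie-handling convention here is an assumption, not NIH policy).

def percentiles(scores):
    """Map each overall impact score (lower is better) to a percentile."""
    n = len(scores)
    ranked = sorted(scores)
    result = []
    for s in scores:
        # worst (largest) rank among applications tied at this score
        rank = max(i + 1 for i, v in enumerate(ranked) if v == s)
        result.append(100 * (rank - 0.5) / n)
    return result

# 100 applications; the section hands out 1s to the top 20 percent.
scores = [1] * 20 + [5] * 80
print(min(percentiles(scores)))  # 19.5: even a perfect score lands near the 20th percentile
```

Under an average-rank tie convention the tied grants would land nearer the 10th percentile instead, but the conclusion survives either way: the section has told program nothing about which of those twenty grants to fund with money for only ten or fifteen.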

  • qaz says:

    The point brought up about bringing sucky grants by old PIs out of triage reminds me of something I've noticed over the last few study sections that I've been on: this tendency ("this grant sucks, but we should give the PI credit" *) has become noticeably less likely to occur. In fact, the more I think about it, the more my anecdata suggest that this has been a transition from the old guard to the new. (Of course, that old guard only said it about their colleagues who were also old guard.)

    Maybe there's an extent to which the process is changing and we're not giving people a pass just for being there anymore. So maybe the old guard is correctly perceiving that the rules are a little different now. **

    I'm curious to know how often people are still seeing that "give the old PI a pass on a really sucky grant" situation.

    * Note: I'm talking about really sucky grants that got passes, not good grants with a few flaws where people say "that person will handle that flaw - just remind them of it" or good grants lacking preliminary data where people say "that person will be able to get that done, I'm not worried about it". That still happens a lot. But that also happens for anyone with an established lab.

    ** As someone who got really burned by this over the years, my attitude is F--- them. It's time they played by the same rules as everyone else.

  • MoBio says:

    @QAZ

    The last time I witnessed this was many years ago--though no one on the committee argued that we should not give him/her an outstanding score. Just a collective dismay that the grant was less well written than it could have been.

    (I generally review something every cycle (sometimes multiple ad hocs, as a regular and so on) and have done so for more than 20 years so this is a lot of grants to remember this particular detail about).

    I remember everyone agreed that the proposed studies were amazing and that they would advance science.

    We were right.

  • Grumble says:

    @CPP:

    "But if you are an ad hoc, you won't be asked to review again, and if you are a regular member, your judgments will be ignored by the rest of the panel. "

    Wrong, and wrong. At least in my experience. Although, admittedly, I have not been as strident in my advocacy of certain grants as I would have liked to be - precisely because I knew, as you do, that the rest of the panel wouldn't buy it. I'm making an argument in extremis, though, to see where it goes.

    "But it would truly be unfair *not* to use these criteria."

    Why? Your argument is basically this: NIH sets up a set of rules by which everyone presumably plays, so it's not fair if someone plays outside the rules and gets away with it. But one of NIH's rules is that as a reviewer, I get to choose how to weight the various criteria. So actually, I am playing within the rules.

    But, but, but... what about the applicant with a good grant that doesn't get funded because I somehow convinced the panel that my favorite scientist is the bomb despite her shitty application? Well, what about him? Maybe he should have produced some better science in the past so that I would have been impressed by his record, and convinced that he deserves to be funded more than the other 99 applicants, rather than spending all his time dotting the i's and crossing the t's on his beautiful grants.

    I believe that where scientists are concerned, there is no better indication of future performance than past performance. I am not interested in beautiful grant applications. I'm interested in beautiful science. So the only logical alternative for me as a reviewer is to weight Investigator more than any other criterion.

  • E rook says:

    "I believe that where scientists are concerned, there is no better indication of future performance than past performance. I am not interested in beautiful grant applications. I'm interested in beautiful science. So the only logical alternative for me as a reviewer is to weight Investigator more than any other criterion."

    This is totally depressing. As a NI/ESI, on soft money, submitting every cycle, this makes me want to pack up and pursue my dream to be a barista in Banff, hitting first chair every morning. My "track record" is my past supervisors' track records. My prelim data come from my post doc and from scrounging together $ for side projects and small pilot grants. My pubs are half-finished preliminary projects (from those pilot grants) thrown out to meet the pub rate required to keep my job. How the hell am I to compete with a pile of applicants with track records older than me? Why can't you just evaluate the GD document in front of you? I put A LOT of effort into making it crystal effing clear, compelling, at the 8th grade reading level, covering every single contingency, not being underwhelming, not being overambitious.... And you tell me that the single biggest indicator for you is a standard that is impossible for me to meet, no matter how perfectly I put the app together?

  • Grumble says:

    No, that's not what I'm telling you. Personally, I don't evaluate early-stage investigators the same way I evaluate mid- and late-career PIs. I pay far less attention to "past results" precisely because where and what and how you publish as a student and post-doc don't necessarily indicate what sort of PI you will be - it's a totally different job.

    That, by the way, is also in accord with what the NIH wants, as indicated by their efforts to boost ESIs' scores. Implicit in their efforts is the idea that reviewers consider past performance when they review senior investigators' grants, which is why they tend to get better scores than junior investigators' grants. It's interesting that NIH combats this not by saying "don't use past performance as a criterion," but rather by doing things that work around it (e.g., using a different payline for ESIs).

  • toto says:

    I'm starting to warm up towards the idea of a semi-randomized grant review process.

    In the first stage, grant reviewers perform a large-net, in-or-out selection (say, "pick twice as many proposals as can actually be funded, end of story"). Then you award the actual grants by random selection within this subset.

    It should reduce all kinds of perverse incentives, including the "pedestrianization" mentioned in the post. It should also attenuate the effect of personal reviewer biases - to a point.

    Also, if the NIPS review-consistency experiment is any guide, reviewing already has a pretty large random component anyway, so we would not be losing much signal.

    One problem is that it might encourage endless resubmission of the same grant in an attempt to "get lucky".
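toto's two-stage scheme is concrete enough to sketch. Here is a toy version under the stated assumptions (shortlist twice the fundable count on merit, then draw the awards by lottery); the function name, the stand-in merit scores, and the 2x factor are illustrative, not any agency's actual procedure:

```python
import random

def semi_random_awards(proposals, review_score, n_fundable, seed=None):
    """Stage 1: merit review keeps the top 2 * n_fundable proposals.
    Stage 2: a lottery draws the actual awards from that shortlist."""
    rng = random.Random(seed)
    shortlist = sorted(proposals, key=review_score, reverse=True)[:2 * n_fundable]
    return rng.sample(shortlist, n_fundable)

# 100 applications, budget for 10 awards -> 20 shortlisted, 10 funded.
apps = [f"R01-{i:03d}" for i in range(100)]
scorer = random.Random(0)
merit = {a: scorer.random() for a in apps}  # stand-in review scores
winners = semi_random_awards(apps, merit.get, n_fundable=10, seed=42)
```

A grant at the shortlist margin then only has to clear the in-or-out bar rather than win a tenth tiebreaker; the resubmission problem toto raises could be handled by capping how many times the same proposal may enter the lottery.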

  • E rook says:

    Grumble- my thanks for your response.

  • Busy says:

    I would be very much in favor of a system where every reasonable proposal in year one gets a modest startup fund for two years. Thereafter the grants are based on previous success (last 5 to 8) with bigger and bigger amounts in play. Anyone could move up if they suddenly start hitting the jackpot repeatedly. This would free scientists from becoming grant proposal writers, which is all the present system achieves.

  • Davis Sharp says:

    @Grumble: Critique the grant and use the "Additional Comments to Applicant" section to nicely state "You're an outstanding scientist, but this grant sucks." Future reviewers will see your comments (unless it's a one-time FOA) and how the PI responded. That, IMO, is fair.

    @Busy: The R21/R34 does what you describe, but (1) progression from the two-year, low-budget stage depends on program officer evaluation, and (2) there will be a budget bottleneck two years down the road and, despite your best intentions, you will not have solved anything.

  • Busy says:

    Actually, the system I propose is in use in other countries. It doesn't increase the grant amounts overall, but it does reduce the amount of unnecessary grant submitting.

  • Busy says:

    P.S. This might sound hard to believe, but in many countries scientists apply for a single grant renewal every few years, with their lab costs fully covered for that period.
