NIH grant applications are not competing with the reviewers!

Aug 18 2015 Published by drugmonkey under NIH Careerism

So misguided. Understandable frustration...but misguided.

Think of it this way- do you dismiss Olympic judging of diving or figure skating because the judges can't do that themselves? What about the scoring of boxing?

Your competition is not the judge. It is the other participants in the event that stand between you and glory.

In NIH grant review, that means the other applications that have been submitted.

31 responses so far

  • Pinko Punko says:

    I agree, with some caveats.

    Recent experience suggests an increasing gap between BSD grants and everyone else. I'm seeing reduced effort in the "everyone else" pile of reviews. Reviewers can choose to engage or not. But it is hard to read the tea leaves when, as we have discussed here many times, the reviewers say things that aren't the real drivers of their "meh"- so you can get boilerplate language about how awesome the proposal is, then contradictory language in another section. The goal is to get them excited, but the bar for excitement, I think, is warped by an increasing emphasis on guaranteed immediate impact.

    The RFI is now closed, but the NIH strategic goals mentioning basic research and long-term impacts that emerge long after the work is completed seemed sort of laughable, given that there is no framework for peer review to engage with that sort of science.

  • Saban_lab says:

    international reviewers!!!!

  • lurker says:

    Apparently the European Research Council uses some reviewers in the US, and our own NSF uses international reviewers (i.e. Europe). And then the expectations become even more warped, with either side not having any fucken clue how to rate the metrics and what it would mean. My anecdata: a European competitor came up to me at a conference, said he reviewed my NSF grant, really thought it was great, and said he gave it an Excellent rating. Well, on the NSF rating scale, a grant needs to be Outstanding to be funded at these <10% paylines, because the other 11-30% of Excellent is just a whiff above "meh"! Thanks, NSF PO, for 1) picking a competitor who would have clearly benefited from seeing my ideas, so major COI, and 2) picking a European w/o a real clue about the US rating system. WTF.

    No, I'm not blaming the judges here, and DM is right that it's the tournament style of too few prizes for too many players that is the crux of all this angst. But I do blame the administrators of the system: the POs are overwhelmed and not making the right calls in setting up the competition to be as fair as possible. They allow too many cheaters to game the system- double-dipping on grants, or the equivalent of doping, fabricated grant results. Not as bad as FIFA, but IOC-like in its fecklessness.

  • Pinko Punko says:

    Lurker: NSF scale is: Excellent, Very Good, Good, Fair, and Poor

  • drugmonkey says:

    "Allowing too many cheaters to game the system, double-dipping on grants"

    Would you care to elaborate on these charges?

  • How exactly have these dudes determined that any reviewer has held any grant to a bar that they haven't met themselves? Have they seen any reviewers' grants? This is just basic sour grapes: REVIEWERS WHO DON'T LIKE MY GRANTS MUST BE HYPOCRITES BECAUSE MY GRANTS ARE MAXIMUM BRILLIANT!!!!!!!!!!!!!!!!

  • drugmonkey says:

    Well, traditionally they are judging the scientific accomplishments of the reviewers, not their grant applications...

  • physioprof says:

    So what's that even supposed to imply? Reviewers are required to give a 1 to any grant submitted by a "more accomplished" applicant? Or all reviewers who are less accomplished than any single applicant each cycle are kicked off the study section? I mean this sounds all tough and vertically ascending, but operationally, what's really being proposed?

  • Pinko Punko says:

    Everybody needs to blow off a little steam- especially if reviews seem lazy or arbitrary. Maybe the grant was terrible. I'd need to see it (joking!). That being said, I've seen grants that are just OK and have experienced the frustration of the applicant who probably did spend a lot of time on the proposal. I think most of us have been there. But the world is very different now than it was 5-6 years ago. I have also seen and experienced incredibly lazy reviews- reviews that I would be embarrassed to write or be associated with.

  • physioprof says:

    Yeah, exactly. Blowing off steam. So why is SnoozeMonkey reposting this twitter crappe on his shitty blogge?

  • damit says:

    Well, I can only say that during my time on SS, I saw a few reviewers who were: 1. unfair in asking applicants to meet standards they did not meet themselves, and 2. flat-out nincompoops who should not have been there.

    And everyone knew why they were there.

    Demographics. If a reviewer lets the SRO look all progressive by checking all the boxes, including geographic location, they are prized.

  • drugmonkey says:

    What does that even mean damit? They aren't asking anyone to "meet a standard", they are evaluating the relative merits of the applications in front of them. The other applicants are setting the "unfair" standard.

  • physioprof says:

    This kind of applicant delusion would be ameliorated by service on study section. It doesn't mean fucke all if the reviewer is applying a standard she couldn't meet herself. If she is applying that standard equally to all applicants and using it to rank order their grants, then everything is totally fair.

    This is what applicants who have never served can't understand: it's not like manuscript review. It's not just your grant against the reviewers. It's your grant against all the other grants under review.

  • Pinko Punko says:

    I've seen awesome, insightful review for grants scored all over the scale- reviews that will help the applicant and the science. I've seen lazy, embarrassing review. I think the latter is increasing because of feelings of uselessness in the reviewer- the idea that the time spent is not worth it because most grants will not be funded. I don't like it, but this is what I perceive.

  • drugmonkey says:

    Grant peer review is not there to help the applicant or the science. It is there to help Program decide what to fund.

  • Pinko Punko says:

    DM, I know this. You have made that point in the past. I have a different approach, and one of the reasons for it is that I think my critiques are likely to be less generic, and more convincing and engaged. How does a logically inconsistent review larded with stock critique help Program? It could actually be correct, but if it is so generic, how can it be compelling? Furthermore, one way to attempt to minimize bias would be to show equal engagement with all grants. CPP would argue against this because it might mean apparent time wasted. I think respect for applicants in the form of an engaged critique is one way to act against our current system. How could I meaningfully commiserate with a devastated colleague about the review process unless I give my all when on the other end of the process? This is a different view perhaps, but it is the way I navigate a stressful enterprise.

    A blunt, short, but specific and incisive review to Program is still more likely to be useful to an applicant than one where the only thing useful to Program is the number. The latter shortchanges both Program and the applicant, and can obscure a lot of bias, unintentional or otherwise.

  • Juan Lopez says:

    SnoozeMonkey, that's funny.

    Damit, please elaborate on this part: "And everyone knew why they were there.
    Demographics. If a reviewer lets the SRO look all progressive by checking all the boxes, including geographic location, they are prized."

    Who do you think doesn't belong in SS but for the SRO to check boxes and look progressive? Women? Latinos? Small state U?
    That was not funny.

  • damit says:

    Yeah... I knew I was setting myself up for some PC backlash.
    I did not say that all reviewers of (fill in the group) should not be on SS.

    What I said is that, over time, I saw people who gave wacky reviews, and in most of those cases it was clear why they were there. And yes, in my estimation, many of those didn't have the intellect, judgement, or level of accomplishment to justify their reviewing other people's programs.

  • Ola says:

    IMO, a factor that doesn't get enough attention (from CSR leadership) is the variability among SROs. On my study section, the SRO is a former scientist who was well known in the field before leaving academia. They hold a teleconference (actually 2 or 3) in advance of every single review cycle, to go over the rules everyone should know but seems to forget (such as "you can't score a 3 and then not leave any comments in the box"). By contrast, I know of others on study sections who have never had a single SRO call-in before the meeting, ever. I've also seen colleagues' pink sheets riddled with "scored 4 for investigator but nothing listed in weaknesses". That kind of shittio just doesn't happen at our SS - the SRO won't let pink sheets go out the door, and will often email over the weekend after the meeting to ask for clarity on a review comment.

    Re: the original comment, "lack of grantsmanship" usually means something real simple - you made the figures too small to read, you misnumbered them, you used the wrong font (thereby allowing you to squeeze 13 pages of information into the 12 page limit), you forgot to actually list a hypothesis, you only had 1 aim, you had 6 aims, you didn't format the biosketch properly, you were missing some support letters, you didn't complete the vertebrate animals section with enough detail, you highlighted or bolded or underlined every other sentence, making it impossible to tell what's important, you didn't include a multiple-PI plan, you didn't include a resource sharing plan, you didn't provide enough detail/justification for a non-modular budget, your entire proposal only had a dozen literature references, etc. etc. Does any of this stuff mean your science is bad? No. Does it make reviewers want to strangle you? Yes.

  • drugmonkey says:

    Damit- please explain how "level of accomplishment" is a factor that justifies reviewing other people's "programs"? (And surely you meant specific project proposals, right?)

    As a reminder to everyone else, SROs are required to diversify their review panels on a number of factors. See: http://public.csr.nih.gov/ReviewerResources/BecomeAReviewer/Pages/How-Scientists-Are-Selected.aspx

    I will note that my experience on study sections has been that no supposedly obvious diversity factor is associated with "wacky" reviews. This includes "level of accomplishment".

  • Pinko Punko says:

    Ola- 1 million times endorse on the SRO issue. My SRO is on the good side, but other panels are the wild west. No comments ever from the SRO.

  • damit says:

    Yeah, DM, but your "experience" has apparently taught you that reference format is a big driver of scores....so I'm a little leery of your profound insights.

  • Philapodia says:

    "Yeah, DM, but your "experience" has apparently taught you that reference format is a big driver of scores....so I'm a little leery of your profound insights."

    Strawman fallacy for the loss!

  • Philapodia says:

    Or is this a reversed appeal to authority fallacy? So many to choose from...

  • drugmonkey says:

    It's just plain fail. These experiential assertions I make are not quite the same. One is an observation of reviewer behavior; the other is a personal rationale for why I think stylistic choices make it difficult for reviewers.

  • Pinko Punko says:

    I will add that since study sections need to be diverse, it could even seem that "wacky" comes from the "diversity", but that is foolish- because you would not be accounting for the denominator [come up with a panel of all BSD CVs and you might get similar results]. Anyhow, my experience of "wacky" is that it has nothing to do with accomplishments or the lack thereof. There are voices that will kill a grant because they are not effective, and this has no relationship to accomplishments. There are voices that can kill a grant because the panel member is a self-centered pr*ck- guess where those voices might fall on the riffraff/vertically ascending scale? There are voices that are homers for their fields and not engaged outside their fields- how are you going to judge them? It is like they have multiple personalities. Laziness is a problem on panels- enabled by bad SROs or disengaged chairs- but it has nothing to do with geography or diversity.

  • Juan Lopez says:

    Damit, and in your "experience" you are not a racist (or whatever applies to your discrimination choice) because you didn't say that ALL people from (fill in the group) are there only to check boxes.

    It's not that what you say is not PC. It's that it's plain racist.

    Troll.

  • damit says:

    I am neither a racist nor a troll, thank you very much.

    Just an experienced investigator and reviewer who has seen the requirement for box-checking hurt the ability to assemble qualified panels.

    FWIW....the average SRO is no prize either.

  • jmz4gtu says:

    How much time does the average study section member devote to grant-related work? Is it really that burdensome that so few people want to do it?
    I would think getting a peek at the preliminary goodies would make it worthwhile. Not to mention, I assume people try to suck up to you, which is always fun.

  • drugmonkey says:

    It is a lot of work to do it right. Getting a peek at what competitors are doing isn't as useful as you might think- this depends on the section, but for a lot of them you don't often see work that is right in your lab's wheelhouse. So it's not much different from attending meetings.

  • MoBio says:

    @jmz4gtu "I would think getting a peek at the preliminary goodies would make it worthwhile. "

    Two reasons why not:

    (1) If I ever saw anything 'really interesting' I'm morally obligated to forget it upon leaving the meeting. That is, it would be unethical to 'get a leg up' based on something I reviewed or heard reviewed at a meeting.

    (2) I've done this grant reviewing gig for more than 20 years (typically 3 or more times/year) and the number of times I've actually seen anything that might be remotely 'worthwhile' (at least to me) I can count on one or two fingers.

    The one thing that is 'worthwhile' is getting a sense of where the bar is in terms of preliminary data, supporting letters and the like from the other reviewers.
