GrantRant II

Jan 04 2013 Published by under Grant Review, Grantsmanship

You know when you are faced with using somebody's crappy bit of code but you could just write the whole thing from scratch? But then you'd have to 'splain to the boss man why you spent all that effort doing it a totally different way? And the client will be pissed....and the coding team will be pissed...and basically it all just sucks ass. But you can't bear to let the cluster borkage mess exist and still call yourself a professional?

Reviewing a revised NIH grant that you didn't review for the original submission is a little bit like that.

28 responses so far

  • dr24hours says:

    I feel the same way about manuscripts. Hate that. But do you want to be the asshole who destroys someone's A1 because the A0 reviewers did things differently than you would have? Especially if the applicant has effectively responded to that review?

  • DrugMonkey says:

    NIH Grant review is explicitly not about improvement over past version. It is about relative ranking *in this round of review*, anchored/calibrated in minor degree to a larger population of grants (say, all you've ever reviewed in your lifetime).

  • Pinko Punko says:

    If there are major holes in the grant I don't consider that the applicant has successfully responded. I feel bad for the applicant, but grants are not manuscripts. You have to have some sort of conviction in what you write. You have to both cover your ass with the initial reviewers and firm the document up for general review- you are not sneaking something by the panel, and you have to know that you might not get even one of the same reviewers again. This may be threading the needle, and in some cases difficult, but that is what we have to deal with. The reviews that are most troubling are the generic ones with the most boilerplate language. For these I think the applicant is responding to a lazy review that takes them away from the specific strengths and weaknesses of the proposal.

  • DrugMonkey says:

    The flip side Punko is that StockCritiques can be given StockAnswers 😉

  • Interesting analogy - and one that I wouldn't have expected you to make; do you do some computational biology as well? I always pegged you as one of those experimentalists who are proudly ignorant of coding and see "real science" as just bench work.

  • DrugMonkey says:

    My seventh grade science fair project involved dialing in to the local campus mainframe dude. Does that help?

  • Pinko Punko says:

    Yeah, but if the applicant thinks that StockAnswers™ are going to be sufficient, that is where cold reality jumps in in the second round. My feeling is that not enough colleagues have critically examined the document. I generally get bummed out when I see these situations, that is for sure.

  • Dave says:

    What do you expect the applicant to do in the face of such stock comments? What is the point of an A0 review if all you are doing is wasting the applicant's time and setting them up for ultimate failure at A1 time, when someone actually bothers to give it a proper review and, subsequently, kills the grant for good?

    It makes no sense to me that A1s are not sent back to exactly the same reviewers. Dare I say it, but it seems like the only fair thing to do.

  • Drugmonkey says:

    Dave- you need to nail it down into your hindbrain that review is about the quality of the current application versus 1) the rest of the grants on that person's pile, 2) that reviewer's past history of grants and 2.5) the rest of the grants in that section, that round. only about 10) relative to the prior version.

    any upgrading of 10 to higher priority is a bonus but you are a fool if you play to that as your primary goal.

  • Drugmonkey says:

    PiPu-
    Stock Answers are a necessary but not Sufficient condition.

  • Eli Rabett says:

    OK, if Eli were a program manager he would send resubmissions to both old and new reviewers.

  • Dave says:

    Yeh, DM, I understand that, but it is still an odd way of doing things for me.

    The problem I have is that reviewers by definition are subjective, with their own set of biases, pet peeves, stock critiques etc. We all know this because following an A0 submission, one will often get similar but not identical comments. This is all fine and dandy until you realise that your revisions will go to a whole new set of reviewers, who will look at the app with a new set of eyes. It is realistic to expect that these new reviewers will find new problems and may even dislike the changes you made in response to the original reviews, thereby repeating the whole fucken dog and pony show over and over.

    Shifting the goalposts.

    Let me ask you this: what is the reason for NOT sending it back to the same reviewers? Is it purely logistical or is there actually thought behind it? And if the A0 comments are just "advice", why require a formal response to them as if it is a manuscript? Who is holding the A0 reviewers accountable if they can't be arsed to do a proper job in the first place?

  • eeke says:

    Isn't the grant reviewed and discussed by a whole fucken study section? If one reviewer is way off-base, wouldn't this person's comments be left out of the review summary? I know there are three main reviewers, but I thought the grant gets scored by a group of people. Are you saying that if one of your reviewers is a fuckwit douchebagge, their "advice" should not be followed in a revision, and then maybe you should indicate reasons why in your response? Or, are you complaining because the remarks of the previous study section are clearly no match for your staggeringly insightful reviewing skills? Just trying to get the take-home point here.

  • Drugmonkey says:

    logistics has a big role Dave. revisions can come in one round later or a year later. empaneled members rotate off in Summer, new ones rotate on in Fall. ad hocs may be called but have to cover a range of what is in that section for that round. Person A may be great for all the extra coverage an SRO needs this round but be essentially limited to one or two apps the next round. the SRO job is not an easy one in my view.

    eeke- actually good and fair minded reviewers can disagree substantially. so it may not even be actual fuckwittitude. BUT, see the outrage from Dave here- I have at least a *smidge* of a feeling I should respect some aspects of the previous review and the applicant's response to it. when you think that has pointed the application in exactly the wrong direction from the starting point....it's hard.

  • Dave says:

    DM: what else was the applicant supposed to do? The sad thing is he/she has to start over again (I'm sure you won't be giving a fundable score) because a reviewer gave shitty advice which the applicant took seriously. It stinks IMO.

  • Grumble says:

    " when you think that has pointed the application in exactly the wrong direction from the starting point....it's hard."

    Not necessarily. Go read the original grant - you have access to it, right? Do you like it better? Then the A0 reviewer was an idiot and you should just discount what was done to the grant in response to his/her comments.

    Because, you know what? It doesn't fucking make a difference what the reviewers say, in terms of the actual experiments that get done if the grant gets funded. So if you like the earlier version and trust the PIs to do some good science, give them a good score.

  • DrugMonkey says:

    Nope, you don't have access to the prior version.

  • DrugMonkey says:

    Dave-
    If the grant were knock-down awesome, the quality of the response wouldn't matter. In this era of sub10% paylines... Might not matter much anyway.

    So what "stinks"? Should a quality response to review on a workmanlike grant be prioritized over an exciting idea?

  • Dave says:

    Well, maybe, maybe not. But some steps should be taken to avoid the whole situation in the first place. Lack of reviewer accountability is what stinks. As you mentioned, in this world of 10% paylines, these things surely DO make a difference DM.

  • DrugMonkey says:

    What is "reviewer accountability", how do you know it doesn't exist and what system would you put in place to improve the situation?

  • Dave says:

    Well if the SRO takes the role of a good journal AE and directs the applicant to ignore certain comments, or pay special attention to comment x etc, that would help. My (admittedly limited) experience is that the summary of the discussion is rather passive in this regard. You can often read between the lines, but not always. Read some of the review files at EMBO J to see how involved the AE is in directing the author response (most of the time).

    Often the PO is next to useless in providing guidance beyond "read the summary statement" and "revise and resubmit" and that could be improved also.

    When a reviewer sucks, the applicant should not suffer. That's all I'm saying. Hardly controversial.

  • Anon says:

    In practice, it seems to me that different study sections have widely varying approaches to these things. I've heard the SRO "spread the scores" mantra (although in my panel we're pretty impacted). I agree that the A1 is going to be compared with others in the current pile, but it seems that an A1 will often score a little better than a comparably meritorious A0. And I've spoken with panel chairs who insist that they will steer the discussion of an A1 toward "did the PI address previous review comments appropriately" and away from tangents that new reviewers may bring up (i.e. not allowing new reviewers to shift the goalposts). Some SROs will have great notes from the meetings and be able to give insight in person that they couldn't put in the summaries, others are not very helpful at all. One of my big questions lately has been how can I find the good study sections (i.e. those that are run in a way that I like and that will like my work)?

  • Pinko Punko says:

    People seem to be forgetting that you don't just have to "do what the reviewers say" and you will get your grant. It isn't a contract. Many times, a first submission may not be discussed, so a BS point from an off-base reviewer will not be shot down. In your response you politely stick to your guns and do your best to improve the grant (which is number 1) and maintain the perception that you are responsive to the reviewers' concerns (number 2). It is not a "do x, get y" situation as in a paper review. You can't look at it that way- that is foolish. If you think a reviewer telling you to do something does not improve the grant or in fact makes it worse or possibly opens the grant up for criticism, either don't do it or build in your anticipation of criticism for why it is worth doing. If you don't believe in your grant but are just trying to make someone else happy, you will likely fail. You have to play to the crowd somewhat because your goal is to do the science you want, but you should never assume that your score has to go up when you revise your grant. It probably should, but it doesn't have to. Subjectivity means there is noise in the system, so you need to deal with that.

    I think people in this thread may be worried about a straw situation and a lot of it comes from how well you can read between the lines. You are worried about unfairness- it does happen sometimes because sometimes people get reviewers that are right in the field and know everything and sometimes they don't, but the reality is that for good study sections, it should even out. Grants can appear to have the same scores and some similar comments, but how carefully you read the summary statements can tell you what the reviews really mean.

    Two triaged grants- seems like the scores are similar. One appears to be mostly dinged on significance, with approach most likely following that score magnetically, while the other mostly seems like approach was driving. You read the words, and what you see is that the reviewers were completely bored with the first grant and just wanted more data for the second.

    And to whimple, there is no "trust the lab" anymore, and you don't want that as a criterion either- if the grant is vague and opaque and you give them a fundable score based on their track record, someone who writes an excellent grant will get hosed. If one giveth, one taketh.

    Dave, your grant has to be good enough to be discussed for the system to really take care of a bad review. And bad reviews can go both ways. Here is a story- some grant gets initial scores like 8-2-6. The "2" dutifully reads the other reviews during the review period and realizes they missed the boat. They adjust to 6. The system dealt with a bad review.

    If the opposite happens- 2-9-2, a good SRO might include this in the discussion list even if outside the top 40% after prelim scoring, because there are clear differences in reviewers. The system will deal with that.

    It isn't a perfect system, but at 10% funding levels, we can't really talk about how well it works. We are in the arbitrary range.

  • DrugMonkey says:

    Nobody believes, ever, that the 2 was the error of fact made by incompetent and biased reviewers from some institution they've never heard of, PiPunk.

  • DrugMonkey says:

    Anon-
    You are absolutely right that cultural emphasis and tone of study sections vary tremendously. They can also morph over time. The only solutions are 1) talk to people on the panels when you see them at meetings 2) talk to your peers who are also targeting that study section and 3) submit a lot of grants there (ok, ok and talk to your PO after each meeting where you had a grant under review. Even if you got triaged*)

    *the PO may blow you off with "it wasn't discussed so I know nothing" so be prepared for that.

  • gri says:

    I believe that an A1 should go to the original reviewers. The applicant is addressing THEIR comments and they will be better suited to see if the grant improved in their minds. Otherwise you will get comments like "The applicant responded well to the previous concerns but I have now these new concerns..." from new reviewers. I have seen it happen. A0 within range of possible funding, but the payline wasn't set yet, so the PO suggested resubmitting to be safe. The A1 went to a different set of reviewers - and was triaged. The A0 ultimately got funded in the end, but this shows how deadly a new set of reviewers can be. As scientists, they all want to bring in their own unique and new comments and show off. Including those wisdoms that certain procedures would never be done clinically and that this is a major flaw of the application. Too bad that said procedure is the gold standard, but oh well, he showed his "wisdom". (Rant end)

  • drugmonkey says:

    The applicant is addressing THEIR comments and they will be better suited to see if the grant improved in their minds.

    Why do you imagine that this is the only stick by which merit is judged?
