Complaining about the system is all well and good, but remember: it isn't personal.

An observation for those reading along with a comment thread that is developing on a prior post. Gummibear asserted:

I also have to add that the quality of the NIH peer review system needs an external audit. Things that are going on there are quite unimaginable in journal peer review.

It emerged that our commenter was ticked about a grant review. Surprise, surprise.

Like regularly writing utter nonsense in summary statements, with complete impunity. An example from my recent experience: a reviewer was unfamiliar with the field and wrote a whole critique full of rubbish. He/she 'luckily' went too far and devoted a paragraph to certain methodology, expressly describing my use of it as 'strange'. It was then easy for me to provide a list of literature references to identical approaches and prove that the 'strangeness' resulted solely from the reviewer's state of mind and education. So I did in an appeal.

There is a little more detail but really it is going to be hard to evaluate the specific claim of mistreatment outside of going through the grant app and critiques ourselves. Nevertheless, I like to look for the general points. I arrive at this:
It is most useful to dissociate your disagreement with an established NIH process from your own particular treatment within the process.


Because there may very well be something that is an arguable problem with the process. Fine. In the case of what Gummibear is describing it seems like it might be an instance of what I consider to be reviewers inappropriately trying to predict empirical outcome rather than evaluating the approach (not sure I grasp the subtlety here but it isn't really the point). In other cases it might be the inevitable tension between finding a reviewer who has direct expertise and one who is not a potential competitor. (Because believe you me, the complaint about the reviewer who is an expert but also a supposed competitor is another frequent one.) There are very frequently going to be people (~80-90% of them per round, these days) who think the review of their project was of poor quality.
So what?
Everybody is up against the same system. So it is not specifically unfair to you if you ended up on the short end of the reviewer quality stick. It is going to happen and frequently does. But sometimes it works in your favor, does it not?
Question: Would you be crying APPEAL!!111!! if the inexpert reviews happened to miss the glaring flaw* in your approach and you ended up with the money anyway? Of course you wouldn't. You would respond in the manner R(r)evere(s) recently observed:

3. YES!!! (pumps fist in air) THEY BOUGHT IT!

As Comrade PhysioProf mentioned, there is a process.

Holmes, I feel your pain, but what you are talking about is a "scientific difference" that NIH intends to be addressed via the ordinary peer review process.
If your problem with the instance of peer review you are appealing could be addressed by explaining scientific shit to the reviewer and inducing a realization that you were, indeed, correct in the first place, then it is not appropriate for appeal. Period. The NIH Web site explains all of this very clearly, and *they* get to define their terms, not you.

Exactly. And when you call your Program Officer all outraged, s/he talks you down and makes the exact same points. A good fellow grant writing colleague should say the same thing. "Yes, sometimes review is less than perfect but the process is to revise and resubmit your grant." "Yes, the timeline for this blows and there could be other approaches (like the ones some nutty blogger suggests) but for right now this is the system." "The fact that you got a bad review is not evidence that you have been selected for special hosing and deserve special treatment."
The trouble comes in when you are unable to step outside of your narcissistic outrage to see that there is nothing special about your case. Nothing.
When you can't see this, well...that gets you put in the crazy file. And that is not a good place to be when it comes to Program. Ever heard of the fable of The Boy Who Cried Wolf? If you are in the crazy file, your chances of getting a Program pickup or even a solicitation for rebuttal go way down.
There is also the chance that you will be unable to control your outrage when you are writing your Introduction on the resubmission you finally manage to write. This is not a good place to be either. I've seen some doozies where the application sounded much like our commenter, just convinced the prior application had received a screw-job of a review. This doesn't go over well, even when you've managed to switch study sections entirely. The reviewers may actually agree with you that the review comments were in error, or at least that your rebuttal is convincing. But if you come off as a spoiled brat who thinks that s/he should stand outside the flawed process everyone else works within...you go in the crazy file. And you don't want to be there with reviewers either.
__
*Yeah. They all have problems. There is no such thing as a perfect grant proposal. Deal.

53 responses so far

  • There is no such thing as a perfect grant proposal.
    No kidding - there are only two kinds: those that get funded, and those that make you white hot with rage because they didn't get funded.

  • Namnezia says:

    Drugmonkey, this all sounds like sound advice. Nobody wants to be in the crazy file, and yes, specific flawed criticisms can be addressed in the rebuttal. All this would make sense if one knew when writing the rebuttal that eventually misconception A or B would get cleared up and you would get your grant funded. But after the fifth or sixth submission of your grant (which has now changed into something else since it is technically a "new" grant and you've published all of the experiments proposed in round one) and you still get inane comments - it's hard not to take this personally. Especially if you get some very technical comments about your approach and you kill yourself getting MORE preliminary data to show that the approach IS good, and then someone else reviews the grant and does not even acknowledge all the fucking time you spent appeasing the previous reviewer, whose comment probably came from a need to say SOMETHING bad about a grant they didn't like. It is almost better if they simply said "I don't like your grant, I find it boring and uninspiring, and I will not give it a good score no matter how technically sound and feasible it is". Then you can just pack it up and send it somewhere else.

  • DrugMonkey says:

    It is almost better if they simply said "I don't like your grant, I find it boring and uninspiring, and I will not give it a good score no matter how technically sound and feasible it is". Then you can just pack it up and send it somewhere else.
    It is always good to keep in mind that this may be exactly why your reviews don't seem to make sense or the criticisms seem a stretch, hard as it may be to consider. This is where a good, brutally honest colleague whom you trust to have your interests in mind comes in handy.

  • kevin. says:

    It seems that the right way to go is to treat it like a manuscript under review. Thank each reviewer ever so much for their thoughtful and insightful review, say you've made the changes they requested, and move the fuck on.

  • Jeremy Berg says:

    The issue of providing clear feedback to applicants that an application is unlikely to have sufficient impact regardless of technical issues was addressed in the NIH Enhancing Peer Review effort. See pages 30-31 in the Final Draft Peer Review Self-Study (http://enhancing-peer-review.nih.gov/meetings/NIHPeerReviewReportFINALDRAFT.pdf).
    "The current rating system could be enhanced by the introduction of a stand-alone category checkbox entitled Not Recommended for Resubmission (NRR) for both scored and unscored proposals. The goal of this action would be to help applicants make faster, more informed decision whether to refine an existing application or to develop a new idea. A reviewer could check this category box in the event that he or she believes that a research idea would not have the appropriate potential impact or feasibility no matter how it was revised to be competitive in the future. Study-section consensus would be required, however, for an NRR designation to be applied to any given application. Establishing and implementing this category would intend to provide clearer, unambiguous information to an applicant; however, receiving an NRR decision would not prevent the applicant from submitting a revised, new proposal."
    This proposal did not move forward into implementation because of substantial resistance voiced by the scientific community.

  • Gummibears says:

    Well, DM, I was thinking that CPP's postulate of dropping the expertise requirements with respect to reviewers was only CPP's personal opinion. But the two of you seem in agreement. So I have to congratulate you, because you just happened to solve a dire problem for the NIH - the problem of overworked reviewers.
    If, according to your reasoning, a reviewer is allowed to write whatever garbage he/she pleases, and it is then the PI's duty to humbly resubmit and respectfully educate the reviewer, then why bother these busy PhDs at all - let us just send all these applications to high school students. They can do reviews as their science projects. Sure, they will confuse viruses with bacteria, NMR with IR, Monte Carlo with molecular dynamics (I am not making this up - this is another example from my impressive collection of NIH-made garbage), but who cares. Just resubmit and patiently, respectfully explain.
    There is not just "little" more detail in my comments to the previous blog entry. I provided a specific quote in which a reviewer EXPRESSLY states that he/she has in fact NO CLUE WHATSOEVER about something ("strange"). I contrasted this with a referenced and quoted review from a respectable source, which states that what the reviewer considers "strange" is in fact an established practice. And I wrapped the example up with a quoted NIH internal email that showed how much they care ("send the guy a template letter") about reviewers basing their critiques on evident ignorance.
    The quality of that example is beyond doubt. Perhaps a larger quantity would make my claims sound more convincing, but this is your blog, DM, not mine, so I cannot post an extensive essay on the subject supported by dozens of examples, although I would be able to provide them. In part this is due to the nature of my work: it is rooted in theoretical/physical chemistry, but submitted to NIH panels dominated by biologists of various hues. This is a recipe for disaster if certain reviewers, QUITE UNDERSTANDABLY lacking expertise, also lack the integrity to recuse themselves.
    But this is not exclusively due to the subject. There are other factors at play. It is by no means true that "Everybody is up against the same system." Come on, you frequently discuss here the case of new investigators, right? So you have one bias. But there are other biases too. The geographical location. The "fame" factor. The existence of these biases is not just claimed by me. Please refer to the study I referenced in the previous discussion: "Peerless Science: Peer Review and U.S. Science Policy" by Daryl E. Chubin and Edward J. Hackett; State University of New York Press, Albany NY, 1990. Page 59 - the perception of biases among PIs (and yes, the issue of being a successful or unsuccessful applicant is accounted for and discussed). Page 66, where the mechanism "decide first, justify later" is presented. So please, stop alleging that it is just a disgruntled, frustrated, narcissistic wannabe PI ranting. People have been doing STUDIES on the circus called "peer review"! This was 20 years ago, but little has changed, despite the lip service paid to "improving peer review". Introducing technical changes (the scoring system, review templates) is insufficient if the major, cultural factor persists: the culture of PERMISSIVENESS toward substandard reviews.
    Such a culture is largely absent in the case of journal reviews, and I honestly see no reason to apply laxer standards to grant reviews. On the contrary: because the impact of a faulty review of a grant application is more severe, and because it is taxpayers' money that is distributed, the standards should be higher.
    DM, CPP: I respectfully, but strongly disagree with your stance.

  • Holmes, I can't speak for DoucheMonkey, but the only "stance" I am taking is in relation to the nature of reality. And the reality is that you can whine all you want, but it doesn't make your complaint the proper object of an appeal of your peer review outcome, as the appeal process is defined by NIH, not by you.

  • Anonymous says:

    J. Berg: This proposal did not move forward into implementation because of substantial resistance voiced by the scientific community.
    This illustrates exactly what is fundamentally wrong with the NIH: The NIH and CSR particularly has somehow managed to delude itself that it needs to be responsive to the will of the scientific community, rather than the other way around. No wonder Collins is cutting RPGs next year (http://blogs.sciencemag.org/scienceinsider/2010/02/parsing-nihs-20.html). CSR is so owned by the study sections that it doesn't matter what the director of the NIH wants since his wishes just get blown off for business-as-usual crusted-over more-of-the-same by grant reviewers.

  • arrzey says:

    I'm an older, lurking prof at an MRU. Full disclosure: I am NIH funded, but run a small lab. I probably write 3-4 grants for each one I've had funded. I sat on study section for 8 years, and still do ad hoc. The bottom line is that the world is a cold hard place. It's full of rejections, and I'm always puzzled at why some folks seem to think the system is broke 'cause it's not giving them exactly what they want when they want it.
    There is not ever going to be enough money for research. There are lousy reviewers, but there are also ones who get it. If you think that lousy reviewers don't exist in the journal culture, you're wrong. But it's YOUR choice to live in & work with the system or to get out & find another job. No one holds a gun to your head and says "write this grant or I'll blow your brains out".
    Complaining doesn't 1) get you funded 2) change the system 3) endear you to people who are working their asses off.

  • whimple says:

    Heh. Obviously diatribe #8 was mine.

  • physician scientist says:

    Gummibears...you've got to get over this. We've all gotten bad reviews from study sections that do not fit the work. The key is to really take time before you submit the proposal - talk to some POs who might be interested, ask them what might be an appropriate study section. Don't just send in the proposal and "hope" it goes to a study section that will understand the work.
    Having served on 2 study sections, I was pretty impressed by my fellow reviewers. They often brought up points I hadn't considered, and overall, I think the system works pretty well. The bottom line is that there isn't a lot of money going around and many good proposals won't be funded. I doubt very seriously that your rejected proposal was any better or worse than those of many people on this board whose grants weren't funded. Appealing is a dumb move. Appeals never work and just set back the subsequent revision timeline. Also, harboring anger at the system is incredibly counterproductive and will likely make your subsequent grants worse. I very strongly suggest focusing on the future and not dwelling on the past.
    Quite frankly, your post about the disagreement sounds really boring and esoteric to this translational scientist, and I think you probably need to better explain the approach and why we should spend tax money on it rather than filing appeals and ranting.

  • Anonymous says:

    “The NIH and CSR particularly has somehow managed to delude itself that it needs to be responsive to the will of the scientific community, rather than the other way around”.
    Whimple,
    The NIH and CSR need to listen to the scientific community, be receptive and responsive to its ideas, and support its needs in carrying out the ideas that make science exciting and productive for the public good.
    arrzey #9 said: “There are lousy reviewers, but there are also ones who get it.”
    I say that in the scientific community there are excellent scientists who get it and irresponsible ones who don’t seem to be able to go beyond their exclusive self-serving interests. That demography will not change. And we all know it. So, what NIH and CSR need to do, after listening to the scientific community, is judiciously decide and act on the issues that are essential to science and its public funding.
    For NIH and CSR:
    1) Listen and be responsive
    2) Act with courage and determination
    For those who are never happy unless they always get their own way, there are other routes to personal fulfillment. Let them go and fulfill their aspirations elsewhere.

  • anon says:

    Gummibears - it sounds to me like your work is cross-cutting, "the nature of my work: it is rooted in theoretical/physical chemistry, but submitted to NIH panels, dominated by biologists of various hues." Not that it's a bad thing, but you will never find a reviewer who has expertise in every area that you're playing in. I have this same problem. One person advised me to write the grant as though it IS for high school students. Dumb it down, simplify the language, and see what happens.
    Older members of the NIH club don't have to do this. They say what they want and get funded. It's a membership perk.

  • Namnezia says:

    One person advised me to write the grant as though it IS for high school students. Dumb it down, simplify the language, and see what happens.

    I got similar advice from a senior colleague. She drew an analogy to a lawyer presenting a case in court. She said writing a paper is like preparing a case for the Supreme Court, where it will be considered (if at all) in detail, and all of its subtleties and complexities will be weighed by an intelligent panel of judges who are all experts in constitutional law. Writing a grant, on the other hand, is like preparing a case for a county court in the middle of Oklahoma. There's no room for subtlety; the case needs to be black and white, and it will likely be heard by a jury that is not necessarily best suited to review the case. (No offense to Oklahoma.)
    I agree with this advice, but not necessarily for the same reasons. If you look at the rosters of the CSR study sections, you will find that most reviewers are not necessarily the top people in their field, but this doesn't mean they are not good reviewers. However, think of your grant being read by an overworked, overtired reviewer who has just read 10 others and is given the task of somehow ranking them. While they are all somewhat related to her field, some are a little bit off, and then she comes across a grant in which she is asked to understand some Monte Carlo shuffling of amino acid sequences or what not - well, you can imagine what the review is going to be like.

  • another young FSP says:

    It is our job as the proposal writers to explain why our approaches are feasible, and to demonstrate our ability to carry out our experiments. This is what is frightening about the 12 page proposals - in the longer ones, we could show a figure for each of our "strange" methods, while in the shorter ones we have a few words to make the case that our usage of the experiment is proper.
    Proposals aren't written for an expert in your precise specialty area. You have to hold their hands and walk them through your work to convince them you've gotten all the problems under control. If they don't understand a method that you proposed, it's usually your fault for not explaining it well enough in the original proposal (and I've been the "you" in that scenario in more than one of my proposals!)
    The first read of reviews from a rejected proposal hurts, and it is very difficult to respond rationally to them. Set them aside for a week or two, and come back to them after the sting is gone. You'll generally find either useful critiques of your work, or useful critiques of how you are presenting your work. Both of these types of critiques are useful for refining your next proposal.
    The NIH has a limited pot of money available, and there is a lot of good science that just can't be funded. The purpose of a proposal is not just to tell the panel what you want to do - it is to persuade the reviewers that your science is better/more valuable than the next 10 proposals in the stack.

  • Anonymous says:

    "most reviewers are not necessarily the top people in their field, but this doesn't mean they are not good reviewers"
    Just curious to know your criteria for "top people in their field"

  • Forward says:

    "It is our job as the proposal writers to explain why our approaches are feasible, and to demonstrate our ability to carry out our experiments. This is what is frightening about the 12 page proposals ".
    I'd agree with you, except that I thought the intent was to place the emphasis more on innovation and significance than on having to explain the specifics of a method (or methods) - hence the rationale for a 12-page proposal. Are you really a young FSP?

  • Gummibears says:

    I think some of the good advice coming my direction applies to a scenario I am not really complaining about. But let us start from the beginning. A significant percentage of reviewers is characterized by the approach "decide first, justify later". On a personal note, I am not an exception with my journal reviews - I usually read the abstract, quickly browse through the manuscript and form a preliminary opinion. Then I try to justify it.
    This labor-efficient approach borders on unethical, but whether it actually becomes unethical depends on what happens next. There are two possible scenarios:
    SCENARIO 1. The reviewer reads the proposal or manuscript more carefully, and verifies whether his evaluation of the scope, impact, approach, experimental details and whatever else can be reconciled with the initial assessment of the quality of the proposal/manuscript. Priority is given to the integrity of the evaluation. In some cases, if the initial assessment cannot be supported, that preliminary decision is changed (this has happened to me on a couple of occasions, in both directions).
    SCENARIO 2. The reviewer reads the proposal or manuscript more carefully, and searches for any and every excuse to justify the initial decision. The quality of the evaluation is totally subordinate to supporting the already made decision. Arguments necessary for achieving the goal are constructed without regard to their scientific soundness, and occasionally even fabricated by misrepresenting the actual content of the proposal.
    I am not at all griping about Scenario 1. As long as the justification is meritorious, the reviewer's intent does not matter. And I write my applications in a manner that makes them easy to understand even for an unprepared reviewer (tested on colleagues from different disciplines), so I have actually seen many reviews written by people clearly lacking the necessary expertise who nevertheless had no problem understanding the proposal.
    Reviewers of Type 2, however, cross the line of unethical conduct. It is often difficult to catch them, but not always. Thanks to the nature of my work (and its low overlap with the typical area of expertise of an NIH reviewer), I can tell with high confidence which scenario happened - more often than not, negative evaluations are supported by TOTAL BULLSHIT. It does not matter at all to these reviewers what is actually written in the application (so the good advice regarding grantsmanship is nice, but largely useless). They just want to reject, and damn the quality of the justification. I am griping about such reviewers, and ESPECIALLY about the tolerance of the community toward them, even when the case is evident.
    P.S. On what basis do reviewers of Type 2 want to reject? Read about the various non-scientific reasons and biases in the book I referenced. You can read parts of it on Google Books if you don't want to bother with a trip to the library.

  • another young FSP says:

    Forward:
    Yes, I'm a young FSP, although not THE FSP through a time machine or anything.
    That's what they say they want - but on the local panel I sat on that tried to apply the new NIH scoring techniques on 12-page proposals (for much smaller grants!), most of the deliberation was on practicalities anyway! You have the innovation/significance sections, but the only one that matters is the overall score, and that can be completely unrelated to the scores on the individual elements.
    This is why all the young investigators I know are terrified of the 12-page proposal - we know perfectly well that we'll still be judged on whether reviewers think we can accomplish our proposed work, but since we have limited space for preliminary results, this evidence will have to come from publication record in our independent position and letters of support. And it will place a lot more stress on people who are past the 2-year window where reviewers don't expect independent publications, but who are not yet at the publication surge that comes out in some fields in the 3rd-4th-5th years.
    These fears may prove to be unfounded, but we are far too early in the new system to judge how it will work out.

  • Namnezia says:

    Anonymous - "Just curious to know your criteria for "top people in their field"
    This isn't that hard to figure out; you can look at (1) the quality of their science, (2) their history of productivity, (3) the influence that their scientific findings have had on the field in terms of determining future directions and changing the way people think about a given topic, (4) peer recognition, (5) their ability to stay at the cutting edge of their field, etc. Not all "bigwigs" are at the top of their fields, and not all scientists at the top of their fields are "bigwigs".
    Gummibear - I agree with you that once a reviewer makes up their mind, they will find anything to justify their rejection. But sometimes there are intangibles that cause your grant simply not to be interesting to the study section, no matter how solid the methods are. I learned this lesson the hard way, because I could not understand why my grant kept getting triaged over extremely trivial or erroneous criticisms. One of my senior colleagues told me that it didn't matter what I wrote; it was clear that that study section was NOT going to give this project a decent score, period. I wish the PO of my program had told me that earlier. In any case, I looked for a different audience, sent it to a completely different study section and fared much better. You can even try a different funding agency (e.g., NSF).

  • Reviewers of Type 2, however, cross the line of unethical conduct.

    Dude, get a fucking grip. It is *your* job to get the reviewers excited enough about your application that they don't decide to ding it for what you perceive as "ticky tack" reasons. You can nurse your paranoia, or you can work at writing a compelling application. Your choice.

  • Anonymous says:

    Thanks "another young FSP" for sharing your experience and fears. I truly hope your fears aren't confirmed. It is good that you're voicing them, for it may help prevent them from happening.

  • DrugMonkey says:

    These fears may prove to be unfounded, but we are far too early in the new system to judge how it will work out.
    These fears are not unfounded and PP and I have been hammering away at this since forever (in blog time).
    It is my position that until and unless the CSR goes directly after reviewer behavior with instruction and on-the-spot correction (think SRO mantra about "the F word"), it is going to be a long and bumpy process. In the meantime, newer/younger applicants are going to take it in the teeth.

  • Gummibears says:

    CPP - it just seems to me that you are part of the problem. Thank you for being a "live" example of what has happened to the professional ethics in science.

  • Haven't written a grant, but I did write an unsuccessful fellowship application. Did I bitch and moan about the powers that be? Yeah, for 5 seconds. Then I got right the fuck back to writing a better proposal. I learned that I had to be the fucking Billy Mays of science - I had to make them believe that my project is badass. I got input from experts in the field as well as folks outside of my field. I think this made the difference and is what eventually helped get my proposal funded. Yeah, the system sucks, but crying like a little schoolboy bitch isn't going to endear you to anyone. You are not the only one swimming against the tide. So stop whining and start fucking swimming.

  • Gummibears says:

    I think many of you guys miss the point. I know the current state of the system perfectly well, and I act accordingly to secure my own funding. I do not need prompting or good advice, but freely given advice is, of course, welcomed with gratitude (thank you, Namnezia, this is exactly what I am doing now).
    I do not keep my head in... some dark place, however, and pretend that the system is OK. I am vocal about it, because we should at least try to eliminate its worst faults. It should be quite obvious why (hint: money for scientific research does not come from a printer in CPP's basement; the level of public support for science in the long term depends on the quality of our work). So I bitch, because bitching is the right thing to do.

  • another young FSP says:

    Don't get me wrong on this - I do think that whether or not the research group can accomplish what they propose is an incredibly important part of grant review. The sums of money going into each grant are far too large not to consider that as a major part of the process.
    And innovation and significance, while important, are also not the be-all and end-all. You can have an incredibly innovative grant that is completely ridiculous. You can have a very pedestrian grant that fills in essential gaps in our current understanding of a system and absolutely deserves funding.
    So I support what the NIH is trying to do with the new scoring system, and I support what they are trying to do by reducing the length of the proposals. I just think it will be a very shaky transition while we all figure out how to play by the new rules!
    With a segue into a response to other comments: one thing to remember when complaining about NIH study panels is that these are your colleagues doing the reviewing. This will presumably eventually be all of us doing the reviewing. Because we post off our electronic submissions and wait months for results, we tend to picture faceless oppositional panels. Really, it's a group of people doing their best in difficult circumstances. I'm certainly not perfect - I don't expect the study section to be perfect either. But they seem to do a pretty good job overall.

  • whimple says:

    But they seem to do a pretty good job overall.
    This is a crucial assertion. How do you know they/we do a pretty good job overall? What are the metrics for evaluating the quality of the review? Is there actual data on how good a job peer review does when measured objectively, or just self-congratulatory back-patting?

  • another young FSP says:

    Self-congratulatory back-patting, of course!
    Anecdotally, the reviews I have gotten and the reviews my colleagues/collaborators have talked with me about are themselves reviewed as relatively fair by the people who receive them. The single largest methodological complaint from those I've talked to involves fields with a split - either about the best implemented methods, or the way a system works - in which a reviewer from the other side of the split marks the proposal down. The usual solution on this seems to be re-writing the proposal giving due lip service to the other side of the argument.
    The other primary complaint is from other young investigators who get back reviews criticizing the lack of published papers from their independent labs, and saying that grad/postdoctoral publication records cannot be projected forward. This seems to hit worst in the 2nd to 3rd years of the lab - for most people I've talked to, the first R01 submission or resubmission.
    Now, the types of complaints I hear are probably biased by my own community (other people in my career stage, and people I work with). But I do think it is telling that people I've worked with on collaborative proposals often spontaneously offer that the revised proposal is a stronger one than the original submission. And most people I talk to eventually land a grant.
    I don't really see any way you can derive objective metrics for review quality across all of the fields covered by the NIH or NSF. It's the same problem you get trying to set a completely objective funding or paper count metric for tenure across disparate disciplines. What constitutes success?

  • physician scientist says:

    If I ever get gummibear's grant, I'm going to give it a 1 just so he'll stop whining!

  • Gummidouche, you and your fellow disgruntled paranoiacs, like Fucklin and Young Female Scientist, are always ranting about how the "system" is "broken" and how all this great fucking science is being totally squelched. Where is your evidence--beyond your own grievances concerning your own personal failures--that the scientific enterprise is "broken" and that there is substantial scientific progress that is not occurring because of it?

  • qaz says:

    Namnezia #14 - I don't know what field you're in, but every study section I've been on has been made up of some of the top people in their respective fields. While it's certainly true that some of the specific reviewers were not always experts in the field, in my experience at least one of the reviewers generally was. The myth that study sections are made up of second-rate scientists who have the time to serve (the assumption being that first-rate scientists are too busy) just doesn't fit the study sections I've seen (from either side).
    And Gummibear - there are lots of problems with the NIH system. The fact that you got a bad review isn't one of them.

  • Neuropop says:

    The study sections that reviewed my submissions generally had some of the top experts in the field, sometimes as ad hocs. The reviews reflected this, sometimes quite painfully. All said and done, the proposals did become stronger, but the science proposed in the original submissions is pretty much what I ended up working on. So the question is: is all this grantsmanship just to convince the study section of positive outcomes?

  • DrugMonkey says:

    Neuropop, you are singing my song. Totally agree that the revision process probably does not change the science that will be done in most cases.

  • qaz says:

    Neuropop #33 -
    I agree completely. What we have is a classic description of an arms race - too many applicants for too few slots leads to having to learn something unrelated to the actual goal (here grantsmanship versus science) to achieve the goal.
    DM -
    It would be an interesting question whether anyone has actually changed the planned experiments because of grant reviews.
    On the other hand, I support the revision process because without it, you end up in a lottery each time (like what is seen in the NSF system). The revision process ends up being like a queue, giving you a sense of whether the grant has a chance in the next cycle. (I know it's officially not supposed to work that way, but it often does.)

  • whimple says:

    It would be an interesting question whether anyone has actually changed the planned experiments because of grant reviews.
    I can provide personal data on the reciprocal point: proposals that the study sections did not like, when done anyway, produced good science and quality publications.

  • Neuropop says:

    #36 I second that. A proposal that struck out thrice, then was redone, resubmitted, and funded on the A1, ended up producing 3 nice papers (one featured on NPR) from just the first Aim of the original submission. The new findings from the remaining Aims ought to shake up the field. Now what was wrong with the original submission? In NIH's defense, the proposal (for legacy reasons) did end up in an orthogonal study section.

  • crystaldoc says:

    @19: "And it will place a lot more stress on people who are past the 2-year window where reviewers don't expect independent publications, but who are not yet at the publication surge that comes out in some fields in the 3rd-4th-5th years."
    Does evidence exist from the last couple of years for this supposed 2-year window? A colleague just returned from his first stint on a standing NIH study section, and was surprised that productivity was a complaint discussed for every new investigator without at least 2 pubs from their independent lab-- it didn't matter how many years they'd had. No new investigators' grants were ranked anywhere near a fundable level without at least 2-3 pubs. My colleague indicated that the productivity criticism did not necessarily show up in the written reviews, but was a major factor in panel discussion. When he mentioned this to a more senior colleague, he was told that was how it had always been. I am wondering whether the common advice to submit grants *early* and often may be counterproductive, in a situation where effort spent writing grants prior to independent publications would be better spent focusing on the pubs.

  • pinus says:

    I think that the expectations for productivity are very study section dependent. The study section that I have submitted to seems to give new PIs good scores if the science is up to snuff.

  • DrugMonkey says:

    I think that the expectations for productivity are very study section dependent.
    This.
    My experience is that sometimes it would come up, sometimes not. If a new PI had publications in the first 2-3 years from her new lab, this was definitely viewed as a plus, not just an expectation. I can recall some times when a reviewer tried to go down this road and was rebutted by at least one other person. This doesn't prove the mean score didn't still hinge on that factor but it was by no means a head-nodding universal expectation that the PI would have pubs in 2-3 years.

  • My first two awarded R01 applications were each reviewed before my lab had published a single paper. In the study section I regularly serve on, new investigators are *not* assessed on productivity of their new labs, either in review or in discussion. If anyone brings this up in discussion, they are quickly smacked down by other members. They are, however, assessed on their productivity as post-docs.

  • Namnezia says:

    I am wondering whether the common advice to submit grants *early* and often may be counterproductive, in a situation where effort writing grants prior to independent publications would be better spent focusing on the pubs.

    I tend to agree with you, crystaldoc. Based on the comments by others in this forum this does not seem to be a universal expectation, but it definitely appears to be one in some study sections. In my experience, I was getting grant reviews saying that I had no evidence of independent productivity even though I had had my lab for less than 18 months, and despite being productive as a postdoc. Not until I had three independent publications did these comments cease in the grant reviews. Maybe it was just one reviewer, but the fact that the comment remained implies, to me, implicit agreement by the others in the study section.

  • anonymous says:

    " In the study section I regularly serve on, new investigators are *not* assessed on productivity of their new labs, either in review or in discussion. If anyone brings this up in discussion, they are quickly smacked down by other members. They are, however, assessed on their productivity as post-docs."
    I think that this is the way it should be for all study sections. There are criteria that should be applied equally, and here there is obviously an element of unfairness that NIH should not allow.

  • DrugMonkey says:

    Namnezia@#42 and Crystaldoc@#38-
    1) The fact that comments about productivity appear does not necessarily mean that your grant success hinges on this factor.
    2) Once you push "go" on the 5th, what else do you have to do for the next couple of months save write your papers? Assuming you have the data / data stream running.
    3) Even if the review of one or more of your grants *did* hinge on your lack of pubs, you are still accruing benefits of preparing and submitting a grant so that once you do get your publications out, those other factors will be aligned in your favor.

  • DrugMonkey says:

    There are criteria that should be applied equally, and here there is obviously an element of unfairness that NIH should not allow.
    "unfairness"? "equally"? how so?
    Suppose one PI is just incredibly hardworking, has an incredibly supportive environment and gets a paper out. That's good. Suppose the study section sees a half dozen of those and criticizes the 7th app for not being as productive. Isn't this "fair"?
    Alternately suppose two strategic approaches- that of incremental publication in modest journals and that of GlamourMagzOrBustz. Reviewers can choose to value either or both approaches. Is it "fair" to favor one of these over the other? Which one is "better" and by which objective means have you arrived at that conclusion?

  • anonymous says:

    #45 DM
    The examples that you mentioned do not correspond to the discussion we were having. You're talking about publication merits. Of course, if candidate #1 has 0 publications and candidate #7 has 3 pubs, it is not unfair to consider the difference in merit. But should that be the main criterion for deciding whether to award a grant to 2 different new investigators? If the proposal of candidate #1 is much better, the number of publications should not prevent him from getting it.
    Maybe I did not understand the discussion before.

  • DrugMonkey says:

    possibly the same anonymous @#43 said "I think that this is the way it should be for all study sections. " in response to CPP's comment that productivity was (apparently) never appropriate in his section.
    I took this as applauding a universal rule. I was trying to point out that there are too many different circumstances for universal rules and that opinions vary as to what is considered fair game in grant review.

  • anonymous says:

    Well, if productivity (= number of publications) overrides the excellence of a proposal in one study section but not in a different study section, I consider this an application of the rules that leads to unfairness. Just my personal opinion.

  • DrugMonkey says:

    Perhaps anonymous, perhaps. In a certain individual outcome sense. But consider what I have said about universal prescription. The fields funded by the NIH vary tremendously in terms of how excellence is scored. Suppose the universal prescription happened to be one that very much disadvantaged your particular Biosketch? Would you be singing praises of the "fair" system? Or would you be kvetching about the system being categorically stacked against your type of science?

  • Anonymous says:

    DM@#44:
    Point 3-- I agree there is a learning curve and grant-writing practice is indispensable, but I also think the "submit early and often" or "buy more lottery tickets" mantras can be taken to an extreme, warping research priorities and wasting time and effort.
    Point 2-- "Assuming you have the data / data stream running."
    Yes. The warped priorities alluded to above can lead to chasing after the minimal preliminary data sufficient to support 3-4 different grant concepts; given the limited resources and personnel effort available to most new investigators, the result can be a failure to follow through with the polished figures 2-7, along a single line of research, needed to produce a cohesive, publishable manuscript.

  • crystaldoc says:

    sorry, that last was me.

  • DrugMonkey says:

    Yes, I agree that it is tricky for new Asst Profs to balance their limited time, energy, and resources between the grant and data/paper pursuits.
    I am shaped by what I saw, at least a few years ago, as this mantra to newer investigators to go slow: generate data and publish papers first, get a small award, then try for an R01 next. It didn't make sense to me, so I push in the other direction. This should not be misconstrued as me saying that you don't have to produce. It is more that some degree of production is essentially assumed by me, and that "that next paper" is not going to make the difference in funding.

  • Tex says:

    DM - this has been an interesting read for someone at a smaller (not MRU) institution who has been funded by NIH (R01) but has not served on a study section. I would be interested to know your (and others') opinion(s) on the weight that is given to the institution, a.k.a. the environment. In other words, how important is your ZIP code?
    Thanks.
