Your Grant in Review: Impacts of the New Scoring System

Jun 12 2009 · Filed under Grant Review, NIH

Having completed my first go-round with the new NIH scoring system earlier in the month, I've been trying to reflect on the good, bad and ugly. (It is still very early days on this but discussions are starting to emerge.)
I think my biggest surprise and resulting problem has to do with the fact that the actual study section discussion is supposed to follow the new critique format. The emphasis of each reviewer's presentation is supposed to start with the overall impact, move on to significance, then investigators...etc. The detailed discussion of the approach (so familiar to the prior way of doing business) is, let us say, heavily discouraged. So the rest of the panel, who for the most part haven't actually read the application being discussed, have no idea what the proposal is about. From chatting with other study section members who have been participating in these new review meetings I think that "frustrated" is the nicest way to describe the reaction.
Now, I am not saying that every grant received vigorous input from the entire panel of people. But at least in times past, I'd say a majority of the discussions got at least one substantive comment from another panel member, not directly assigned to the application. I'd hate to put a number on it but many applications in each and every round had a vigorous multi-reviewer discussion. So far it looks as if the participation of the other reviewers is sharply reduced. We completed our meeting earlier in the month in record time, for example.
Ideally, I would hope that reviewers will adjust to this new approach in the future by doing a little bit more preparation in the week leading up to the meeting. A little more reading of the other proposals that appear likely to be discussed. And I do like the fact that the focus is supposed to be moved away from whether the minutiae of the methodological approach satisfy the whims of a given reviewer. Nevertheless I suppose I didn't anticipate the degree to which not talking about the approach left the unassigned reviewers in the dark and feeling as if they could not supply meaningful input to the discussion.

22 responses so far

  • What was your impression about the attention paid to responsiveness to prior review in the case of resubmissions?

  • qaz says:

    Did "approach" find its way into the discussion anyway? NRSA study sections have been discouraged from discussing the research plan (read: approach) for some time now. (We've been told for years that "It's supposed to be about the training plan, not the research plan.") Last year, a new SRA got strict about not allowing discussion of the research plan until someone came up with "The research plan has the following problems, which indicates a training problem because the advisor has not helped the student write a good enough research plan..."

  • pinus says:

    man...I would have slacked off with my NRSA applications if I knew they weren't allowed to talk about the science!

  • DrugMonkey says:

    Response to prior review received much less air time during discussion. However revised apps dominated the top of the initial scores list as usual. The de-emphasis on approach helps with the former of course...

  • whimple says:

    We got back a similar report from our recent study section attendees that response to prior review received much less emphasis than before. Potentially this is anticipatory of the new bullet-point critiques not being useful or specific enough to either enable or merit much of a direct response.

  • The de-emphasis on approach helps with this of course...

    Helps some, hurts others.

  • DrugMonkey says:

    Well, the focus on responsivity itself, rather than on whether the proposal was improved, could use a little re-balancing. I think it is mostly upside to put this way down the list of importance.

  • Well, the focus on responsivity itself, rather than on whether the proposal was improved, could use a little re-balancing. I think it is mostly upside to put this way down the list of importance.

    This is UNFAIR DSCRIMINATION against those scientists who write TOTALLY FUCKING KICK-ASS INTRODUCTIONZ!!11!!!1!!11!

  • qaz says:

    There was a real advantage to the previous NIH emphasis on response to reviewers, which was that at NIH you got a sense of whether you could rewrite the grant and get it funded.* In contrast, at NSF, it's a new crapshoot every time. Given that running a successful lab requires continued funding, sudden (and random!) "3-years of funding!" awards aren't conducive to running a lab with 5-6 year graduate student cycles. I'm concerned that NIH is moving to the NSF model where each grant is truly independent with an independent committee and unrelated goal posts.**
    * Ok, sometimes it was a false sense, but it was right more often than not.
    ** Everyone complains that NIH's goal posts keep moving, but in the old system, they moved in the same direction, so you just had to make it another mile. The problem with the each-application-independent model is that the goal posts move sideways! So after running a mile, you have to turn around and run back.
    I'd happily cut all my funding in half if I could get it guaranteed at that level forever.

  • Lorax says:

    This new system is almost certainly in preparation for the new 12 page application format. When that hits, I believe the expectation will be that the study section members are at least familiar with most, if not all, of the grants that they are not assigned. I believe we are in the teenage years of awkward puberty.

  • BikeMonkey says:

    Lorax- this is already the "expectation"! It will still be nonsense with 12-pagers. The real goal is to decrease the functional discussion until everyone gets on board with the real plan. To eliminate the in-person study section meeting. I predict we will end up with a grant review process that looks like the manuscript review process.

  • BikeMonkey--would that be good, or bad? (Having only submitted manuscripts, not grants...)

  • BikeMonkey says:

    Bad. I think the discussion process has merit, meaning that it catches ranking errors in the immediate case. It also has the effect of making review more consistent over time, i.e., "fair". Nothing is perfect but having the panel shaped into more consistent behavior and having the reasons for that behavior discussed now and again is useful. It is an opportunity for new participants to hear a diversity of approaches to review and evaluation of proposals. Otherwise n00bs would be trained by what? The opinions of the one guy in their department who was on study section fifteen years ago?
    Study section discussion is also a chance for opinionated advocates to work for change in the system. There are many subtle and unsubtle ways that a reviewer is free to advance a more general case to the other reviewers, not to mention the POs and SROs.

  • qaz says:

    Making grant review more like manuscripts would be very bad. There is a very big difference between manuscripts and grants - namely that there are lots of journals and really only one grant-giving agency. (Yes, I know Science is "special", but you can still get your paper published elsewhere if Science rejects you. But if NIH rejects your R01, there really isn't another option. And yes, I know there's NSF and private places, but none of them are on the same scale as NIH.)

  • Lorax says:

    I completely disagree that the expectation is that everyone is familiar with 80-90 25-page proposals in a study section. At least in my experience, that is not the case. I have served on a study section that was reviewing grants on viral, bacterial, fungal, and protozoan infections; there is no way I could make expert assessments of the experimental questions being addressed in most of those proposals.
    The system (supposedly) highlights the significance of the work. I have some reservations about this, but think it is potentially much stronger. And I am happy to see trivial critiques about well established methodologies being banished from the review.
    Is there any reason to believe NIH is moving to do away with in-person study sections, or is this simply a fear-mongering rumor? I sure haven't heard anything along these lines.

  • DrugMonkey says:

    Lorax, FWIW, this first started with a two-round interval (maybe 2 years ago by now?) in which we were informed they were examining pre-discussion scores and final outcomes to determine if there was significant change via the discussion process. If CSR ever reported the data, I have not seen it. There have also been a number of "online asynchronous" (or some such nonsense) meetings in which some sort of online format is substituted for a real discussion.
    So there is some evidence but I'll leave it as an exercise for the reader to conclude where this is all headed. I don't think the ponderous ship of NIH/CSR moves all that rapidly, myself.

  • qaz says:

    There have also been a number of "online asynchronous" (or some such nonsense) meetings in which some sort of online format is substituted for a real discussion.

    I participated in one of these. It was a special panel with only a few big center grants. But it ended up working really well. There was a thread for each proposal and we just posted back and forth all day. (Kind of like this blog!) In the end, I felt there was a lot more discussion than I'd seen in study sections before, especially as compared to the other time I was on a special panel with only a few big center grants, but we had to review them in person. I have no idea how well this would work with an R01 or NRSA study section, where there are lots of unrelated proposals to review.

  • DrugMonkey says:

    There was a thread for each proposal and we just posted back and forth all day. (Kind of like this blog!)
    Umm, that is not the most rousing endorsement of online asynchronous grant review d00d.....

  • Umm, that is not the most rousing endorsement of online asynchronous grant review
    No, it would be like blogs but without trolls! Oh, wait.

  • Ben says:

    My standing study section met last week. I think the new scoring method may work, but there will be a lot of variation across committees in how grants are scored. This will eventually sort itself out, but it may be a problem this round, with much of the variation in scores representing how committees chose to interpret the scoring system.


  • Keevo Jeevo says:

    The fact of the matter is that they are re-jiggering the entire system because the numbers of grants and the availability of reviewers are headed in different directions. Shorter grants, so why not shorter turnaround times? New scoring, but every reviewer will try to re-interpret it to fit the "old" way of looking at things. No more A2s; we are nearing the point where new investigators will fall by the wayside and only the established will get any funding. What a ridiculous system.
