New Option for NIH / CSR Study Section Service

Jan 13, 2009 · Grant Review, NIH

The January 2009 edition of the CSR Peer Review Notes is up. It covers many of the upcoming changes to grant review which we've noted here on the blog. These will start with the May/June rounds of review.
One bit I hadn't noticed before is the provision of two options for serving as an appointed member of a study section.

Members of chartered study sections appointed after Jan. 1, 2009, now can choose four- or six-year membership terms. The new six-year terms allow them to spread their participation over more time to make it easier for them to juggle busy schedules. These reviewers only participate in two meetings a year instead of three, as current members do on four-year terms. We note that it is not possible for current members to convert to a six-year term. NIH sought this change after data collected for the Enhancing NIH Peer Review initiative suggested that reviewers would be more interested in serving on peer review groups if they could serve less frequently, over a longer period of time.

As a reminder, the current four-year term of service means that you commit to review grants three times per year, with the panel reviews scheduled in Oct, Feb and June (plus/minus a few weeks). Of course, people can and do beg off now and again with various excuses; nobody can actually force you to participate. For the most part, my experience has been that people make at least 9 or 10 of their 12-meeting commitment.
The new option to go only twice per year is good because there is no doubt that three times per year without a break is a bit of a burden. Even this, however, has implications for applicants, and in the present changing environment those implications are unclear.
In the past, revised grant applications could only be turned around every other cycle. Under that prior circumstance, a change to two meetings per year would throw off the degree to which a given reviewer would continue to see revised versions over time: with three review rounds per year, an every-other-round resubmission schedule works out to only about 1.5 submissions per year, which would not line up with a reviewer attending just two of the three annual meetings. This would inevitably have the effect of breaking up reviewer continuity*.
In the present time, SROs are being pressed to get those summary statements back in a hurry, so that applicants can put in a revision in the very next round. What this will do is increase the variability in the revision schedule, as some applicants get their revised application back in one round and some get theirs back two rounds after initial review. If the reviewers are there every round, the chances of getting a return visit from the reviewer are still pretty good. Empaneled reviewers who skip a round each year will thereby throw more variance into the assignments.
The final unknown is the effect of limiting applicants to a single revision of a new grant. If this works in the spirit in which it is intended, it may lighten the bias toward revised applications, thereby diminishing the benefit of keeping the same reviewers on an application across revisions. I am betting that this will not work and that we will see "new" applications that are really revisions of prior A1 applications. Consequently, the benefits of reviewer continuity are still a factor.
The six-year term of service though? What's the point? This I don't get at all. After all, it isn't like this is a punishment. There is nothing magic about a 12-round commitment. Why not just ask those who are willing to come back for another four years if you think they are so great?
* which you may view as a good thing. I do not, from either the lazy reviewer or the applicant perspective.


  • drdrA says:

    I apply to a non-NIH-federal agency that has serious issues with reviewer continuity already, and it is hell.
    I'm not looking forward to this.

  • BikeMonkey says:

    Three years should be the limit. Whether you review two or three rounds. It just gets tiring. The longer reviewers serve, the more likely they are to dash off shorter and shorter critiques for the bad pile, reserving all their effort for the ones that will be discussed. Also, they start getting cynical. Just sayin'

  • Lorax says:

Obviously there are major differences between study sections, but the 2 I most often apply to and serve on (as an ad hoc) have a significant number of ad hocs in every cycle, many of whom are competent (yes, I just bit my tongue very hard) to review my proposals. So I tend to think that the people who have reviewed my grants from cycle to cycle vary dramatically, and thus there is little to no continuity to begin with (the summary statements and critiques tend to support this idea).

  • CPP says:

    I am betting that this will not work and that we will see "new" applications that are really revisions of prior A1 applications.
    You're damn fucking right!

  • Lorax says:

    I am betting that this will not work and that we will see "new" applications that are really revisions of prior A1 applications.
    What CPP said, but also this is already often the case, with new grants being recycled A2s, which is not necessarily a bad thing: my A2 was an unfunded 18th percentile and my consequent "new" grant was a 4th percentile!

  • DrugMonkey says:

    but the 2 I most often apply to and serve on (as an ad hoc), have a significant number of ad hocs in every cycle
    Have you noticed any recent changes? I'm hearing rumours of decreases in the number of ad hoc reviewers and increases in the load for the appointed members. This is consistent with some noises from the CSR over the past year about grant loads and I wonder if they are starting to get serious...?

  • Lorax says:

    It's too early to tell, in my opinion. I last served within the last year and had 11 grants to review (6 as primary); that seemed comparable to the permanent members in that cycle. I'll give it another year (next Oct/Nov study section) before I put out a verdict on any changes.

  • qaz says:

    I second drdrA #1. I have reviewed and applied for both NIH (with reviewer continuity) and NSF (without). NSF ends up having a moving target, which makes the application a real crapshoot. And, as drdrA says, truly hell.

  • Neuro-conservative says:

    My understanding is that NIH is going to look very carefully at new apps and compare them directly to previously rejected A1's by a given PI. Has anyone else heard anything about enforcement mechanisms for this new provision?

  • Dave says:

    My experience is it's a crapshoot at both NIH and NSF with regard to repeat reviewers. And really, the lack of repeat reviewers might be a good thing if it dampens the tendency for reviewers to feel obligated to give good scores to proposals just because they've improved substantially since last submission (but still suck compared to first submissions). A proposal sucks or it doesn't. History shouldn't matter.

  • Cashmoney says:

    I'll counter Dave. A proposal is good or it isn't, budgets shouldn't matter.
    (oh wait, this is reality.)

  • Dave says:

    Cashmoney: Although not presently allowed, I totally think 'past productivity per dollar' should be explicitly discussed in reviews, and potential 'bang for the buck' should be an explicit review criterion. It gets to the heart of the matter way more than vague measures of 'innovativeness' or whateverthehell NIH is trying to pretend they are trying to reward.
    Of course, I am talking about the standard extramural research grants here, wherein the NIH review system rewards mostly teeny eensy weensy incremental additions to well-formulated models. Is there an amino acid in synaptotagmin that hasn't been mutated yet? Someone better get on it! That tyrosine could be the key to all brain function!
    Very different from this standard crap are the back-door influences, wherein Dr. FancyIvyPants convinces NIH officers that his latest superduper genomic scale hypepot is the next greatest thing, in which case an RFA is summarily issued to fund exactly that. This sort of crap has nothing to do with science, and is all about salesmanship and politics.
    I don't know what fraction of the total NIH budget is spent on these two categories of crap (eensy peensy incremental crap versus snakeoil pipedream). I'd guess 97%. Luckily, NIH has a $30 billion budget, which means that 3% is still several hundred million dollars spent on worthwhile science.

  • drdrA says:

    qaz #8- At least at NSF you can suggest reviewers (and people who review and submit there tell me that these suggestions get listened to sometimes)... My experience is not with NSF, and you can't suggest reviewers at this other agency. The reviewer situation is quite abysmal.

  • qaz says:

    Dave #10 --
    I don't know whether a proposal sucks or it doesn't. I would say the science sucks or it doesn't. The proposal is a way of convincing the powers-that-be that good science is going to happen. (Remember, the proposal is only a means to an end. Science is the actual goal. I hope we all agree on that. Only university presidents and deans and I guess CFOs if we really have them feel otherwise.)
    Nevertheless, the fact is that all proposals have sufficient flaws to be rejected. (If they didn't, they'd be published papers instead.) The question is whether the flaws in the proposal match up with the problems that the reviewers care about. Changing reviewers leads to changing goal posts. Retaining reviewers means that one can address the specific flaws they cared about last time.
    Really, it's an issue about transparency. Retaining reviewers means that one can get some sense what one's chances are. (Yes, DM, before you yell at me, I know that scores don't have to improve. But, in practice, they generally do.)
    And drdrA #13, wow. That is definitely rough.

  • Pinko Punko says:

    Neuro-con is right.
    They are directly comparing rejected A1s and new submissions from the same PI and rejecting out of hand barely revised proposals. Of course this is a load, because for many proposals, all it takes to look like a slam dunk is just doing the experiments and having an aim or two of "preliminary data." The bar is so high for grants now, it is arbitrary, and appropriate and sound experiments are getting taken off the table when good aims aren't funded because they can't be submitted again with more data, which is what the holding pattern of an A2 might have gotten you. It is all so stupid.
