Interesting slides from the CSR on NIH grant review

Comrade PhysioProf alerted me to a couple of PowerPoint presentations (converted to pdf in the links) of possible interest to the NIH grant geeks in the audience.

The first one seems to be from Toni Scarpa to the CSR Advisory Council on May 2.
Lots of interesting data on each slide but I'll pick out a few things I noticed.

-Slide 18, the number of PIs submitting grants and the number of applications per PI over the past decade. I've always thought the NIH paid too much attention to per-application success rate and not enough to per-PI success rate. Nice to see. 1.3 grants per PI to 1.6 Research Project grants per PI submitted each year is the range. Shows how skewed some of our experiences are but fits with the data on the Rockey blog about how most PIs only carry 1 or 2 grants.

-Slides 21,22 show a faster relative increase in the number of R21s submitted over the past 10 years compared with R01s. I'm sure we all know the reasons but interesting to see.

-Slide 23 (and Slide 5) testifies to Scarpa's crusade to get more grants reviewed with fewer reviewers. I happen to disagree with this (I saw a little bit of this trend during my term of service) but no doubt the cost savings are tremendous.

-Slide 25 continues the cost-savings theme. In particular it is interesting to think about the savings associated with having more online, electronic reviews versus decreasing the number of actual reviewers. I'm not a big fan of the online, asynchronous review but then I'm not a fan of losing specific expertise either. These are not easy tradeoffs to make, clearly. Slide 41 seems to indicate the cost per application might be cut to a fifth by using online instead of face-to-face review mechanisms.

-Slide 33 has an interesting note about 1.7 million views, presumably for the study section description he was using as an example? That would be an interesting way to track the relative interest/load/etc for a given study section.

-Slide 35 shows trends of turning the A1 back in at the very next study section round but it would be a lot more useful as a percentage of all A1s that were put in...

-Slide 40 testifying on how reviewers like the online review methods is nice but it sure as hell needs a lot more context. It should be broken down by those who have served (or are willing to serve) on regular study section panels versus those who are not. And it should delve into the reasons for being happy with online formats, etc. Not to mention do some additional queries on how the reviewers feel the outcome is for applications. For example, there are definitely times when I would say "no way, no how" to a study section that required me to visit Bethesda but might take on a phone or online review duty. That gets my expertise (such as it is) engaged where it otherwise would not have been involved. However, I also think applicants are not well served by the online review, all else equal.

-Slide 27 reiterates the common complaint that there are not enough senior/experienced reviewers. I'd still like to see some expansion on who is making this complaint, on what basis and how it is verified in fact. In contrast, the complaints about speed, burden on reviewers and favoring predictable research over innovation seem a lot more based on things that can be quantified and reasonably well described.

-Slide 44 continues this theme because of the heading "Recruiting the best reviewers" on a slide which reports the number of Full, Associate and Assistant rank reviewers from 1998 to 2008. You can just see the start of the great Scarpa purge of Asst Profs (I do wonder why this slide is not updated to 2010). Again, no apparent explanation of their justification for conflating "best" with seniority. Slide 45 has ways in which they have been convincing more reviewers to serve but again is pretty light on showing where this means they get more of the "best" reviewers. Somehow I feel confident Scarpa didn't really expand on this in his presentation....

-Slide 54, I really like the percentage of New Investigators tracked since 1962! w00t! All of their trends should go back that far.

Okay, that's enough for you to chew over for now....

18 responses so far

  • anon says:

Interesting. What's the second link? You said "a couple of powerpoint presentations.."

    I agree with your thoughts on slide #27. What are the criteria for selecting reviewers other than narrow-mindedness?

  • drugmonkey says:

    I wore down after going through the first set of slides, anon. maybe I'll get to that for another post....

    of course that is not at all what I said about selection of reviewers but if you want to see the formal approach, it is here:

    http://cms.csr.nih.gov/PeerReviewMeetings/StudySectionReviewers/HowScientistsareSelected+orStudySectionService.htm

  • iGrrrl says:

    All interesting, and thanks for posting. You note that you don't like the discontinuous on-line reviews. Have you ever participated in a chat-room type? It's electronic, but simultaneous.

  • mat says:

    Rumor has it that CSR is now going to encourage rotating Assistant Profs through study sections even before they have a grant. Did Scarpa change his mind?

  • Neuro-conservative says:

Slide 41 was most interesting to me. If phone reviews are one-quarter to one-third the cost of internet-assisted review, then I cannot see any reason for the latter.

    I have participated in several internet panels, and they simply fail to generate the level of discussion that is achieved in even the most rushed and overloaded phone reviews. If the goal is to zero in on the key strengths and weaknesses, and weigh them for overall impact, there is no substitute to having several people paying attention to the same thing at the same time.

    I would love to hear someone explain the supposed advantages of the internet review. As far as I am concerned, they don't even save time, what with all the flipping back and forth and hitting refresh.

    The only "advantage" I have observed is that reviewers who want to make zero commitment to the process have an easier time hiding and avoiding participation.

  • DrugMonkey says:

    N-c, I tend to agree with you. I suspect the value is 1) cost; 2) it sounds hitech and 3) it gets more reviewers to "participate", on the face of it.

If you will recall, Scarpa was at one stage pushing the idea that discussion could be done away with entirely, relying instead on the manuscript review method. If one credited him with playing the long game, the Internet forums and new review style are all pushing toward demonstrating that discussion doesn't move scores. The more he can quell actual discussion, the more he can prove it unnecessary.

  • Neuro-conservative says:

    Interesting theory, DM. This sounds somewhat familiar -- didn't you post some Scarpa slides to that effect somewhere on your many blogs?

  • becca says:

    If discussion doesn't (generally) move scores, but *does* increase people's perception that the process is fair*, is the money well spent?
    I kind of take it for granted that anything NIH spends money on these days will be better in line with how I would spend it than 80-90% of the rest of my tax dollars, so I don't sweat the problems. But *my* job/ass is not currently on the line, so I don't care overmuch about the Absolute Fairness of the Process... I wouldn't fault those who do.

*this, of course, would also have to be tested empirically

  • whimple says:

    If discussion doesn't (generally) move scores, but *does* increase people's perception that the process is fair*, is the money well spent?

    The short answer is 'no', because whether or not the applicants perceive the process as "fair" is completely irrelevant to the mission of the NIH.

  • DrugMonkey says:

    I disagree whimple. If it gets bad enough then CongressCritters start meddling. And that isn't good for the mission.

  • Guess says:

    "The short answer is 'no', because whether or not the applicants perceive the process as "fair" is completely irrelevant to the mission of the NIH".

I disagree with the statement, unless it is explained better. Whether or not it moves scores, discussing an application puts it in a consensual context of scientific merit and potential benefit to society and to science itself. That seems relevant to the NIH mission (to me) and it should help mitigate "subjectivities" in reviewing.

    Just my two cents. From somebody "hors de champs"

  • igrrrl says:

When I ran focus groups of reviewers last summer, one comment that jumped out was that the longer an application was discussed, the worse its score got. This may have been true for a while, but my goal in the focus groups was to find out how the 12-page R01 had or hadn't changed how they reviewed the grants, or how the study section ran.

At the regional meeting in Ft. Lauderdale, I talked with one of the I/C review officers, and he brought up the chat room model as one he'd tried where he liked the results. He didn't seem to think that the straight mail review, manuscript style, or the discontinuous blog-style review was as effective. A live chat room, he thought, allowed the quiet, thoughtful people to speak without constantly being crowded out by those who must comment on everything, while preserving the give and take you want in a real-life meeting. It had the advantage of happening in real time.

    On the fairness note, I think NIH wants the best science funded (that's what they keep saying), with the caveat that the institutes and centers have programmatic priorities. By that view, they'd want peer review to both be and seem as fair as humanly possible.

  • Meanwhile, as you guys struggle to get funded, NIH is required to spend billions of government funds on a single cohort study that appears to have lost its way...
    http://tinyurl.com/6jsdvze

  • Grumble says:

    "1.3 grants per PI to 1.6 Research Project grants per PI submitted each year is the range. Shows how skewed some of our experiences are but fits with the data on the Rockey blog about how most PIs only carry 1 or 2 grants."

    So let me get this straight. 1.3 to 1.6 submissions per year indicates that PIs submit a grant in roughly half of the three council rounds per year, with the end result that they have 1 to 2 grants at any one time. In other words, PIs have to be in a state of grant-writing roughly half the time just to maintain enough funds to run a reasonably sized lab.

Of course, one can do other things while writing grants, so it's probably an exaggeration to say that PIs spend 50% of all their work time on grant writing. Nevertheless, that percentage is significant, and certainly approaches or exceeds 50% among those who are just starting out. How many billions of dollars are being spent (by the NIH and by colleges and universities) to pay scientists to ask for money, rather than to do actual science?

    This system strikes me as incredibly inefficient.

  • drugmonkey says:

    Not to mention it is technically illegal to work on grant applications while being paid from federal funds, Grumble. Good thing we work for free after our first 40 hrs, eh?

  • drugmonkey says:

To be slightly more constructive, I return to my solution for the churning problem. NIH should find a way to roll just-missed proposals forward for pickup in the next round. It would take a few rounds but this would drop the number of apps significantly.

  • Grumble says:

    "NIH should find a way to roll just-missed proposals forward for pickups in the next round."

    Doesn't NIH already do this? A NIDA PO told me that applications are considered for 1 year (3 rounds) after the round in which they were evaluated. I know of cases where grants with marginal scores (nowadays, between 10th and 20th percentile, at least at NIDA) were funded *after* the 1 year had expired.

  • drugmonkey says:

    Not on the scale I am suggesting.
