Disingenuous or Underinformed? NIAID on the R21

Jun 07 2012 Published under NIH, NIH Careerism, NIH funding, Peer Review

Honestly I never know. NIAID has a feature article on the R21 posted.

They profess doe-eyed failure to understand why most PIs include preliminary data, even as they show that only one of their funded R21s failed to include it.

Maybe my head will stop shaking in disbelief by tomorrow. Maybe.

47 responses so far

  • drugmonkey says:

    PS: As a reminder, Program Officers listen to the discussion of grants in the peer review meetings, often across the different sections that might be reviewing the grants that fall into their sphere of interest. They do their job across intervals of time far longer than the typical four-year stint of an empaneled reviewer.

  • Beaker says:

    What's particularly annoying is that they place the "blame" for including preliminary data on the investigators--as if study section behavior had no impact on the decision to include preliminary data. This is typical, as it is difficult to find any public statements where the programmatic part of the NIH criticizes the CSR part.

  • VIrgil says:

    Well, think yourself lucky if you do work under the remit of NIAID... at least you still HAVE R21s!

    NHLBI dumped the R21 a while back, claiming it was not working as planned (too many junior investigators treating it like a "mini R01" as a segue to a career). Now all those folks have to use other mechanisms (K) to kickstart their careers and get out of post-doc-dom.

  • drugmonkey says:

    NHLBI has always been a Mechanism Maverick, IIRC. But the current "Kill the R21" thrust is coming from many places, including OER head Sally Rockey.

  • physioprof says:

    Not that this sort of thing could ever possibly happen, but if one were being unrealistically cynical, one might speculate wildly that these statements--as well as the choices made for the example R21s they have posted: four mini-R01s and two big-swinging-dicke reacharounds--are intended as messages from the lower level program staff to both the highest echelons of NIAID management as well as to potential R21 applicants. But thatte's just unsupportable crazy talk, so never mind.

  • Jeremy Berg says:

    NIGMS discontinued its participation in the parent R21 program when I was director. From my perspective, the major problem was that reviewers' comments and scores did not align well with the program goals. NIGMS frequently saw R21 reviews with good scores and comments such as 'this will almost certainly work based on the impressive preliminary data' and R21 reviews with poor scores and comments such as 'this could be a real breakthrough for the field but it is high-risk and may not work'. NIGMS tended to fund many R21 grants "out of order" to favor those applications that met the programmatic objectives. Later, NIGMS led the formation of the EUREKA R01 program, explicitly supporting higher-risk, potentially high-impact research that had not yet been accomplished (and reviewed by the NIGMS Office of Scientific Review). In an ideal world, educating reviewers on CSR standing study sections about the purpose of the R21 was an alternative, but this proved ineffective. NIH also started using the R21 for many other purposes, so reviewers had every reason to be confused.

    On another note, this material seems to be dated since it states that "Altogether, four institutes (NCI, NHLBI, NIDDK, and NIGMS) and three centers (NCMHD, NCRR, and FIC) do not support R21s through the parent PA (they do use R21s for their own opportunities)." Of course, NCMHD is now an institute (NIMHD) and NCRR is no longer in existence.

  • physioprof says:

    Jeremy, you really have to start a new blogge.

    https://en.wordpress.com/signup/

  • Joe says:

    You'd have to be a fool not to include prelim data in your R21 application if you have the data. Particularly if you are new or new to the field. They are more likely to trust you with the money if your risky scheme looks likely to work.

    Do reviewers really judge R21's on whether they are likely to lead to an R01? The first time I wrote an R21, one of the comments I got on the first submission was along the lines of "unclear how these studies will lead to an R01". Ever since then I always include a sentence or paragraph indicating that if we find X, these studies will lead to a larger project that could include X, Y, and Z. That works in the application, but I don't really think of R21's that way. I just use them to get a couple of years of money for an offshoot of a project that couldn't be its own R01.

    As for new investigators, NIAID seems to think that reviewers may be seeing the R21 as a new version of the R29, a small award for early investigators. "I'm not sure Jimmy is ready for an R01 just yet. Let's see what he can do with an R21 first."

  • Dr Becca says:

    NIGMS tended to fund many R21 grants "out of order" to favor those applications that met the programmatic objectives.

    Very interesting, since I'm pretty sure this is exactly what happened with my grant that just got picked up (NIMH).

    For the record, I didn't include prelim data in mine, though I did include some "random" unpublished data that was tangentially related to demonstrate that I was capable of the methods.

  • physioprof says:

    For the record, I didn't include prelim data in mine, though I did include some "random" unpublished data that was tangentially related to demonstrate that I was capable of the methods.

    That absolutely *is* "preliminary data", defined as "primary data relevant to the likelihood of successful pursuit of the proposed specific aims".

  • Pinko Punko says:

    I really wish that R21s were more generally supported. They are seed money for new projects that could be very worthwhile for investigators who are in between funding (not saying that this should be their specific purpose), but the lack of preliminary data is what kills many interesting and possibly significant ideas. I really wish NIGMS had them. Why waste 5 years on something risky when 2-3 years might be enough to evaluate possible avenues for success?

  • Arlenna says:

    Dude, Jeremy, my Eureka's major negative comments were "this would be really high reward but looks high risk." like, whuh? The PO even said "well... Yeah...."

  • phagenista says:

    I didn't read the NIAID piece as doe-eyed failure so much as a blaring warning that "REVIEWERS EXPECT PRELIMINARY DATA." Doe-eyed passing of the buck.

    Congrats on the funding Dr. Becca!

  • whimple says:

    I interpreted the Eureka mechanism as a way to give more money to those that already had a lot of it.

  • drugmonkey says:

    I would really, really like to know what was tried on the "educate reviewers" front. As I've said, the only time we wasted much time in meta-review on the section I was on was in discussing the purpose of the R21. SROs and (once) POs stuck to the "we're not telling you how to view it" line. Very frustrating.

  • qaz says:

    R21s need preliminary data because ideas are a dime a dozen. Plausible ideas are rarer. To prove an idea plausible requires preliminary data.

    But in my experience, the preliminary data in an R21 and an R01 are very different. The preliminary data in an R21 are "this is feasible" - we can do the technique, the basic behavior occurs - but the preliminary data in an R01 is "this project will likely give us an important answer" - a couple of subjects shows an interesting trend, data collected from another experiment not perfectly designed for this project shows something intriguing, etc. The R21 data is about feasibility. I had preliminary data in my R03. (The method works - nothing about how the data would come out.)

    Not all preliminary data is the same.

    Note - this is what Dr. Becca said she did (and got funded for it). In my opinion, that's what the R21 *should* have. In the study sections I've seen, there is a clear belief that R21s should be smaller and more exploratory than R01s. R21s are often used for a lab in one field to move over into another. In my field, R21s work very well.

  • becca says:

    DM- I get where you're coming from. I think it's pointless to have an R21 for 'high risk' research and then have them reviewed like this. But. Dude. Did you ever think about what an SRO's job is like, and how that job could be made even *less* pleasant by having to 'educate reviewers'??

    qaz- NIAID's data is pretty clear: the R21 is a GREAT bet for an established PI to team up with another established PI and branch out into a new area.

    It would be neat if NIAID could break down the preliminary data into 'feasibility demonstration' vs. 'tantalizing evidence that must be further substantiated prior to publication'.

  • drugmonkey says:

    Becca-

    SROs advise the panel "we do not make decisions about funding" about five times per hour. They can add "R21s are not judged on Preliminary Data" to their list of stock guidance phrases.

  • Jeremy Berg says:

    Arlenna: At least for NIGMS (other ICs also support EUREKAs and do their own reviews), I and other program officers personally oriented reviewers regarding the purpose of the program. Even with this orientation, comments such as the one that you received still appeared; it is hard to break old habits and mindsets.

    Whimple: Providing more funds to those already well funded was absolutely NOT the goal of the EUREKA program although the funded investigators were a mix of individuals at various career stages and levels of funding. See http://www.nigms.nih.gov/Research/Mechanisms/EUREKAGrants.htm for a list of awardees through 2010.

    Drugmonkey: Because R21s for NIGMS are reviewed in a wide range of study sections, our efforts consisted of meeting with division directors in CSR to share our concerns and encouraging our program directors to speak up about the programmatic goals at review meetings when R21s were being discussed. We did (apparently) make some progress with some study sections, but the problems remained. The separation of program and review at NIH does make such education efforts difficult to implement in a uniform or thorough manner.

  • Drugmonkey says:

    The majority of my funded grants had "feasibility" prelim data but zero to very, very minimal prelim specific to the scientific aims. That includes R01 and other mechs....

  • NeuroGuy says:

    The system is broken, and I'm happy to see someone from the NIH admitting as much on this very thread. R21s are supposed to be "high-risk" and "high-reward" and preliminary data is "not required", according to the NIH's own guidelines. However, reviewers in study sections do whatever the hell they want, and what they don't want are transformative paradigms threatening to upset the apple cart of their multiple-times renewed same-old same-old R01s and those of their buddies. Compliant SROs who don't slap the hands of those reviewers don't help, but of course reviewers would then simply resort to the tried-and-true StockCritiques about how the proposal was "highly ambitious", etc.

    How about a modest proposal: all R21s to be reviewed in a separate study section by reviewers without a current R01. I was actually on such a study section years ago (only a few of the panel actually did have an R01). One such reviewer, a superstar with millions in NIH funding, stated that this study section was the best he had ever been on.

  • becca says:

    I don't see this as "the system is broken" so much as "PIs* are stubborn as mules". Which, everybody who can remember their thesis committee meetings already knew.

    *I'm sure trainees are worse (albeit about different things).

  • Jeremy Berg says:

    One of the lessons that I took from years of watching many different programs intended to foster "high risk-high reward" research is that high-risk research is...wait for it...risky. It is very difficult to get reviewers to favor applications that are potentially very important but might not work (either for technical or conceptual reasons) over applications describing interesting science that almost certainly will work.

    Even in programs such as EUREKA or the NIH Director's Pioneer or New Innovator programs that are reviewed by separate review groups, with reviewers very specifically oriented about the goals of the programs prior to receiving the proposals and writing their reviews, the same issues came up with some frequency. As Becca notes, scientists are stubborn (and set in their ways). Based on my experience, I do believe that separate review groups are likely to be the best path forward, but for applications that span the entire range of NIH-supported science, it is difficult to have reviewers who are well matched in expertise with the applications. How important this is for review (particularly of high-risk research) is a topic for another discussion.

  • DrLizzyMoore says:

    I spent some quality time yesterday with the NIAID newsletter and their sample R21's. My time on RePORTER yesterday was much better spent.

    CPP is right about NIAID's sample R21's. There was one that had 4 (4!!) Specific Aims. That's ridiculous for a 2-year grant--or not, considering it got funded, but whatevs. Common phrase seen in the summary statements: 'the PI has a well established track record in this field'. Bully for them. For a n00b, none of this information is helpful or particularly informative.

    On NIH RePORTER, I scavenged my field for R21 grants. There were not many, but they were spread across n00bs and established PIs alike. The 'weight' of the projects was similar to the weight of the one that I'm contemplating. I also noted which study sections these proposals went to, double-checked the rosters with the CSR website and HA! Friendly. Faces.

    My R21 will have preliminary data. My proposal is addressing a new area in the field and in my lab, so I need to show that the basic tenet of the idea/hypothesis is testable AND, more importantly, that we can do this shitte in my lab. Of note, neither of the Aims (there will be two) would be included in an R01. Respect the mechanism, yo. The goal is to submit the R21 (after maternity leave) in October and the R01 in Feb.....

  • NeuroGuy says:

    Yes, part of the problem here is that some are "stubborn as mules". But there is also a built-in conflict of interest inherent in the system, some PIs/reviewers are intellectually dishonest, and they can get away with it because their reviews are anonymous. There is a conflict of interest between determining "the best science" (as objectively as that can be determined), and "the science that best makes my own research seem important and increases the likelihood of renewing my R01".

  • Drugmonkey says:

    Evidence for that bold assertion, NeuroGuy? Sat on many panels have you?

    JB- CSR seems fond of piloting new approaches to review. They should pilot taking a known quantity reviewer who gets "innovation" and is convincing and charismatic (I nominate PP). Make this person chair and work with SROs to construct a panel of like-minded* reviewers.

    *SROs worth their salt *know* how their empaneled reviewers behave. The ones with long service have seen tons of reviewers- they could make a good stab at it by themselves....

  • NeuroGuy says:

    @JeremyBerg:

    Did it ever occur to you that maybe the problem is... wait for it... the reviewers themselves? If they're so risk-averse they've never placed a bet at a casino maybe they shouldn't be reviewing R21s.

    Interesting science that almost certainly will work... should be funded by R01s. Potentially paradigm-changing innovations that might not work... should be funded by R21s. Indeed at the study section I was on, more than once the SRO said of an application "this should be an R01".

    Oh, and BTW, there was no difficulty whatsoever in finding reviewers well matched in expertise with the applications, if "well matched in expertise" does not mean the same thing as "having a current R01".

  • NeuroGuy says:

    @DrugMonkey:

    Oh come on. In the immortal words of Judge Judy, don't pee on my leg and tell me it's raining. You really believe "the best science" always correlates with "that which makes my R01 renewal most likely", or that PIs are oh-so-ethical it's never been necessary to have them disclose conflict of interest for, say, money they've been getting from Big Pharma, and that has never influenced the reported results of clinical trials. Oh wait...

  • Drugmonkey says:

    Things that have happened, ever, are not the same as things that always obtain.

  • Jeremy Berg says:

    Drugmonkey: This is essentially what we did with the EUREKA program. This was done by NIGMS, but it could have been done by CSR. The NIGMS review staff found a chair and selected a panel of reviewers who were known to be not risk averse. My sense is that the reviews fit the programmatic objectives reasonably well, in that many proposals that were solid but not particularly risky or innovative did not receive good scores, whereas a few proposals that were pretty far out there but plausible did well.

    Neuroguy: I agree with you that solid, interesting science should be funded by R01s rather than R21s. The reviewers are certainly part of the problem, although this is made worse by reviewing R01s and R21s in the same study section and having to "change gears" on review criteria. My comment about having reviewers well matched in expertise related to the fact that NIGMS received a hundred or more EUREKA applications spanning the very broad research areas covered by NIGMS. Finding three qualified reviewers for each application with this much breadth is tough without having a huge review group. It does not matter whether the reviewer pool is limited to NIH-funded investigators.

  • Drugmonkey says:

    Oi, JB, you're killing my faint hope.....

  • Physician Scientist says:

    The chair of the Transformative R01 Director's award received over $1,000,000 from GlaxoSmithKline in the last 5 years!

  • Physician Scientist says:

    Sorry, that wasn't clear... The chair of the study section for this year's (2012) Transformative R01 Director's award received over $1,000,000 in salary from sitting on the board of GSK.

  • physioprof says:

    There is a conflict of interest between determining "the best science" (as objectively as that can be determined), and "the science that best makes my own research seem important and increases the likelihood of renewing my R01".

    You are saying that you think that reviewers on study section purposely tank exciting proposals so that their own pedestrian boring shit doesn't look bad in comparison when it gets reviewed at some point in the future in some different study section (or SEP)? Yeah, no way have you *ever* sat on a study section. Or if you did, you were completely oblivious to what was going on around you.

  • NeuroGuy says:

    @physioprof:

    Of course such a thing never happens. I mean, PIs who are often called out for obnoxious behavior on many other blogs, and who are often sexist asshats to boot, and often have their hands in the cookie jar of Big Pharma, all of a sudden become a model of personal integrity once the doors to the study section meeting room close, where nothing they say inside of it will come out. And it never happens, because you say so. You of course have been inside every study section that has ever met in the NIH, so we can take what you say to the bank.

    And of course such proposals that get tanked are never "exciting". They are always "highly ambitious" (since the reviewers couldn't in a million years begin to do what the PI is proposing), with concerns about "lack of prior productivity" and "lack of expertise" (since the PI hasn't published any papers supporting the status quo, even though he's published a gazillion showing his grasp of the methodology, and [gasp] forgot XYZ's paper in the reference list). If there aren't preliminary data supporting the hypothesis, the approach is derided as a "fishing expedition"; whereas, if there are, the hypothesis is "already shown by the preliminary data".

  • anon says:

    @physioprof

    I am not as experienced as you and DM, but I think that NeuroGuy is making a very good and real point. Scarpa tried to remedy major problems in NIH peer review, and he succeeded with some of them. He didn't have time to bring fixes for some of the most pervasive ones to fruition due to internal resistance and corrupt politics. I assume that you're lucky enough not to have encountered any of those situations in your funding life. Other people have!

  • Drugmonkey says:

    The question, anon, is whether on average the day is ruled by craven behavior or by professionalism. I argue that it is for the most part ruled by people trying to do a hard job as best they can.

    The disappointed applicant is simply not as credible in making this estimation as is someone who has been both disappointed applicant and also a reviewer.

    Scarpa? Please. His biggest agenda seemed to be searching for ways to save money on review. Admirable but I don't think he put through a lot of positive fixes for the issues at hand. Are you aware he was the one to purge Assistant Professors from panels and to seek out academic society recommendations for more reviewers? To push for less discussion and more reliance on initial scores?

  • dsks says:

    [ hypercaffeinated rant ]

    I think the review process across the board might be rendered a little less confusing if the NIH just admitted what is already understood by most of its own and the rest of the scientific community as the reality and set up the award structure to reflect said reality. For example have a set of awards cordoned off in a category described as serving, "The best innovative, risky research with the potential for a significant leap forward in our understanding/technical ability" with its own review and scoring criteria, and a second much larger award category described as serving "The best plodding, pedestrian research with the absolute certainty of a significant shuffle forward in the field".

    There's no sense in continuing this stigmatization of the latter in preference for the former when this perspective hasn't changed the fact that 95% of grants submitted in all categories are probably of the latter variety, and, importantly, about 95% of those that are actually funded probably are, too. (Unless, of course, we're so confident of the unimportance of workmanlike science that we must concede that the right-wing politicians are right and that the NIH is currently wasting 95% of the taxpayer dollars it is allocated on science that doesn't need doing.)

    A two-tier system as proposed would at least let everybody know where they are and what they're supposed to be doing. PIs won't have to labour on crafting bullshit appeals to innovation that reviewers don't believe anyway. Reviewers will no longer feel conflicted about what to do with the many grants that propose tackling important scientific questions using conventional - nay, sometimes antiquated - and yet perfectly functional approaches*. Note, nothing will change in terms of how things are actually done, but at least everybody would be a lot less apprehensive, which is, in and of itself, a positive step towards the health and welfare of the citizenry.

    * why do molecular biologists still insist on proposing to tackle structure/function questions using some variant of the near 20 yr old substituted cysteine accessibility method? Because it fucking works.

    [ / h r ]

  • NeuroGuy says:

    @DrugMonkey:

    Again, I ask what makes you so confident that the "craven behavior" that in other areas is often discussed in this blog and elsewhere (e.g. stealing ideas from posters, delaying manuscripts in review so that another manuscript gets published first, taking credit for a post-doc's ideas and work, etc.) all of a sudden ceases when a group of people sit down at a study section. And yes, I'm both a disappointed applicant and have been a reviewer, so I don't see why you are more credible than me. PIs can be egotistical asshats.

    I'm just curious: do you deny there is a conflict of interest in the following situation. Big-name PI has been pursuing XYZ technique for years (which is what most others in the field are using), and his R01 is funded to do that. Dr. Young Gunz proposes the ABC technique, which if successful will be much cheaper and better than XYZ. Surprise! The grant doesn't get scored and the reviews come back saying the project is "highly ambitious", the investigator has "lack of productivity" since she hasn't published on the XYZ technique, and the application gets dinged for lack of preliminary data. If you believe that the NIH cares about or has a good way of handling this type of conflict of interest, then I've got a great deal on a used car you may want to have a look at.

  • anon says:

    “I argue that it is for the most part ruled by people trying to do a hard job as best they can”

    Yes. I would agree with that, and it doesn't hurt to continue to hear the voices of those who feel, rightly or misguidedly, that they were not evaluated professionally. Perfect professionalism hasn't arrived on earth yet and, in the meantime, we can always learn from criticism whether we like it or not, agree or disagree with it.

    I doubt that Scarpa purged Assistant Professors from review panels for any other reason than his conviction that it is in their best interest to strengthen and secure their future with excellent publications before entering into full service at NIH Review.

    “Scarpa’s biggest agenda seemed to be searching for ways to save money. Admirable. But I don’t think he put through a lot of positive fixes for the issues at hand”.

    DM, I think that Scarpa's agenda of doing more with less money was a lot more than admirable. Having entered NIH in mid-2005, in the middle of the funding bubble, he predicted what was coming next (globally) and anticipated mechanisms that placed the value of critical scientific contribution (i.e. serving in peer review) not in monetary reward(s) but within the sense of duty that authentic scientific leaders convey when they serve and make apparent their role in enhancing the quality of the scientific process, securing its integrity and the future of biomedical science.

    I am old, but my recent memories have not quite faded yet. And I still remember people complaining about the lack of proper economic remuneration of reviewers by NIH, and how federal auditors were telling them (investigators serving on review panels) to secure their grants and not go to NIH to review, etc. etc. Really SURREAL and totally out-of-place "sotto voce" demands in meetings (more or less public meetings).

    "Rome was not built in a day" and, of course, Scarpa might not have put through all the fixes that he could have if NIH had not been so politicized. But here are a few:

    1. From the start, an insistent, repeated general call for participation and initiatives. I don't remember, in my years of dealing with NIH, any Director starting her/his tenure with such an approach. Participation and requests for initiatives had been pretty much restricted to "special, beautiful people", which contradicts, in some ways, the open nature of science and its processes.
    2. He encouraged his collaborators to develop creative, novel ways to make the review system more efficient and adaptable to the new times, incorporating technologies never used before at NIH that had been adopted by others in different settings all over the world.
    3. He came up with strategies to enlarge the number and pool of talented scientists willing to participate as reviewers.
    4. He set up compensation mechanisms for reviewers that reward their effort and generosity in serving by facilitating the timelines for submission of their own grants.
    5. These are just a few of "his fixes", but the most important and admirable one is that he did not even attempt to do this on his own. He did it by instilling in all his collaborators the urgency of bringing out the best in themselves, in the certainty that "Together we can make things better, if not much better".

    Addendum: Who knows what Scarpa is cooking up now in cold Cleveland, other than drinking beer. Pretty sure that whatever he cooks is of quality and great taste: the taste of outstanding service to the community.

    Here is a thank-you toast to Scarpa and his NIH team!

  • physioprof says:

    anon June 9, 2012 at 7:36 am == Scarpa's mom.

  • anon says:

    Dear Physioprof and Drugmonkey,

    I can't compete with your sharpness and wittiness. Neither can my son! And yet, I want to make you the happiest, and I know how to do it! Ready?

  • physioprof says:

    Ella Fitzgerald singing Cole Porter does, indeed, make me very happy!

  • rs says:

    Hi, just heard that my R21 was unscored. This was my first-ever submission to NIH. Now my question: is it worth resubmitting in the next cycle after fixing the major comments from the summary statement, or should I try to get some more data and turn it into a new R21 or R01? What I hear from my colleague, who has also been a reviewer, is that reviewers are unlikely to read your grant again after it was triaged, so it doesn't matter what you write there, and 90% of those grants don't make it to funding. Any suggestions?

  • AcademicLurker says:

    Condolences.

    I'm not sure there are any hard and fast rules. Some folks will tell you it's a waste of time resubmitting a proposal that was triaged, but then my first R01 went straight from triaged on the A0 to funded on the A1, so anything is possible.

  • physioprof says:

    It is ludicrous to think you can make this decision without seeing the summary statement. Once you have read the critiques and seen the criterion scores, you can make an informed decision about how to proceed.

  • rs says:

    Thanks, AL and PP.
