Ask DrugMonkey: How do we focus the reviewer on 'Innovation'?

Mar 18 2014 Published by under Fixing the NIH, Grant Review, NIH, NIH funding

As you are aware, Dear Reader, despite attempts by the NIH to focus the grant reviewer on the "Innovation" criterion, the available data show that the overall Impact score for a NIH Grant application correlates best with Significance and Approach.

Jeremy Berg first posted data from NIGMS showing that Innovation was a distant third behind Significance and Approach. See Berg's blogposts for the correlations with NIGMS grants alone and a followup post on NIH-wide data broken out for each IC. The latter emphasized how Approach is much more of a driver than any of the other criterion scores.

This brings me to a query recently directed to the blog which wanted to know if the commentariat here had any brilliant ideas on how to effectively focus reviewer attention on the Innovation criterion.

There is a discussion to be had about novel approaches supporting innovative research. I can see that the Overall Impact score correlates well with the Approach score and not very well with the Innovation criterion score. This is the case even for funding mechanisms which are supposed to be targeting innovative research, including specific RFAs (i.e., not only the R21).

On one side, this is understandable, given reviewers' concerns over the high risk associated with innovative research and the lack of solid preliminary data. But on the other side, risk is the very nature of innovative research, and an application should not be criticized heavily for this supposed weakness. In my view, for innovative research the overall score should correlate well with the Innovation score.

So, I am wondering whether the language for these existing review criteria should be revised, whether an additional review criterion instructing reviewers to evaluate innovation appropriately should be added, and how this might be accomplished. (N.b. heavily edited for anonymity and other reasons. Apologies to the original questioner for any inaccuracies this introduced -DM)

My take on NIH grant reviewer instruction is that the NIH should do a lot more of it, instead of issuing ill-considered platitudes and then wringing their hands about a lack of result. My experience suggests that reviewers are actually really good (on average) about trying to do a fair job of the task set in front of them. The variability and frustration that we see applicants express about significantly divergent reviews of their proposals reflects, I believe, differential reviewer interpretation about what the job is supposed to be. This is a direct reflection of the uncertainty of instruction, and the degree to which the instruction cannot possibly fit the task.

With respect to the first point, Significance is an excellent example. What is "Significant" to a given reviewer? Well, there is wide latitude.

Does the project address an important problem or a critical barrier to progress in the field? If the aims of the project are achieved, how will scientific knowledge, technical capability, and/or clinical practice be improved? How will successful completion of the aims change the concepts, methods, technologies, treatments, services, or preventative interventions that drive this field?

Well? What is the reviewer to do with this? Is the ultimate pizza combo of "all of the above" the best? Is the reviewer's pet "important problem" far more important than any sort of attempt to look at the field as a whole? For that matter, why should the field as a whole trump the Small Town Grocer interest...after all, the very diversity of research interests is what protects us from group-think harms. Is technical capability sufficient? Is health advance sufficient? Does the one trump the other? How the hell does anyone know what will prove to be a "critical" barrier and what will be a false summit?

To come back to my correspondent's question, I don't particularly want the NIH to get more focused on this criterion. I think any and all of the above CAN represent a highly significant aspect of a grant proposal. Reviewers (and applicants) should be allowed to wrangle over this. Perhaps even more important for today's topic, the Significance recommendations from NIH seem to me to capture almost everything that a peer scientist might be looking for as "Significance". It captures the natural distribution of what the extramural scientists feel is important in a grant proposal.

You may have noticed over the years that for me, "Significance" is the most important criterion. In particular, I would like to see Approach de-emphasized because I think this is the most kabuki-theatre-like aspect of review. (The short version is that I think nitpicking well-experienced* investigators' description of what they plan to do is useless in affecting the eventual conduct of the science.)

Where I might improve reviewer instruction in this area is in trying to get reviewers to be clear about which of these suggested aspects of Significance are being addressed, and then encouraging them to state more clearly why these sub-criteria should, or should not, be viewed as strengths.

With respect to the second point raised by the correspondent, the Innovation criterion is a clear problem. One NIH site says this about the judgment of Innovation:

Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions? Are the concepts, approaches or methodologies, instrumentation, or interventions novel to one field of research or novel in a broad sense? Is a refinement, improvement, or new application of theoretical concepts, approaches or methodologies, instrumentation, or interventions proposed?

The trouble is not a lack of reviewer instruction, however. The fact is that many of us extramural scientists simply do not buy into the idea that every valuable NIH Grant application has to be innovative. Nor do we think that mere Innovation (as reflected in the above questions) is the most important thing. This makes Innovation a different sort of problem: it sits co-equal with criteria whose very status as major criteria is not in debate.

I think a recognition of this disconnect would go a long way to addressing the NIH's apparent goal of increasing innovation. The most effective thing that they could do, in my view, is to remove Innovation as one of the five general review criteria. This move could then be coupled to increased emphasis on FOA criteria and an issuance of Program Announcements and RFAs that were highly targeted to Innovation.

For an SEP convened in response to an RFA or PAR that emphasizes innovation....well, this should be relatively easy. The SRO simply needs to hammer relentlessly on the idea that the panel should prioritize Innovation as defined by...whatever. Use the existing verbiage quoted above, change it around a little....doesn't really matter.

As I said above, I believe that reviewers are indeed capable of setting aside their own derived criteria** and using the criteria they are given. NIH just has to be willing to give very specific guidance. If the SRO / Chair of a study section makes it clear that Innovation is to be prioritized over Approach then it is easy during discussion to hammer down an "Approach" fan. Sure, it will not be perfect. But it would help a lot. I predict.

I'll leave you with the key question though. If you were to try to get reviewers to focus on Innovation, how would you accomplish this goal?

___
*Asst Professor and above. By the time someone lands a professorial job in biomedicine they know how to conduct a dang research project. Furthermore, most of the objections to Approach in grant review are the proper province of manuscript review.

**When it comes to training a reviewer how to behave on study section, the first point of attack is the way that s/he has perceived the treatment of their own grant applications in the past***. The second bit of training is the first round or two of study section service. Every section has a cultural tone. It can even be explicit during discussion such as "Well, yes it is Significant and Innovative but we would never give a good score to such a crappy Approach section". A comment like that makes it pretty clear to a new-ish reviewer on the panel that everything takes a back seat to Approach. Another panel might be positively obsessed with Innovation and care very little for the point-by-point detailing of experimental hypotheses and interpretations of various predicted outcomes.

***It is my belief that this is a significant root cause of "All those Assistant Professors on study section don't know how to review! They are too nitpicky! They do not respect my awesome track record! What do you mean they question my productivity because I list three grants on each paper?" complaining.


  • iGrrrl says:

    First, a hobbyhorse. I intensely dislike how the NIH Innovation criteria are stated. An application cannot challenge or seek. (Pro tip: Do not ascribe action to things that cannot act. Your project cannot have an objective. You have an objective that you hope to achieve through that project. /end hobbyhorse)

    NSF has been struggling with reviewer conservatism in the face of the agency's stated goal of funding transformative research. Admittedly, "transformative" is a high bar, but even with the definition I've linked to, reviewers aren't sure what to make of it. The Overall Impact score can be thought of as resting on the Significance coupled with the likelihood of success. In reviewers' brains, the likelihood of success primarily rests on the other four criteria, with Approach as the most important, and Innovation as a possible negative.

    Many applicants and reviewers view Innovation as only technical innovation, but if you want to do epidemiology, getting innovative with your statistical approaches is not going to go well in review. What about conceptual innovation, disrupting a paradigm? Another common problem is starting the Innovation section with, "This is innovative because…" That shows that you assume that your reviewer shares your background and biases, and that they will immediately know the answer to the "innovative as compared to what?" question. Like all parts of the proposal, Innovation should be presented as an argument.

  • rxnm says:

    Be innovative!

    Also, try to discover something surprising.

    And buy low / sell high.

  • Joe says:

    "If you were to try to get reviewers to focus on Innovation, how would you accomplish this goal?"
    First, spell it out clearly in the Innovation section of the proposal. Next, harp on it in your personal statement in the biosketch, harp on it on the specific aims page, and in your descriptions of expected results, mention how great an advancement it will be when you discover these things.

    I was recently on a panel and had 2 applications that were truly innovative. One developed a new technique and used it to do discovery-type experiments on highly significant diseases. That one got an outstanding score. The other used a drug and discovered some very interesting and unexpected things about certain molecular processes. That one got hammered on Approach, and got only an excellent score.

    I generally find Innovation to be something that can be a big plus for the score, but if standard methods work fine and the proposal will address a significant issue, nobody cares about innovation. It can be a big plus but can't be a big minus.

  • The Other Dave says:

    I think the problem is that Innovation is a slave to Significance. A totally cool new way to discover the same old thing isn't Significant, although it might be innovative. And a totally amazing ambitious potentially paradigm-shifting possible result is still Significant even if the approach isn't Innovative.

    I personally don't mind that Significance and Approach are the most highly correlated with proposal success. That makes sense to me. When I read a proposal, I try to figure out: 1) whether the applicants are trying to accomplish anything worthwhile, and 2) how likely they are to do that. Beyond those two things, I really don't care.

    But I'll play along. If NIH really wants to encourage innovation, it needs to better define it. Their definition as posted above is basically a re-hashing and mish-mashing of Significance ("Does the application challenge and seek to shift current research or clinical practice paradigms...") and Approach ("utilizing... approaches or methodologies, instrumentation, or interventions? Are the ... approaches or methodologies, instrumentation, or interventions... ...application of theoretical concepts, approaches or methodologies, instrumentation, or interventions...")

    Better yet, NIH should just drop it as a criterion. Then we could all stop trying to write bullshit 'Innovation' sections that reviewers will ignore anyway because they don't understand it any better than the rest of us.

  • Davis Sharp says:

    Significance: 3. This grant proposes to develop an early detection method for pancreatic cancer. If successful, this will lead to earlier intervention and higher survival rates.

    Innovation: 1. No-one has ever proposed using psychics to diagnose pancreatic cancer.

    Approach: 9. This shit is fucked up.

    Overall Impact: 4. Because I don't want to look like the bad guy.

    But seriously, ToD's two review criteria are all that are necessary.

  • Pinko Punko says:

    Haha.

    DS, if that grant is a 4, I'd love to see that panel's priority scores/percentiles.

    I got a 12!!!!!!!!! 40%ile 🙁

  • physioprof says:

    The latter emphasized how Approach is much more of a driver than any other of the criterion scores.

    After long experience with the criterion score system, I have come to the conclusion that criterion scores are worse than useless.

    First, they don't drive overall impact scoring; rather, it is overall impact scoring that drives criterion scoring. Reviewers decide on their overall impact scores taking the grant as a whole, and then make up criterion scores to fit the overall impact. The reason approach and significance scores look like they "drive" overall impact is because the underlying substance of these two criteria is what really matters about a grant.

    Second, they can be very misleading due to lack of recalibration of criterion scores between discussed and undiscussed applications. The only real utility of criterion scores is their relative value within review of a single application: if your significance is much better than your approach, then maybe you have a shot on resubmission, but if your approach is much better than your significance, then you're fucked.

    Criterion scores should be eliminated.

  • rxnm says:

    yeah, let's make grant writing even more like international figure skating. pp is exactly right...all this fuzzy "artistic merit" bullshit is a toilet for post-hoc rationalizations for "i liked/disliked it" or a wide spectrum of positive or negative biases.

  • drugmonkey says:

    I agree about the post-hoc rationalization, indeed I think the entire critique is an exercise in confirmatory writing at times. I could see no reason for giving pseudo quantification when the overall impact score was explicitly said to have no relationship to the criterion scores. Doublespeak.

    But post hoc analysis of why one came to a conclusion via Gestalt is not necessarily wrong. Just....limited and imprecise.

  • Anonymouse says:

    "How do we focus the reviewer on 'Innovation'?"

    Redefine the problem to: "how to avoid making the reviewer's senile brain uncomfortable with genuine innovation, while giving him some rationale to applaud the proposal as innovative".

  • AcademicLurker says:

    yeah, let's make grant writing even more like international figure skating.

    Who gets to be Tonya Harding?

