Comrade PhysioProf recently noted another post on grant review scoring data from the NIGMS Director, Jeremy Berg. One of the comments over there, from anon reviewer, speculated that the Innovation criterion score is only poorly associated with the Overall Impact score because of reviewer confusion. The comment suggested reviewers struggle to define Innovation, as if they do not know what it means.
Nonsense. I replied as follows:
AR, I would suggest the "difficulty" reviewers have with the Innovation criterion is not confusion over what it really means. Rather, it is *resistance* to the notion that Innovation should be more important than Approach and Significance. They just are not on board with this top-down emphasis from the NIH. So they strive to djinn up Innovation compliments for apps that are obviously lacking in innovation because they like the approach and/or significance.
Right? Reviewers are not idiots. When you see some gibberish in the Innovation section of the critique written by your fellow reviewers you do not conclude they are fools. You conclude, quite rightly, that the reviewer liked an application that lacks any sign of innovation for other reasons.
The idea that NIH-funded science should be all Innovation, all the time is idiocy. In the extreme. We'd never get anywhere without people doing the unglamorous work to follow up, verify, utilize, translate, generalize, extend and connect with the most innovative science.
Reviewers know this, which is precisely why the NIH obsession with innovation fails to translate to study section reviewing behavior.
added: I urge my readers to go over to the NIGMS Feedback Loop blog and comment. If you want to see more of these types of data out of other NIH Institutes and Centers, it seems obvious to me that a show of interest on the part of the NIH-funded extramural research force (not just PIs, everyone) would be a good thing.