The Feb 2010 edition of the Office of Extramural Research Nexus contains yet more explanation of the way reviewers are supposed to use the "overall impact" scoring field.
What we call the "Overall Impact" of the application is the compilation of the evaluation of the review criteria. As reviewers assess Overall Impact, they take into account the individual review criteria and provide an overall evaluation of the likelihood for the project to exert a sustained, powerful influence on the research field(s) involved. The following points provide some clarification between Significance and Overall Impact:
* Takes into consideration, but is distinct from, the core review criteria (significance, investigator(s), innovation, approach and environment).
* Is not an additional review criterion.
* Is not necessarily the arithmetic mean of the scores for the five scored review criteria.
* Is the synthesis/integration of the five core review criteria that are scored individually and the additional review criteria, which are not scored individually.
[Emphasis added, but I could just bold the whole damn thing.] This is supposed to help reviewers? More importantly (I presume and assert), it is supposed to help reviewers act more consistently with each other?
This is exactly the sort of thing that drives me crazy about CSR and the complete and utter mess that is their reviewer instruction.
This does not help me resolve how this new approach is supposed to work. In fact, detaching the Overall Impact score from the individual criterion scores totally undercuts the point of the new system.
The way they keep harping on the Overall Impact communicates to me that this is no different whatsoever from the prior approach, in which the Preliminary Score suggested by a reviewer was supposed to be a "synthesis/integration of the five core review criteria and the additional review criteria". All the hoopla about a new scoring approach was supposedly justified on the basis of re-orienting reviewer behavior away from obsession with the minutiae of the experimental plan and toward Impact and Significance. Now they seem to be saying "yeah, never mind all that, you are still free to express whatever balance you see fit". Whut?
Look. CSR. Can we talk? Reviewers, especially when just starting out, are happy to do the job you ask of them. They are looking to do a good job and they are very smart people who are able to follow instructions. If you give them vague and often contradictory sources of information, they are left with their own biases of circumstance. Mostly having to do with how they have been treated by study sections over the initial years of their own grant seeking. Modified by what the geezertariat in their home department has to say about proper grant-reviewing behavior. Leavened by random insanity. And, in a narrow subset of cases, properly educated on grant review because they are devotees of YHN.
This all adds up to variance. Which is fine for some aspects of review; we want some diversity on the science part. But what we don't want is a lot of diversity over essentially irrelevant stuff like grantsmithing decisions about what to emphasize and what not to emphasize in the application. Or over hard-and-fast StockCritique "rules" like "too many notes" and "did not exhaustively review potential pitfalls".
Do you think they ever test whether their initiatives, instructions, and whatnot actually change reviewer behavior?