You may see more dead-horse flogging than usual, folks. The commentariat is not as vigorous as I might like yet.
This seems well-measured and an important point. Wouldn't a bigger problem be if all of reviewers exactly agreed about rankings? That would indicate some toxic groupthink, right?
— Terry McGlynn (@hormiga) March 13, 2018
Absolutely. We do not want 100% fidelity in the evaluation of grant "merit". If we did, and review were approximately statistically representative of the funded population, we would all end up working on cancer in the end.
Instead, we have 28 ICs (Institutes or Centers). These are broken into Divisions that have fairly distinct missions. There are Branches within the Divisions and multiple POs who may have differing viewpoints. CSR fields a plethora of study sections, many of which have partially overlapping missions, meaning a given grant could be reviewed in any one of several different sections. A standing section might easily have 20-30 reviewers per meeting, and your grant might reasonably be assigned to any of several different sets of three for primary assessment. Add to this the fact that reviewers change over time within a study section, even across rounds to which you are submitting approximately the same proposal. There should be no wonder whatsoever that the review outcome for a given grant might vary a bit under differing review panels.
Do you really want perfect fidelity?
Do you really want that 50% triage and another 30-40% scored-outside-the-payline to be your unchangeable fate?
Of course not.
You want the variability in NIH Grant review to work in your favor.
If a set of reviewers finds your proposal unmeritorious, do you give up* and start a whole 'nother research program? Eventually quitting your job and doing something else when you don't get funded after the first 5 or 10 tries?
Of course not. You conclude that the variability in the system went against you this time, and come back for another try, hoping that the variability in the system swings your way.
Anyway, I'd like to see more chit chat on the implicit question from the last post.
No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.
The only debate here is how much variability we expect there to be. How much precision do we expect in the process?
Well? How much reliability in the system do you want, Dear Reader?
*ok, maybe sometimes. But always?