I thought one of the Twitts that I follow was intentionally baiting me by linking to a recent editorial in Nature Neuroscience. Turns out I am very pleasantly surprised by the degree of balance. For background, this editorial takes up the hoopla over the NIH's practice of using out-of-initial-priority-score exceptions (aka "pickups") to fund investigators who have never held a major NIH research grant before. I had observations here and here. To summarize, I am usually disappointed by the lack of understanding of the NIH grant business displayed in media accounts of this particular nuance of the funding picture.
Nature Neuroscience appears to grasp nuance, bravo.
grant writing is a skill that generally improves with experience. However, other factors also figure into grant peer review scores and all of them favor established investigators. In deciding how likely the proposal is to successfully yield data, reviewers are liable to consider the prior track record of the investigator, preliminary data, which is much more difficult to obtain when starting a new lab, and personal connections, which established investigators have had more time to build.
Nice. Some recognition that there miiiiiggghhht just be some things about New Investigators' treatment that are not necessarily pertinent to grant quality...
The new, shorter application could theoretically even the playing field, as all investigators now have to adapt to the new format, but with less information included to form a judgment, information outside of the proposal, such as an investigator's previous track record, is likely to exert a stronger influence.
Heck yeah it is. This has been a consistent refrain of both PP and YHN ever since this was first raised.
For more established investigators, an extra round of review may be painful, but for a new investigator on the verge of running out of start-up funds, it could mean shutting down projects, firing personnel or, ultimately, the loss of one's job.
Exactly. New Investigators have less tolerance for error across all aspects of grant review and funding because they have nothing else on which to rely. The more senior crowd can whinge all they want about downsizing their labs and loss of momentum and all that. There is a big difference between losing 1 of 3 techs or 2 of 4 postdocs and losing (or never having obtained) 1 of 1 of either.
Although the system may be biased against young investigators, the bigger question is whether it is truly necessary to remedy this inequality. Historically, funding has always been more difficult for young investigators to acquire and it could be argued that there is no special need to intervene now.
Here I think they go off the rails a bit, even though they recover somewhat in the comments that follow. Still, they seem to miss the central point that bias against New Investigators has always been viewed as a problem, not an intentional feature. Many NIH initiatives (e.g., the R29 FIRST award, the New Investigator checkbox and related reviewer instructions) have been launched to fix the problem.
Formalizing the NIH's discretion to fund R01s from young investigators has been criticized on the grounds that it circumvents the current peer review system, subverting the NIH mission to fund 'the best science'. Seen superficially, it can appear unfair; for every grant awarded to an early-stage investigator below the payline, another grant with a better score remains unfunded. However, these assertions rest on the assumption that the scores accurately reflect the quality of all proposals. This seems an unrealistic notion, given the known limitations of the current system.[emphasis added]
This is what has me so enamored of the editorial. I so rarely see this pointed out by any officialdom of the NIH or in any popular media coverage of the issue. I tend to suspect that the NIH has a vested interest in not admitting this central fact, i.e., that the initial priority score is not a perfect reflection of the unbiased, objective quality of the proposal.
It is not. As I say repeatedly, whenever you have actual people making decisions about things, you have bias. Pretending that people are not biased evaluators is just plain stupid. (Plus, it reflects a willful ignorance of a wealth of psychological studies, and that is just annoying.)
The only real solution to personal bias in judgment is to put mechanisms in place that tend to even out or counter biases, insofar as they can be identified.
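To make the pickup mechanism concrete, here is a minimal sketch. This is my toy model, not the NIH's actual process and not anything from the editorial, and every number in it (the payline, the relaxed cutoff, the percentiles) is invented for illustration. The idea is simply: fund strictly by score up to the payline, then pick up New Investigator applications that scored just outside it.

```python
# Toy sketch (not NIH policy): a payline-plus-pickup rule as a
# bias-countering mechanism. All cutoffs and scores are hypothetical.

from dataclasses import dataclass

@dataclass
class Application:
    pi: str
    percentile: float       # lower is better
    new_investigator: bool

PAYLINE = 10.0              # hypothetical: fund everything at or under the 10th percentile
NI_PICKUP_LINE = 15.0       # hypothetical: relaxed cutoff for New Investigators only

def fund(apps):
    """Fund strictly by score up to the payline, then pick up New
    Investigator applications scoring between the payline and the
    relaxed cutoff."""
    by_score = [a for a in apps if a.percentile <= PAYLINE]
    pickups = [a for a in apps
               if a.new_investigator and PAYLINE < a.percentile <= NI_PICKUP_LINE]
    return by_score + pickups

apps = [
    Application("established PI A", 8.0, False),   # inside payline: funded
    Application("new PI", 13.0, True),             # outside payline: picked up
    Application("established PI B", 13.0, False),  # outside payline: not funded
]
for a in fund(apps):
    print(a.pi, a.percentile)
```

The point of framing it this way is that the pickup list is small and targeted: it only reshuffles funding in the narrow band where we have reason to believe the scores are biased against a class of applicants, rather than second-guessing the whole review.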