My readers will recall that I have long expressed difficulty crediting assertions that assistant professors are poorer reviewers of grant proposals.
A recent news bit in NatureJobs describes a study (from the wording, I'd guess it was a conference presentation) of paper-review quality as a function of years spent reviewing.
Michael Callaham, editor-in-chief of the Annals of Emergency Medicine in San Francisco, California, analysed the scores that editors at the journal had given more than 1,400 reviewers between 1994 and 2008. The journal routinely has its editors rate reviews on a scale of one to five, with one being unsatisfactory and five being exceptional. Ratings are based on whether the review contains constructive, professional comments on study design, writing and interpretation of results, providing useful context for the editor in deciding whether to accept the paper.
The average score stayed at roughly 3.6 throughout the entire period. The most surprising result, however, was how individual reviewers' scores changed over time: 93% of them went down, which was balanced by fresh young reviewers coming on board and keeping the average score up. The average decline was 0.04 points per year.
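That pattern — every individual declining while the group average holds steady — is exactly what you'd expect from cohort turnover. Here is a minimal, deterministic sketch; only the 0.04 points/year decline comes from the study, and the starting score, intake rate, and tenure cap are assumptions for illustration:

```python
# Sketch with assumed numbers (only the 0.04 points/year decline is from
# the study): each reviewer's score falls every year, yet steady turnover
# keeps the pool average flat.
START_SCORE = 4.0        # assumed quality of a brand-new reviewer
DECLINE_PER_YEAR = 0.04  # per-reviewer annual decline reported in the study
NEW_PER_YEAR = 100       # assumed annual intake of fresh reviewers
MAX_TENURE = 10          # assumed: reviewers rotate off after a decade
YEARS = 30

roster = []    # start year of each active reviewer
averages = []  # pool-average score, year by year
for year in range(YEARS):
    roster.extend([year] * NEW_PER_YEAR)                   # fresh blood joins
    roster = [s for s in roster if year - s < MAX_TENURE]  # veterans rotate off
    scores = [START_SCORE - DECLINE_PER_YEAR * (year - s) for s in roster]
    averages.append(sum(scores) / len(scores))

# Once turnover reaches steady state, the average stops moving even though
# every individual reviewer's score drops every single year.
print(round(averages[MAX_TENURE], 2), round(averages[-1], 2))  # 3.82 3.82
```

The point of the toy model: a flat average over 15 years tells you nothing reassuring about individual reviewers; it can coexist with universal decline so long as new reviewers keep arriving.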
I am not surprised one bit. Nor should anyone be who thinks for just half a second about how real people actually behave. Exhaustion and cynicism have a tendency to replace energy, enthusiasm and the fear of looking like an intellectual lightweight when it comes to reviewing. Sorry, but if you really believe that people in aggregate do not suffer from these tendencies, you are utterly out to lunch and you need to spend some time interacting with actual people. Seriously. If you think that you are somehow unique...HAHAHAHAHAHA!
Anyway, the data being described seem to confirm my perspective in one specific context of interest to our readers. Editors should take note of this! The take-home message is that they should continue to work hard to cast a wide net, to involve new reviewers as much as possible, and not to stick with the same old folks for decades.
Of course, some editors abandon all pretense of actually being a scientist any time it is suggested that the geezeriat might not be all-that in comparison with young guns (of 45!!!).
"This is a quantitative review, which is fine, but maybe a qualitative study would show something different," says Paul Hébert, editor of the Canadian Medical Association Journal in Ottawa. A thorough review might score highly on the Annals scale, whereas a less thorough but more insightful review might not, he says. "When you're young you spend more time on it and write better reports. But I don't want a young person on a panel when making a multi-million-dollar decision."
"Multi-million-dollar decision"? Grant funding I deduce? Game on, my friend, game on.
This is, you will notice, the same old crap*. An assertion that young and/or assistant-professor-level scientists are deficient in reviewing grants. This theme was enthusiastically adopted by Toni Scarpa, head of the NIH's reviewing unit, the CSR. We have heard all sorts of complaints and efforts (often covert within CSR) to reduce the number of assistant professors participating on review panels. Seldom have we seen anything like a coherent argument for why assistant professor reviewers are to blame for [insert poorly specified, seemingly negative grant review outcome]. Never have we seen any data backing up the assertion. Personal anecdotes, if offered, never survive the question of review experience, which of course is a Catch-22 if you prevent younger people from serving as reviewers. Never have we seen a discussion of my contention that the relatively few assistant professors on panels (10% of reviewers was the high-water mark, I believe), mostly as ad hoc reviewers who see fewer grants (note that when CSR does offer figures, they do not present them as a percentage of reviews), cannot have a major role in eventual grant disposition. The numbers don't add up.
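For the arithmetically inclined, a back-of-envelope sketch makes the point. Every number here is an assumption for illustration except the ~10% headcount figure cited above; the per-reviewer loads are hypothetical:

```python
# Back-of-envelope sketch: if assistant professors are 10% of panel
# headcount but, as ad hoc members, each handles fewer applications,
# their share of total reviews falls well under 10%.
TOTAL_REVIEWERS = 30       # assumed study-section size
ASST_PROF_FRACTION = 0.10  # high-water-mark headcount figure cited above
LOADS = {"permanent": 9, "ad_hoc": 6}  # assumed grants reviewed per round

n_asst = int(TOTAL_REVIEWERS * ASST_PROF_FRACTION)               # 3 reviewers
asst_reviews = n_asst * LOADS["ad_hoc"]                          # 18 reviews
other_reviews = (TOTAL_REVIEWERS - n_asst) * LOADS["permanent"]  # 243 reviews
share = asst_reviews / (asst_reviews + other_reviews)
print(f"{share:.1%}")  # 6.9%
```

Under these (assumed) loads, a group casting under 7% of the reviews is an implausible driver of panel-wide outcomes, which is the contention in the paragraph above.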
And now we have some data on the table to suggest that peer-review quality goes down over time. For damn sure it suggests that the best approach is to cast a wide net and to constantly seek new blood. And no, I don't see where grant review is somehow different from paper review in this.
It also suggests that one new CSR policy should be reconsidered. I wasn't all that impressed by the new six-year, every-other-round option for permanent membership on a CSR study section (the default is four years, every round). My argument in that post focused on the continuity of reviewing revised applications, but I am also concerned about reviewer burnout. Heck, I even think four-year terms might be a bit too long.
So when you hear this assistant professor bashing in person, DearReader, do me a favor, would you? Get the whiner to flesh out the complaint. Ask what they base it on. And drop me a line or a comment. I'm curious.
*Sure, the guy might have gone on chapter and verse and the journalist boiled it down to this pap. But I doubt it. This story is just too familiar...