I'd be interested to know how many readers of this blog have actual formal training in the task of review (here I make a strong distinction from training for the task of editing). I will venture to say the answer is none. We learn to review through experience and the process of trial and error. The end result is that review tends to be a highly idiosyncratic activity in which we can rarely predict with any degree of certainty the outcomes of the peer review process.
Well, I don't know about "formal" training, but I certainly received some informal training in manuscript review from a postdoctoral mentor. The commenter, Greg Cuppan, has a great point when it comes to grant review.
I am hoping that most readers' experience with manuscript review is similar to mine, in that during training (certainly as a postdoc) the mentor provides a scaled opportunity for trainees to learn paper reviewing. One approach is simply the journal-club type of approach, in which the trainee(s) and mentor read over the manuscript and then meet to discuss strengths and weaknesses. A second approach might be for the mentor simply to assign the trainee to write a review of a manuscript the mentor has received, and then meet so that the mentor can critique the trainee's review.
[I should note here that I do not consider the sharing of the manuscript with the trainees to be a violation of confidentiality. The trainees, of course, should consider themselves bound by the same confidentiality expected of the assigned reviewer. I can imagine that this runs afoul of the letter of many editorial policies, though I am not sure it violates the spirit of such policies at all journals. The one journal editor that I know fairly well is actually a major role model for the approach that I am describing here, fwiw.]
Ideally, the mentor then writes the final review and shares this review with the trainee. The trainee can then gain practical insight into how the mentor chooses to phrase things, which issues are key, which issues are not worth mentioning, etc. Over time the mentor might include more and more of the trainee's critique in the review and eventually just tell the editor to pass the review formally to the trainee. It is worth saying that it is obligatory mentor behavior, in my view, for the mentor to note the help or participation of a trainee in the comments to the editor. Something like "I was ably assisted in this review by my postdoctoral fellow, Dr. Smith". This is important mentoring by way of introducing your trainee to your scientific community, very similar to the way mentors should introduce their trainees to members of the field at scientific meetings.
I am not sure that "formal" training can do any better than this process, and indeed it would run the risk of being so general (I am picturing university-wide or department-wide "training" sessions akin to postdoctoral ethics-in-science sessions) as to be useless.
In a second comment Cuppan notes:
What got me to your blog was my review of the NIH Enhancing Review report. I too noted the pp. 45-46 recommendation regarding training, yet find it curious that nothing is mentioned regarding what may prove to be useful review training. I suggest that what NIH really needs to do is develop a peer assessment process for grant proposals.
Based on a read of the papers regarding the limitations of review, one can readily conclude that grant funding may involve a considerable amount of happenstance. That is not a message members of this august community would want brought to the fore.
In contrast to my training as a manuscript reviewer, I received no training as a grant application reviewer prior to receiving my first set of assignments. Well, in truth I did receive a summary statement or two from my own applications, which is a very important first orientation to grant reviewing. It has its drawbacks, in that it perpetuates bad habits, but it also helps to beat down some of the variability that Cuppan discusses over at MWE&G. Nevertheless, I never so much as read any of the grant applications any of my mentors had for review. I am not aware of any other reviewing PIs who would do this, and I have never considered for a second (until now) sharing any of my review load with trainees. My view is that this is most emphatically not part of the culture of scientific training, in contrast to the above-mentioned points about manuscript review. So I agree with Cuppan that some degree of training in the review of grant applications would go far to reduce a certain element of randomness in outcome.
I happen to think it would be a GoodThing if the NIH managed to do some degree of training on grant review. To be fair, they do publish a few documents on the review process and make sure to send those to all reviewers (IME, of course). I tend to think that these documents fall short, and I wish that individual study sections paid more attention to getting everyone on the same page with respect to certain hot-button issues. Like how to deal with R21s. How to really evaluate New Investigators. What criteria such as "productivity", "ambitiousness", "feasibility", "significance", and "innovation" really mean for a given section. How to accomplish good score-spreading and "no, you do not just happen to have an excellent pile" this round. Should we affirm or resist bias for revised applications?...
I could go on for days.
It is worth emphasizing that today's point is not driven by the fact that I want everyone to review grants my way. Not at all. It is indubitably the case that scientists are capable of meeting certain performance criteria, even if they do not agree with the underlying criteria. Indeed, part of effective grant review during the discussion is to make sure to "speak the language" of the other reviewers around the table. I make an effort, for example, to detail the degree to which an applicant has geeked out over the implications of each experimental result and how these results might apply to the stated hypothesis. I do this because I know that this is important to a large part of the panel, even if it isn't a big deal in my own opinion. Similarly, I have no doubt I've heard panel members who do not usually show much acknowledgment of public health relevance struggle mightily to detail such features of applications they like. (Funny, their eyes seem to drift my way during these remarks...) Now that CSR is hammering away about prioritizing "significance", reviewers are focusing on this issue.
Ultimately, I am struck by the consideration that much disagreement over a given application does not really rest on the facts, so to speak. It rests on differing interpretations of how to review: what weights to assign to the many, many criteria that contribute to the review outcome, and how we are to define said criteria. It strikes me that a little more training of scientists in how to review applications would go a long way toward reducing the variance in review.
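To make that concrete, here is a toy sketch in Python (the criteria, the weights, and the 1-9 scale are all invented for illustration; this is not any actual NIH scoring formula). Two reviewers who agree completely on the per-criterion "facts" can still land nearly a full point apart once they weight those criteria differently:

```python
# Toy illustration only: two reviewers agree on every per-criterion
# score but apply different (hypothetical) weights to those criteria.

# Hypothetical criterion scores both reviewers agree on (1 = best, 9 = worst)
criterion_scores = {
    "significance": 2,
    "innovation": 5,
    "approach": 3,
    "feasibility": 4,
    "productivity": 2,
}

# Invented weighting schemes; each sums to 1.0
reviewer_weights = {
    "reviewer_A": {"significance": 0.40, "innovation": 0.10,
                   "approach": 0.20, "feasibility": 0.10, "productivity": 0.20},
    "reviewer_B": {"significance": 0.10, "innovation": 0.30,
                   "approach": 0.40, "feasibility": 0.15, "productivity": 0.05},
}

for reviewer, weights in reviewer_weights.items():
    # Overall score is the weighted sum of the agreed-upon criterion scores
    overall = sum(weights[c] * score for c, score in criterion_scores.items())
    print(f"{reviewer}: overall score {overall:.2f}")

# Output:
# reviewer_A: overall score 2.70
# reviewer_B: overall score 3.60
```

In this made-up example the 0.9-point gap comes entirely from the weighting, not from any disagreement about the science, which is precisely the kind of variance that shared training on how to review could shrink.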
Update 3/28/08: A response from Guru at the Entertaining Research blog.