Have You Been Trained to Peer-Review?

Mar 27 2008 Published by under Careerism, Grantsmanship, NIH, Peer Review, Tribe of Science

Over at Medical Writing, Editing & Grantsmanship I noticed a commenter musing:

I'd be interested to know how many readers of this blog have actual formal training in the task of review (here I make a strong distinction from training for the task of editing). I will venture to say the answer is none. We learn to review through experience and the process of trial and error. End result is review tends to be a highly idiosyncratic activity where we can rarely predict with any degree of certainty the outcomes of the peer review process.

Well I don't know about "formal" training, but I certainly received some informal training in manuscript review from a postdoctoral mentor. The commenter, Greg Cuppan, has a great point when it comes to grant review.


I am hoping that most readers' experience with manuscript review is similar to mine, in that during training (certainly as a postdoc) the mentor provides a scaled opportunity for trainees to learn paper reviewing. One approach is simply the journal-club type of approach in which the trainee(s) and mentor read over the manuscript and then meet to discuss strengths and weaknesses. A second approach might be for the mentor to simply assign the trainee to write a review of a manuscript the mentor has received, and then meet so that the mentor can critique the trainee's review.
[I should note here that I do not consider the sharing of the manuscript with the trainees to be a violation of confidentiality. The trainees, of course, should consider themselves bound to the same confidentiality expected of the assigned reviewer. I can imagine that this runs afoul of the letter of many editorial policies, not sure of the spirit of such policies at all journals. The one journal editor that I know fairly well is actually a major role model in the approach that I am describing here, fwiw.]
Ideally, the mentor then writes the final review and shares this review with the trainee. The trainee can then gain a practical insight into how the mentor chooses to phrase things, which issues are key, which issues are not worth mentioning, etc. Over time the mentor might include more and more of the trainee's critique in the review and eventually just tell the editor to pass the review formally to the trainee. It is worth saying that it is obligatory mentor behavior, in my view, for the mentor to note the help or participation of a trainee in the comments to the editor. Something like "I was ably assisted in this review by my postdoctoral fellow, Dr. Smith". This is important mentoring by way of introducing your trainee to your scientific community, very similar to the way mentors should introduce their trainees to members of the field at scientific meetings.
I am not sure that "formal" training can do any better than this process and indeed it would run the risk of being so general (I am picturing university-wide or department-wide "training" sessions akin to postdoctoral ethics-in-science sessions) as to be useless.
In a second comment Cuppan notes:

What got me to your blog was my review of the NIH Enhancing review report. I too noted pp45-46 recommendation regarding training, yet find it curious that nothing is mentioned regarding what may prove to be useful review training. I suggest that what NIH really needs to do is develop a peer assessment process for grant proposals.
Based on the read of papers regarding the limitations to review one can readily conclude grant funding may involve a considerable amount of happenstance. Not a message members of this august community would want brought to the front.

In contrast to my training as a manuscript reviewer, I received no training as a grant application reviewer prior to receiving my first set of assignments. Well, in truth I did receive a summary statement or two from my own applications, which is a very important first orientation to grant reviewing. It has its drawbacks in that it perpetuates bad habits, but it also helps to beat down some of the variability that Cuppan discusses over at MWE&G. Nevertheless, I did not ever so much as read any of the grant applications any of my mentors had for review. I am not aware of any other reviewing PIs that would do this and I have never considered for a second (until now) sharing any of my review load with trainees. My view is that this is most emphatically not part of the culture of scientific training, in contrast to the above-mentioned points about manuscript review. So I agree with Cuppan that some degree of training in review of grant applications would go far to reduce a certain element of randomness in outcome.
I happen to think it would be a GoodThing if the NIH managed to do some degree of training on grant review. To be fair, they do publish a few documents on the review process and make sure to send those to all reviewers (IME, of course). I tend to think that these documents fall short, and I wish that individual study sections paid more attention to getting everyone on the same page with respect to certain hot-button issues. Like how to deal with R21s. How to really evaluate New Investigators. What criteria for "productivity", "ambitiousness", "feasibility", "significance", "innovation", etc. are really about for a given section. How to accomplish good score-spreading and "no, you do not just happen to have an excellent pile" this round. Should we affirm or resist bias for revised applications?...
I could go on for days.
It is worth emphasizing that today's point is not driven by the fact that I want everyone to review grants my way. Not at all. It is indubitably the case that scientists are capable of meeting certain performance criteria, even if they do not agree with the underlying criteria. Indeed, part of effective grant review during the discussion is to make sure to "speak the language" of the other reviewers around the table. I make an effort, for example, to detail the degree to which an applicant has geeked-out over the implications of each experimental result and how these results might apply to the stated hypothesis. I do this because I know that this is important to a large part of the panel, even if it isn't a big deal in my own opinion. Similarly, I have no doubt I've heard panel members who do not usually show much acknowledgment of public health relevance struggle mightily to detail such features of applications they like. (Funny, their eyes seem to drift my way during these remarks.....) Now that CSR is hammering away about prioritizing "significance", reviewers are focusing on this issue.
Ultimately, I am struck by the consideration that much disagreement over a given application does not really rest on the facts, so to speak. It rests on differing interpretations over how to review. Meaning, I suppose, what weights to assign to the many, many criteria that contribute to review outcome. Also, how we are to define said criteria. It strikes me that a little more training of scientists in how to review applications would go a long way toward reducing the variance in review.
--
Update 3/28/08: A response from Guru at Entertaining Research blog

20 responses so far

  • PhysioProf says:

    I make an effort, for example, to detail the degree to which an applicant has geeked-out over the implications of each experimental result and how these results might apply to the stated hypothesis. I do this because I know that this is important to a large part of the panel, even if it isn't a big deal in my own opinion.

    How about challenging them on their innovation-impeding creativity-killing douchebaggery instead of reinforcing it?

  • Neuro-conservative says:

    I would pay good money to watch PhysioProf in action (in full PhysioProf mode) on a study section.

  • juniorprof says:

    DM, I got much the same training on peer reviewing for journals that you did. In terms of grant reviewing, all of my mentors shared pink sheets with the whole group for just about every grant and we would go over the process in terms of how to respond and trying to tell which criticisms needed to be addressed most urgently for the next submission. This gave some very useful insight into the process. My PhD lab was a fairly big one and the PI also did a within-the-lab workshop on grantsmanship in which he shared some insight on the review process and how he went about reviewing grants (as a senior study section member). Of course this was always a hypothetical type thing because he could not share any of the grants with us to go over a point-by-point review.
    We had some course work on grant writing but nothing nearly as useful as the little workshop our PI would put on for the lab. I was very lucky to have had that type of training.

  • You've got me thinking about finishing a post I've drafted about how I'm only really trained to do about 50% of my daily duties.
    I did have some experience as an asst prof with a colleague outside my area asking for interpretation on some mol biol stuff within a grant (with approval of the SRA/SRO). He then showed me his overall review in the context of my tech evaluations. It was a start but I really didn't learn, like you, until I was asked to ad hoc.
    I do think that some training should be done for new reviewers but I have trouble thinking of how to do it or how effective it might be. Many different reviewing styles, variations in emphasis - I think there would be a fistfight in simply coming up with a training module.

  • Hey juniorprof, you need to start a blog.

  • Schlupp says:

    I never got such informal training about reviewing papers from any supervisor. I just got assigned papers to review by journals, period. And it's the same for many other students and postdocs I know.
    Basically, I learn from the reviews I get myself. So, dear colleagues, if you want to get nice reviews from me, you know what to do.

  • juniorprof says:

    Abel, you ask and you shall receive...

  • DrugMonkey says:

    what Abel said juniorprof...

  • csrster says:

    No training for me. A little while after my first paper got published, along came a letter from the editor asking if I would be willing to review a paper. All I had to go on was my own experience with my own reviewer.

  • PhysioProf says:

    Reviewing papers ain't fucking rocket science. It's much easier to do a decent job than reviewing grants. And if you fuck up, the editors usually can keep it from skewing the outcome too much.
    One of the best things for training reviewers is making the other reviews available to all reviewers. Seeing how other people view the same paper is incredibly valuable. Those journals that do not do this are fucking stupid.
    Also, editors should tell reviewers if they think they have fucked up. (Maybe they do. My reviews are always completely on-point, incredibly insightful, and exceedingly fair, so I wouldn't know.)

  • Becca says:

    I'm in grad school now (thus, perhaps more 'coming up in the ranks' than many of the commenters) and I *did* get 'formal' training in peer-review, sort of. Specifically, we have an interdisciplinary colloquium in which we had to review each other's papers. Very valuable from an English-usage perspective, if nothing else.
    That said, I have *not* received the kind of informal mentored reviewing... at least not yet.
    Recently, my PI came out and asked the other grad student in our lab for input on a paper he was reviewing. Specifically, he came out with a "I think this paper is not mechanism focused enough for this journal" message and asked the other grad student basically to 'find things wrong with it'. I don't really see my PI putting a little footnote acknowledging the student either.
    I am torn. I want to be included in useful informal training in reviewing. I kinda like tearing apart poor articles, but I am not terribly excited about tearing up good articles... if they aren't suitable for the journal, that's fine and dandy and a good enough reason to turn it down, if you are very familiar with the journal (as my PI obviously is- he's done lots of reviewing for them in the past)... but I might feel (ethically) uncomfortable 'finding things wrong' with a good paper, knowing my prof had already made up his mind and was only looking for one kind of comment.
    As an aside, I'm not 100% sure why my PI went to the other student only and did not ask for my input at all. The paper in question did seem to have a bit of a relationship to his interests but not mine, so that could easily have been it. But perhaps it is worth pointing out, I'm not sure my PI would *know* if a paper he was reviewing related to my interests- I need to know how to chat up my professor better, maybe.
    How do I let my professor know? How much of it is my place to tell him?
    (sorry to turn your blog into an advice column, but since this just happened and I had been thinking about 'informal training in reviewing' I figured it was worth bringing up)

  • DrugMonkey says:

    Ahh, junior prof was a bit too subtle for me to catch right away, hopefully the rest of you are not as obtuse as me.
    http://juniorprof.wordpress.com
    appears to be open for bizziness...

  • CC says:

    DM, I got much the same training on peer reviewing for journals that you did. In terms of grant reviewing all of my mentors shared pink sheets with the whole group for just about every grant and we would go over the process in terms of how to respond and trying to tell which criticisms needed to be addressed most urgently for the next submission.
    I will concede this as a drawback to the "11 R01s / washboard abs / PhysioProf fuming with rage" model of funding: small labs (with PIs who care about developing trainees and trainees who care about and are capable of being developed, which is frequently not the case) can do a much better job of this than megalabs. I got that in grad school (although not in nearly as thorough a form as DM and JP describe) but not at all as a postdoc.

  • ecogeofemme says:

    I've gotten some of the informal training you describe, but I was also asked by an editor to review a paper before I had published anything myself (apparently the author suggested me). With my mentor's blessing, I did it but it was weird because I didn't really know how it should sound. My mentor never read what I wrote although we discussed my comments. Learn swimming by swimming, I guess.

  • I was obtuse raised to the 2nd power - I didn't see juniorprof's blog until I started getting hits from it.
    Now if only my lab folk would listen to me the same way.

  • acmegirl says:

    I have reviewed a paper under my PI's guidance. I did feel a little like, "When did I become a 'peer'?", but it was a great experience. We both made our comments, discussed them, and then he wrote them up and showed me what he'd written. Turns out my concerns were similar to his, and I caught something that he had overlooked. Now I have an idea what the tone should be, and feel confident that I can do a good job as a reviewer. He did add that note that I had assisted. I thought that was just due diligence, but after reading some of the other comments, I'm now wondering if it's like selling my name to a telemarketer.
    WRT Becca, "I kinda like tearing apart poor articles, but I am not terribly excited about tearing up good articles... if they aren't suitable for the journal, that's fine and dandy and a good enough reason to turn it down, if you are very familiar with the journal (as my PI obviously is- he's done lots of reviewing for them in the past)... but I might feel (ethically) uncomfortable 'finding things wrong' with a good paper"
    Though the paper I reviewed was interesting, I found flaws. We recommended substantial revision. If they do the revisions, the paper will be a much better piece of work. I don't think that's a bad thing, or unethical. Isn't that the point of peer review? Anyway, it's up to the journal editor to decide if a paper is appropriate.

  • msphd says:

    Great post topic.
    I think there should be more formal training, though I see what you mean about how departmental or programmatic involvement often makes things worse, and not better.
    I used to really enjoy journal clubs. I'm good at being critical and it helped feed that need for me. The flip side of this is the paralyzing fear of sending out manuscripts of your own for review, when you know how nasty (especially student) reviewers can be.
    Lately I find the journal clubs are mostly a waste of time at my level, since I've noticed that many PIs are critical of other people's papers, while they rationalize away the same sorts of looseness in equivalent publications from their own labs.
    As one student said to me the other day, "Nothing is bulletproof."
    But perhaps the most irritating thing to me is when students ask, "Well why didn't they do these other 20 things?"
    And the answer, from someone who knows better, is something like, "Those experiments can't be done instantaneously or for free."
    WRT training, someone senior needs to be there to inject a little reality.
    ...
    I had some exposure to reviewing papers with my advisors, and I saw more than one grant in both the writing and reviewing stages, while I was learning how to write my own papers.
    I've tried writing grants but still feel it's an area where guided practice would help more than 'training'.
    So I had only a handful of papers and grants 'under my belt', as it were, when I was asked to review a couple of manuscripts myself. It's much more daunting when you know someone's career hangs in the balance.
    Given how important this task is to our current style of research, I think it should be tracked more systematically, and taught.
    But I also think that writing and editing are not taught, and they should be.
    I like the idea of rewarding participation as a reviewer by making it part of career advancement.
    I think only then will training become part of the canon.

  • Gregory Cuppan says:

    Drug Monkey: I find the posted comments on peer review very interesting. Net-net: the response to my posed question on Medical Writing, Editing & Grantsmanship regarding formal training in the practice of review is a categorical "no." The prevalent response is: "I had training... sort of." Often the commenters indicated they had a rich form of mentoring (a rich and enviable form of edification). The issue with mentoring is that we are exposed principally to the thinking of only an individual. We get all their good stuff but we also get their baggage. So most often we end up with a rich soaking that has an n = 1. My colleague Philip has a good post on the topic of peer review: http://brain.brainery.net/mcblog/?p=54.

  • PhysioProf says:

    I've noticed that many PIs are critical of other people's papers, while they rationalize away the same sorts of looseness in equivalent publications from their own labs.

    That's impossible; I'm sure you just misunderstood what was going on.

  • Eric says:

    As a graduate student, I've been fortunate enough to have a mentor who has had me review papers and grants extensively (at least 20, probably more) and then write critiques of them. My critique then forms the basis of my mentor's comments and I get to see the final product. This allows me to see nuances I may have missed, or ways to fine-tune my language so it is appropriate for a review.
    In the case of "peer-review" I think the only way to learn is by doing. I can imagine nothing worse than a class trying to teach me how to peer review a paper or grant.
