CSR's "College of Reviewers" hits the streets

Dec 22 2009 | Published under Grant Review, NIH

From writedit we receive the tipoff to a new initiative of NIH's Center for Scientific Review. The Press Release says:

CSR is sending invitations to about 2,500 members of the scientific community with a strong commitment to peer review and experience serving as reviewers... Joining the college will allow reviewers who find it difficult to manage the travel and commitment of being a regular reviewer to remain engaged in peer review...College reviewers primarily will provide written or "mail-in" critiques and be involved in two-stage reviews, which have successfully assessed thousands of special sets of applications, such as the Transformative R01 and Challenge grant applications as well as groups of translational applications and small business applications.

Oh I think I will be taking this up a time or two. Initial observations:
-just when they were reducing the ad hoc participation on regular panels and increasing the load (thereby broadening the "fit") of regular reviewers...this?
-all talk of impact and significance for regular review is rolled back; these initial reviewers will be expected to nitpick the design and methods, just like before!
-feedback from those who sat on the second stage of the Transformative and Challenge reviews this summer sure wasn't all that positive... but then CSR seems to frequently steam forward with great enthusiasm on changes for which I can't seem to find anyone in favor.

18 responses so far

  • I call I get to be Dean Wormer!!!

  • Nat says:

    Seems appropriate that a postdoc could be Marmalard. DIBS!

  • pinus says:

I wonder if this means that they are going to amp up the 2-stage review system to all R01s.

  • JAT says:

Everything absurd done at CSR is under the direct instruction of that director lunatic. No review system is perfect, but what we had before was certainly not as broken as what we have now and where we are heading. Too many changes (many of them quite stupid) too fast. If anyone is unsatisfied with where we are moving, we the members of the scientific community need to raise concerns louder and more effectively. Letting your congressional representatives know about these concerns would probably be a good start.

  • smitha says:

"What we had before was not as broken....."
Sure, it was not as broken for you!!! Tell us your name and let us go and check for ourselves.

  • bikemonkey says:

    smitha, Are you saying that whether one is successful or not within the CSR review system determines whether it is broken or not?

  • smitha says:

No, Bikemonkey, I am not saying that. What I am saying is that people missing the status quo are, usually but not always, those who were gaining "extra benefits" and privileges from it and don't want to lose their status. That's all.

  • bikemonkey says:

    Sure, smitha, but some also worry that the changes from status quo will entrench prior problems that were leading to inequity in the first place.

  • smitha says:

    Bikemonkey,
Could you please give some examples of "prior problems leading to inequity in the first place"?

  • bikemonkey says:

Sure, we can start with the excessive focus on proving "feasibility," which slants against the less well established investigator. I think the OP is right that the recent change to focus on the significance of the ideas was good, but farming out the minutiae dissection to this first-round "College" puts it right back into the system.

  • JAT says:

Dear Smitha... There is no need to sound so bitter and hostile. I am fine as long as you don't cry foul over the new and evolving review system down the road. I have had my share of ups and downs with the "old" system throughout. As I said, the old system ain't perfect (and I am all for "real" improvement), but the so-called evolving system we are seeing so far has created a lot of new problems that affect the majority of people (self included). I am voicing my opinion as both an applicant and a reviewer. For example, if I had followed EXACTLY what CSR asks me to do as a reviewer (in terms of critique format and, e.g., how an application should be discussed at the meeting), the applicants would not get a fair shake (in terms of feedback, not the overall evaluation process). That is why I don't follow their restricted format. I owe it to the applicant who spends countless hours slaving over the grant to give informative feedback and vigorous discussion whenever allowed, so that if it has to come back, the PI knows exactly why and how to improve it.

  • smitha says:

@JAT,
I absolutely agree with and support the second part of your post: "If anyone is unsatisfied with where we are moving, we the members of the scientific community need to raise concerns louder and more effectively. Letting your congressional representatives know about these concerns would probably be a good start."
And that is precisely because, as you said, no review system is perfect and never will be.
I apologize for my rough, spontaneous answer. The problem is that I do not consider it fair to place all the responsibility and effects on just one person, in this case the director lunatic. He might be one; however, I think the community's participation in the changes has been too substantial to give him all the credit (whether good or bad).

  • JAT says:

    Dear Smitha: Well said and it is all cool! But that director is still a lunatic! Hee Haw. Happy holidays!

  • smitha says:

JAT,
Sorry again. I did not mean to be bitter and hostile. It was just a spontaneous reaction. I posted my last message before reading your latest one.
IMHO it is great that you don't follow their strict format, on behalf of the applicant. I wish I had you as a reviewer. Your approach is, in my view, the one to promote: to look at the "spirit of the review" and to overcome the limitations of the format. That means a lot to me.
Sorry again.

  • Anonymous says:

Bikemonkey,
I had not seen your last post (#10).
"Proving feasibility which slants against the less well established investigator": I think it depends very much on what "proving feasibility" means to the reviewer(s). Prior publications (a robust publication history)? Or a well-thought-out methodological approach with alternatives (if it doesn't work) from less well established researchers? I don't think there are absolutely objective ways to overcome this subjectivity (in either the old or the new system). It seems to me that the two-stage review, if I understand it correctly, provides a fairer and more accurate assessment of a specific project in terms of feasibility: not only in predicting success for the specific scientific objectives (first stage, with specialists in the field) but also in its potentially successful ramifications, in a broader sense, for the health sciences (second stage, with generalists, so to speak). I would consider and evaluate feasibility from that dual perspective.
"But farming out the minutiae dissection as part of this first round "College" puts it right back into the system."
Bikemonkey, it does not have to be a "minutiae dissection". I thought that, as a community, we had overcome that tendency and had achieved a reasonable degree of consensus on the emphasis on "significance and potential for advancing the field". But I could be wrong.

  • microfool says:

    JAT:

    That is why I don't follow their restricted format. I owe it to the applicant who spends countless hours slaving over the grant to give it an informative feedback and vigorous discussion whenever allowed so that if it has to come back, the PI knows exactly why and how to improve it.

From where does this debt arise? Was there ever an explicit function assigned to reviewers to give feedback that would help an applicant improve their application?
    I think that this role is a noble one to take on, and useful in principle for the community of scientists that exist around the particular study section.
    However, maybe one reason CSR doesn't support it is that it is inefficient and beyond the scope of the need as defined for the government work of reviewing the scientific merit of applications. Time spent on generating this advice may actually undermine the primary role of the study section meeting.
CSR's charge is to manage scientific review and deliver a set of applications with an evaluation of scientific merit. CSR receives an ever-increasing number of applications, but has to deal with them with an ever-decreasing (though sometimes flat) amount of real dollars.
    The charge to reviewers is to make that evaluation of scientific merit. Additional "help" to the applicants in the form of advice on what to focus on, how to craft an argument, or who to collaborate with takes a lot of time if it occurs during an in-person meeting. Further, this additional "help" may not be that helpful, as the next time the application comes in, it may be reviewed by different reviewers, a different panel, and the scientific climate may have changed in key ways.
    Thus, one can imagine CSR management as viewing time spent on generating advice to applicants as harmful to the primary goals of peer review.

  • qaz says:

    microfool - I review grants for the same reason that I review papers. I am trying to improve the scientific work of our community. For that reason, when given a paper or proposal to review, I take the time to determine if it could be improved and how it could be improved.
    I have reviewed papers where the paper came back three to four times and got better every time. The initial paper was weak, the results incompletely supported, and the text didn't communicate well. The final paper has had a huge impact. I have seen grants come back with better experiments and better questions.
    And yes, I have been on the other side too. I don't have a problem with improving papers and grants through several cycles. I have a big problem with turning papers and grants into lotteries where you "do your best, get a score, and win or lose".
    Personally, I don't think that grant review is particularly necessary for identifying useful projects to fund. I could tell you in 30 seconds per proposal whether it had a chance of working or not. In my experience, about 50-75% of (NIH, NSF, and private) proposals I've seen have a reasonable chance of producing good science.
    PS. I agree with JAT. I don't follow the bullet point rules because the first time I did, I found it very difficult to communicate subtle and complex issues (such as "this is good, but has these problems, you have to choose where you want to go"). Also, I found it difficult to use the bullet points to lead discussion at study section when it came to be my turn.

  • JAT says:

Dear microfool - I guess I am a fool in a way, in that I would hope that when my application gets reviewed, my reviewers will at least have the heart to write down sufficient information for me to know clearly what they dislike in my grant and why. I do have the choice of taking it or not, but I prefer not to misunderstand where the dissatisfaction comes from. More often than not, the "strict" bullet points cannot deliver the message well, as qaz said. A slightly more informative format than the bullet points also helps other reviewers understand why the scoring is far apart, although this is less problematic when reviewers talk face-to-face and have a chance to iron things out. I dread the day when CSR gets rid of in-person review meetings altogether. Finally, you are right, I do not owe it to anyone to do what I do as a reviewer. But if I wish for the same when it comes to my turn, I must do my part and then hope for the same level of care in return (whether I get it or not is irrelevant). You all have a wonderful holiday season and a happy and productive new year to come!
