NIH to crack down on violations of confidential peer review

Mar 30 2018 Published by under Fixing the NIH, NIH, NIH funding

Nakamura is quoted in a recent bit in Science by Jeffrey Brainard.

I'll get back to this later but for now consider it an open thread on your experiences. (Please leave off the specific naming unless the event got published somewhere.)

I have twice had other PIs tell me they reviewed my grant. I did not take it as any sort of quid pro quo beyond *maybe* a sort of "I wasn't the dick reviewer" signal. In both cases I barely acknowledged it and tried to move along. These were both scientists that I like professionally and personally, so I assume I already have some pro-them bias. Obviously, the fact that these people appeared on the review roster, and that they have certain expertise, made them top suspects in my mind anyway.

Updated:

“We hope that in the next few months we will have several cases” of violations that can be shared publicly, Nakamura told ScienceInsider. He said these cases are “rare, but it is very important that we make it even more rare.”

Naturally we wish to know how "rare" and what severity of violation he means.

“There was an attempt to influence the outcome of the review,” he said. The effect on the outcome “was sufficiently ambiguous that we felt it was necessary to redo the reviews.”

Hmmm. "Ambiguous". I mean, if there is ever *any* contact from an applicant PI to a reviewer on the relevant panel it could be viewed as an attempt to influence outcome. Even an invitation to give a seminar or invitation to join a symposium panel proposal could be viewed as currying favor. Since one never knows how an implicit or explicit bias is formed, how would it ever be anything other than ambiguous? But if this is something clearly actionable by the NIH doesn't it imply some harder evidence? A clearer quid pro quo?

Nakamura also described the types of violations of confidentiality NIH has detected. They included “reciprocal favors,” he said, using a term that is generally understood to mean a favor offered by a grant applicant to a reviewer in exchange for a favorable evaluation of their proposal.

I have definitely heard a few third-hand reports of this in the past. Backed up by a forwarded email* in at least one case. Wonder if it was one of those types of cases?

Applicants also learned the “initial scores” they received on a proposal, Nakamura said, and the names of the reviewers who had been assigned to their proposal before a review meeting took place.

I can imagine this happening** and it is so obviously wrong, even if it doesn't directly influence the outcome for that given grant. I can, however, see the latter rationale being used as self-excuse. Don't.

Nakamura said, “In the past year there has been an internal decision to pursue more cases and publicize them more.” He would not say what triggered the increased oversight, nor when NIH might release more details.

This is almost, but not quite, an admission that NIH is vaguely aware of an undercurrent of violations of the confidentiality of review. And that they are also aware that they have not pursued such cases as vigorously as they should. So if any of you have ever notified an SRO of a violation and seen no apparent result, perhaps you should be heartened.

oh and one last thing:

In one case, Nakamura said, a scientific review officer—an NIH staff member who helps run a review panel—inappropriately changed the score that peer reviewers had given a proposal.

SROs and Program Officers may also have dirt on their hands. A terrifying prospect for any applicant. And I rush to say that I have always found the SROs and POs I have dealt with directly to be upstanding people trying to do their best to ensure fair treatment of grant applications. I may disagree with their approaches and priorities now and again, but I've never had reason to suspect real venality. However. Let us not be too naive, eh?

_
*anyone bold enough to put this in email... well, I would suspect this is chronic behavior from that person?

**we all want to bench race the process and demystify it for our friends. I can see many entirely well-intentioned reasons someone would want to tell their friend about the score ranges. Maybe even a sentiment that someone should be warned to request certain reviewers be excluded from reviewing their proposals in the future. But..... no. No, no, no. Do not do this.

29 responses so far

  • Joe says:

    A guy came up to me and told me that he knew I had reviewed his grant and then proceeded to tell me why his grant had not gotten the score it deserved. I have not had anyone tell me they reviewed my grant.

    The Science article suggests the following remedy, "... that CSR require all review panel members to receive periodic training by watching online videos describing the rules..."
    CSR, please, please do not do that. My life as a faculty member is already filled with retraining videos.

  • drugmonkey says:

    Training videos and tut tut op/eds only reach the truly ignorant. Given the SRO spiel it is hard to imagine anyone with review experience misses this. Not helpful for the bad actors on section.

  • DNAman says:

    I think "reciprocal favors" is a very big issue that permeates academic science.

    It's not just at NIH grant review, but at all levels: manuscript reviews, job offers, awards, promotion letters, etc. All of these things rely on "peer review", but many times it ends up being "friend review". I can see how this spills over into NIH grant review.

  • Philapodia says:

    The people who intentionally violate the sacred trust given to them by CSR would do it regardless of some stupid training video.

    Study section has all the tribalism of any other highly competitive endeavor. Hoping for some perfect utopia where none of this occurs will never work. Trying to keep cheating/backscratching to an acceptable level is probably the only way to go.

  • Ola says:

    I'd say over a 4 year stint as a standing study section member (3x/yr.) I was approached pre-review at least once a year by someone whose grant was up at the pending meeting. Different people each time. The pitch was usually "I really need this so can you fight for it", but in all cases I was not a named reviewer for the proposals in question, and usually they didn't make it into the discussion pile anyway. I never attempted to rescue such a proposal from triage - that would be a stupid move. My cellphone number is on my lab website, and in most cases these requests came via SMS.

    On the flipside, as a junior Asst. Prof. over a decade ago I was desperate to know about an A1 revision, so I asked (by SMS) for a study section member to send me a thumbs up/down during the meeting. They did so (it was good news). Helped me avoid a sleepless weekend.

    For obvious reasons, I think there's a moral/ethical difference between asking "how did my grant do?" and asking "please put a finger on the balance for me". Clearly the latter is problematic, the former less so.

  • drugmonkey says:

    Clearly the latter is problematic, the former less so.

    Yes but.... This is a violation of the rules too. And once we start making our own personal decisions about which of the rules to follow, it gets slippery-slope-ish very quickly.

    I am guilty of this in spades given the nature of my blogging behavior over the years, of course. I have many times made reference to real grant review scenarios. I tend to abstract them. I tend to wait until something has occurred more than one time before talking about it. And I for sure try to keep away from enough detail to connect my points to specific grants that have been reviewed on panels in which I have participated. I mean, sure, I have thought carefully about the reviewer confidentiality instructions and have had numerous conversations with SROs about how much I can relate* to colleagues. But, for all I know, Nakamura might not like some of the specifics that I (and you all) get into on this blog.

    *In one case I had pretty clear instructions given to me about how to respond to an applicant seeking my advice after their grant was reviewed in a panel I was on. It amounted to "absolutely no specifics about their particular proposal or the review of it. But feel free to talk generally about review process, how this panel tends to emphasize things, etc". I guess I then generalized that to my blogging......

  • DNAman says:

    "On the flipside, as a junior Asst. Prof. over a decade ago I was desperate to know about an A1 revision, so I asked (by SMS) for a study section member to send me a thumbs up/down during the meeting"

    I wouldn't be surprised if things like this are ultimately the source of racial bias in NIH grant review.

    Sure, you didn't ask for a "finger on the scale", but more times than not you'll probably get a "finger on the scale" due to human nature. Racial minorities are less likely to have the connections you have with someone on the review panel, and thus are less likely to have a "finger on the scale".

    The end result, when you examine the statistics, looks like African American scientists are getting lower scores than they should.

    I don't think I have ever met a scientist who would score someone lower based on race, but I know dozens who would shade the score for someone they know.

  • grumpy says:

    Is it generally accepted that COI include close friends?

    I would never share details about grant review with a "work friend" and would be really offended if one asked me to share details.

    But if I heard discussion about a proposal from my best friend from college or whatever it would be tempting. None of my close friends are in my field so this never actually comes up.

  • drugmonkey says:

    Is it generally accepted that COI include close friends?

    I think this is one of those grey areas that is impossible to police. Ultimately you are supposed to COI yourself from grant review if you think you can't be impartial, same with manuscripts. Everyone clearly thinks they are more impartial than they actually are (see all sorts of implicit bias experiments). In an excess of caution I once told an EIC that I maybe shouldn't review a paper from a very close science buddy and he told me "if that was the standard I would never get any manuscripts reviewed". And yes, he was fully aware of how well I knew this person.

    The rules for COI on direct scientific interest tend to have to do with active collaborations and mentoring relationships. In the past three years, generally. Obviously your affection for your brightest trainee lasts long past the time you stopped co-publishing. Obviously your collaborator doesn't revert to being some stranger after a magical 3 years past the last co-pub date.

    But look, some people are super gregarious and "know everybody" in the subfield. Act very friendly with many, many people. Should they have to recuse themselves from all reviewer duty? I do not know how we draw clear lines on this.

  • drugmonkey says:

    DNAMan-

    Absolutely excellent points. A lack of preference may end up being the source of the discrimination identified in Ginther. It's still discrimination.

  • shrew says:

    "But look, some people are super gregarious and "know everybody" in the subfield. Act very friendly with many, many people. Should they have to recuse themselves from all reviewer duty? I do not know how we draw clear lines on this."

    Oh man, you mean to say, if I just keep making friends, I can recuse myself from ever reviewing again?? Party at my place!

  • Almost tenured PI says:

    I know two PIs who overlapped in the same postdoc lab. They were best friends, and even though they're now at different institutions, their families still vacation together and they're inseparable when you see them at meetings. Despite working on basically the same thing, they purposefully do not publish together so that they can review each other's grants and papers. Both are on study sections and both seem to always know exactly who reviewed their grant (and who to exclude next round). They are remarkably well funded.

    I would like to see people who worked in the same lab (or even department) at any time during their career be recused. That kind of day-to-day direct interaction makes it suspect as to whether one can ever be impartial.

    Same story with the makeup of the study section. People who trained in the same labs shouldn't be on study section together. Diversity of opinion is difficult to achieve when certain mega-groups are so vastly over-represented.

  • Philapodia says:

    @Almost Tenured PI

    I don't know that SROs can be expected to do detailed background checks on all study section participants to make sure that they didn't train in the same lab or department, or that they aren't friends. Scientific review officers are not private investigators, nor should they be. We're supposed to self-recuse, and I think most of us do just that, since if the NIH detects a COI and decides to pursue it, we can lose our funding, and it can also affect other grants awarded to our institution. If you have concerns about potential COIs, give that information (requesting anonymity) to the SRO or their bosses and let them investigate. If CSR really cares about COI, then it will deal with the situation.

  • grumpy says:

    "I don't know if SROs can be expected to do detailed background checks on all study section participants ...We're supposed to self-recuse"

    Exactly. But if the COI rules stated specifically that you should recuse if you worked in the same lab/dept, if you consider them a good friend, etc., then ppl would self-recuse more often, both for the reasons you mention and because they don't want to look like fools when other panelists realize they broke the COI rules.

    The problem is that encouraging more conservative COI adherence could mean fewer ppl will be available to review, the overall review process will slow down/be more expensive, etc.

    My guess is that the side-effects of stricter COI adherence are worse than the few cases of fraud that slip through. However NIH is going to have to respond to this public case of fraud and may overcorrect to save face.

  • DNAman says:

    "I would like to see people who worked in the same lab (or even department) at any time during their career be recused. That kind of day-to-day direct interaction makes it suspect as to whether one can ever be impartial."

    I've seen this too. Another situation I've noticed is that students/alumni of a top school in a certain country who end up in the US band together and trade favors. I think it might be cultural, to the extent that they don't even hide it. They just blatantly favor another person from that school/country. My observation is in hiring/promotion, but I'm sure it carries over to grant review.

  • drugmonkey says:

    Oxford or Cambridge?

  • drugmonkey says:

    you consider them a good friend, etc...when other panelists realize

    Does this get us past the catchall of “if you cannot be unbiased”?

    There’s a very fuzzy line around professional friends. It often doesn’t align with overlap in training location or pedigree. Hard to define and impossible for an SRO to litigate in response to an accusation of favoritism.

    Especially given social media, amirite? There are several colleagues that would never appear to be more than distant professional acquaintances of mine (at best) without access to my pseud twitter DMs or email.

    It has to be self-election.

  • Curio says:

    I had a panelist straight up volunteer they read my grant and without prompting launched into a description of the discussion. I was so uncomfortable and flummoxed that I probably looked like an idiot but then again so did they. Meanwhile my program officer has dropped alarmingly revealing clues on the backgrounds of my reviewers that I felt sick to my stomach about my own past reviewing and potential veiled unmasking. So yeah, a crackdown of sorts is probably in order and certainly making sure that the rules are strictly understood is important.

  • drugmonkey says:

    panelist straight up volunteer they read my grant and without prompting launched into a description of the discussion.

    I think that this would be a good time to say "errr, I appreciate the sentiment but I don't think you should be telling me this".

    my program officer has dropped alarmingly revealing clues on the backgrounds of my reviewers that I felt sick to my stomach about my own past reviewing and potential veiled unmasking.

    Yep. Absolutely one of the risks of being honest and of speaking up during grant review. You may really anger a program officer and get on their bad side for... well, anything from a round to a career. I absolutely one hundred percent do not know what advice to give people on this. I tend to be unable to stay quiet when I think something needs to be said about a grant during discussion. I tend to value robust grant discussion that is uncontaminated by career fears. But I also recognize that career fears are not unfounded.

  • AcademicLurker says:

    I'm aware of one instance in which a study section member gave information about the review process to an applicant (not me). It was a study section that had acquired a bit of a reputation for being dysfunctional, and the information was a straight "In my opinion your grant was reviewed unfairly". They probably should have stuck with bringing it up with the PO instead, but then maybe the PO was part of the problem.

  • drugmonkey says:

    What sort of thing gives a study section a reputation for being dysfunctional, AL?

  • AcademicLurker says:

    I don't know the details, my vague impression is that there was (this was some years ago) some sort of deep split in that particular field about methodology/models/whatever to the point that there were basically 2 camps, and that this was reflected in study section behavior. I've heard from several people, not just the one in the above anecdote, that they were explicitly warned not to target that SS, even though on paper it looked right for their proposals.

    As I said, this was a decade ago, hopefully things have been cleaned up since then.

  • drugmonkey says:

    there were basically 2 camps, and that this was reflected in study section behavior.

    This does not in and of itself mean a study section is "dysfunctional". The fact that one type of proposal does well and another type does not is in fact common to study sections. Even if this is not readily apparent from the description of the section on the CSR website.

    I'm curious how a section could be so outside of the norms of behavior as to be "dysfunctional".

    I would imagine one way to assess this is the percentage of fundable scores vs other sections, but of course if it is a standing section, scores are percentiled to take care of this.

  • qaz says:

    I'm not sure what would make a study section dysfunctional, but there are a few situations that I know have occurred. If you want to call them dysfunctional, you can.

    1. Not all study sections are percentiled. There are a lot of processes (Txx grants, for example) that are not percentiled and are treated as base scores. SEPs are percentiled across all SEPs, which means that differences in scoring ranges between specific SEPs can be meaningful. I think some of the new U study sections were not percentiled (but I'm not sure).

    2. Certainly, there are study sections that are dysfunctional for specific fields, where, for example, the study section is supposed to include bunny hopping as well as other aspects of animal motion, but the bunny hopping people are few and the rest of the standing members think bunnies are boring. So you can find situations where CSR assigns all bunny hopping to a study section but that study section hates bunny hopping. (There are ways to get CSR to assign you elsewhere, but that can take serious grantsmanship game.)

    3. There was one study section I know of where they merged two study sections from completely different fields - think bunny hopping and astrophysics - and each field thought the other's work uncontrolled and bad science, so the reviewers spent all their time asking for meaningless controls and scores were all over the map. (It completely depended on whether your bunny hopping grant got astrophysics reviewers.) My colleagues on that study section warned us away from it. (Not revealing review, just saying, try to get a different study section for your grant.) I don't know if that counts as dysfunctional.

    4. I know that the treatment of new professors used to vary a lot across study sections. I remember one person talking to me after being ad hoc on a study section, in real shock, about a certain field "eating its young". I have also seen study sections where people from field A would support anyone doing anything from field A but trash anyone doing anything else, which very much was a conflict of interest. After a few cycles of this, people in the other fields started complaining (during the review discussion).

    I don't know if any of these are dysfunctional per se.

    In terms of the kind of thing that you're talking about (like communicating outside study section), I certainly have never seen any study section behave badly as a whole that way. I have seen individuals do that (and shut them down when they did), but not study sections.

  • AcademicLurker says:

    Qaz's point 4 is my impression of what was going on. But again, that's 3rd hand speculation on my part.

    I do think that if a study section's purview is described as "Mechanistic investigations of bunny hopping at the molecular level" and yet new investigators in the molecular bunny hopping field are being warned not to send proposals there, that in itself is pretty good evidence for some sort of dysfunction.

  • drugmonkey says:

    The problem is, AL, that disappointed applicants almost always start yelling about how the review of their grant was biased, error-ridden and (apparently) dysfunctional. Crapping on junior PIs is very common. So how can it be “dysfunctional” if it is a common review outcome across many, many study sections? Is it a matter of degree? Likewise, if a section has evolved to not treat certain approaches or topics well... can we not accuse other sections of this mismatch? We often say on this blog to check the grants funded through a given section to know the actual fit for your work.

  • drugmonkey says:

    Shorter: I *disagree* with the way many study sections review. But I don’t find them to be dysfunctional- grants get fundable scores and many quirks that I dislike are consistent ones.

  • AcademicLurker says:

    Fair enough. But if a study section has evolved to the point that there's a wild mismatch between its behavior and its stated mission - if the bunny hopping study section has been taken over by astrophysicists or whatever - isn't that a problem? It's a problem that, as you say, can be worked around by doing some homework on Reporter, but I still think it's a sign that something isn't working the way it's supposed to.

  • qaz says:

    It's not clear what "the way it's supposed to be" is. In my experience, all study sections (even the ones with people who like their fields better than other fields) are made up of qualified scientists trying to do as fairly as possible what is essentially a nearly impossible job.

    I always come back to my first experience on study section, where I came in with a chip on my shoulder (as a junior PI who had been beaten up badly and was grumpy about how unfair study section was), saw the reality of study section, and had to go back and reread all those reviews with a different eye.

    Study sections, particularly standing ones, have histories, styles, and quirks. Knowing which study sections are likely to find *your* questions exciting while at the same time not asking for the impossible controls is a key part of the grant game.

    At this point, the only real solution is that our intrepid junior PI needs to learn grantsmanship game. Where do they learn that? I learned a lot of it by reading this cool blog I found online called "DrugMonkey"....
