Repeat after me: "I do not know who the reviewers were"

Sep 08 2008 Published by under Careerism, Grant Review, NIH, Peer Review, Tribe of Science

One of the favorite timewasters of authors and grant applicants alike is the discussion of who the people writing the reviews were and the obligatory questioning of their scientific accomplishments, reading comprehension, acumen and parentage. Done right, this can be a harmless venting of spleen that is necessary before grappling with one's response to the substantive criticisms that have been advanced.
I was shooting the breeze with a colleague the other day and we were chuckling over the hilarious cases we'd run across where we had certain knowledge that the complainer was on the wrong track. The cases added up. All of a sudden it isn't quite so funny anymore.


I've alluded in prior posts to a certain nasty trend we have been observing wherein senior scientists (almost exclusively) who have not served on study section in (quite) a while, who have never had a grant triaged until recently and who may have only rarely failed to acquire funding when they wrote an application (again, until recently)...vent. And vent in the direction of all these "assistant professors nobody's ever heard of on study section killing my grant. This last one was definitely JuniorMint and s/he wrote all this stupid stuff and clearly has no idea what s/he is talking about and is just out to get me!"
You think I am joking? I am not. This is actually the cleaned-up version of the rant (which can go on for a good thirty minutes if you aren't careful). As is my constant theme to the younger investigators, do you think these CSR/NIH memes come from nowhere? This nonsense has gained the ear of the brass at NIH and CSR which is why we have been hearing so much hot air about the evil effects of too many Assistant Professors on study section and various attempts to DoSomething about this *UnbelievablyCriticalProblem!
Of course, nobody knows for sure who the reviewers were. Yes, you can make some good guesses but you are frequently going to be wrong, quite wrong. I have endless stories from study section of grants where what seems to be the most-appropriate reviewer wasn't on an app. Stories in which someone you'd never think of being on an app had some ancient or covert expertise that is not part of their current reputation. Colleagues who review on other panels relate the same story.
Peer review of papers? C'mon. Everyone has received some papers to review for which they are clearly not in the most obvious category. If you happen to know some editors or serve on one of those Editorial Boards (meaning you get all the stuff they have trouble getting reviewers for) you know what I'm saying. Why do people forget this when they are gaming the identity of reviewers?
If you are wondering at this point why I keep beating this particular drum it is because I keep running across fresh examples. It is an unending supply it seems. The grant-review complaints tick me off personally because I am on study section, have been for a number of rounds now and people spout all kinds of stuff (with great confidence and authoritah!) that is completely inconsistent with my experiences. So I feel as though the enterprise that I am involved with (I think in good faith) is under attack from a position of incredible ignorance, number one. The larger point is that I think one of the structural limitations to the NIH grant review system that most perpetuates cronyism, conservatism in approach and the DismalPlightofJuniorInvestigators is the preponderance of older investigators on panels.
Today's point, however, is the bullying.
That's right, bullying.
It is absolutely going on and it is absolutely corrosive behavior.
Senior folks in the field whinging to each other about how a specific junior member of their field "killed their grant" is bullshit. They should be ashamed of themselves.
First, because they are so frequently wrong about who reviewed their grant. As I mentioned at the outset, my colleague and I were swapping stories of first-person evidence. That is, cases where someone was whinging to us about a review where we had direct personal knowledge that the person being accused either did not review the grant/paper in question or did not levy the offending criticism. My point is that in a significant number of cases, some poor innocent could all of a sudden be facing hostile actions ranging from increasingly critical reviews of their own work/proposals, to adversarial questions at conference presentations, to passive-aggressive refusals to consider meritorious people for seminars or platform talks at meetings. All without having actually made the critique.
Second, it is only slightly better if the junior investigator in question actually was the critical reviewer. For all of the obvious reasons that using a position of authoritah! to avoid or deflect legitimate critique is bad for science and for science careers. For the unfairness of thinking that one is not subject to the rules of one's business that apply to others. For trying to subvert one of the more productive principles of the scientific enterprise in particular, i.e., that what is most important is the contentious marketplace of fact and interpretation rather than the corporate boardroom of monopolistic hegemony. (Did that work? Probably not. ahh, I know PP will get a conniption so I'll just leave that in)
Third, is the fact that broadcasting your whiny complaints about either a specific reviewer or a category (like Assistant Professors) is intended to be a bullying tactic of intimidation so that the powerless will think twice before messing with your exalted proposals again! Not to mention those proposals of other people in your power category. Bullying.
As usual, despite my cynical outlook, I have some recommendations. Not that you should be stupid about it and throw yourself into the chasm on principle. Some of these senior investigators are very nasty pieces of vengeful work indeed. I am not naive. Still, my exhortation is: Don't tolerate this crap. Don't let it pass unchallenged. Ask how the whiner knows, really knows, that they have identified the right reviewer. Gently remind them of the huge pile of very good grants that the reviewer is sorting. Commiserate with your triage stats and talk about how NIH memes on the New Investigator show how the bias is still overwhelmingly for the senior investigators. Etc. Do what you can, at the very least, not to validate the whining.
It is much like distasteful and unscientific views spouted about women or minorities in science: the goal is to get the otherwise-well-meaning to consider what they are saying and the basis for their (unfounded) beliefs. To get the recalcitrant to diminish their spouting in public and perhaps, gradually, to convince them as a bloc of the unfairness of their anti-junior investigator bigotry.
__
*I can't help it. I've said it before and I will continue to do so. The numbers don't add up. Assistant Professors have been at worst 10-12% of reviewers. They are much more frequently ad hoc, meaning any given individual has fewer grants to review per round and contributes to many fewer rounds. With three reviewers per application, and the trend for n00b reviewers to be slightly more deferential to the rest of the panel and more deferential to senior figures, well, the impact of "those Assistant Professors" on the outcome of your particular application is pretty slim. I will also remind you that the EvilAssistantProfessors on review panels frequently have to have already secured a grant before being invited. So that means they are unlikely to be raw 1st-3rd year Assistant Profs and more likely to be only a year or two away from the tenure decision.
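A back-of-the-envelope sketch makes the footnote's point concrete. Using the 10-12% share cited above and three assigned reviewers per application, and treating reviewer assignment as independent draws (a simplification; these are illustrative figures, not official CSR statistics):

```python
# Illustrative arithmetic only -- numbers come from the footnote above,
# not from any official CSR dataset.

junior_share = 0.12       # upper end of the cited 10-12% range
reviewers_per_app = 3     # assigned reviewers per application

# Expected number of Assistant Professor reviewers on any one application
expected_juniors = junior_share * reviewers_per_app
print(round(expected_juniors, 2))  # 0.36 -- i.e., usually zero or one

# Chance that an application draws no Assistant Professor reviewer at all
p_no_junior = (1 - junior_share) ** reviewers_per_app
print(round(p_no_junior, 2))  # 0.68
```

So roughly two-thirds of applications would see no junior reviewer at all, and even when one is assigned, theirs is one score among three (and, per the deference point above, likely a conforming one).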

20 responses so far

  • Dude says:

    Dude. I vote post the most difficult to read ... e v e r .

  • DrugMonkey says:

    whassamatter? Not enough YELLING? Or do I need to take lessons from this guy?
    Help me out here Dude...

  • neurolover says:

You're talking about the breakdown of peer review in times of famine. The only solution I see is less complete reliance on peer review. Peer review is fabulous, because only one's peers actually understand what the work is about. It's beautiful when it works, because it really lets your work be evaluated by experts, who can look at it objectively, without the bias that we bring into our own work.
    But, it's also terrible because peers are also your competitors. In times of plenty, that balance works out robustly in favor of peer review. In times of famine, the competitive edge, people's ability to be unbiased and objective becomes much more difficult.
The only solution I see is to bring in another layer of oversight by folks who have nothing to gain or lose based on the outcome of the decision. But, I'm not sure who those people might be in the case of grants. In the case of manuscripts, I assign that job to the editors.

  • DrugMonkey says:

    This smells suspiciously like a learned-helplessness response neurolover. Social memes and beliefs persist because they are actively fostered and go unchallenged. Challenging such beliefs can change them. The reference to gender and ethnicity based bias is perhaps a bit forced and over the top but the principles are similar. Yes, bias and bigotry can be addressed by top-down and authoritarian structures but they can also be ameliorated by grass roots changes. I might even argue that until the masses shift the social acceptability of a given belief, top-down mechanisms are going to be quite limited in effect.

  • C says:

    Unless I'm misunderstanding something, DM is talking about people blaming the wrong people for not getting funded, and the problems that occur when there's a power-differential between the accusers & the accused. How is this an example of the breakdown of peer review?

  • neurolover says:

    True, my comment does have a whiff of learned helplessness about it, and I don't object to challenging the assumptions (as you are trying to do).
I do also believe, however, that cooperative systems like peer review are prone to disintegrating when the perception of fairness and true cooperativity degrades (and it's a fast process). You're describing perceived unfairness by the people who power the system. Once that happens, they can justify their own unfairness (the bullying you describe).
    Adversarial systems start out with a different premise, and decision making by those who are not your competitors, and presume that participants in the competition will be biased (i.e. advocates). We love our cooperative system in science. And rightly so, because when it works, it's so much better than an adversarial system. To work, though, it has to be both fair and perceived as fair. I think we're in a danger zone right now. The danger isn't just that people won't be evaluated fairly, it's also that they won't believe that they have been evaluated fairly -- meaning that they have no reason to believe honest criticism (what your senior investigators are doing).
    I don't have sufficient institutional memory to really argue that this is a *new* danger zone -- we may have been here before many times in the past; it could be a cyclical phenomenon that rights itself as conditions improve. But, I'm not sure.

  • neurolover says:

perhaps my simul-post explains why I think it's a breakdown of peer review? In shorter terms, I think peer review depends on both fairness (which DM says is there) and perception of fairness (which DM describes senior folks questioning). Even if we have the former, without the latter, cooperativity will break down and the system will become unfair.

  • DrugMonkey says:

    Adversarial systems start out with a different premise, and decision making by those who are not your competitors, and presume that participants in the competition will be biased
    An interesting distinction. I will have to mull this over. On first consideration I think peer review has elements of adversarial and cooperative systems. Perhaps what makes things so frustrating is that it is a system that cannot make up its mind which it is?
    The danger isn't just that people won't be evaluated fairly, it's also that they won't believe that they have been evaluated fairly -- meaning that they have no reason to believe honest criticism (what your senior investigators are doing).
    Very well put. I do think the echo chamber of "something is WRONG with review" drives your last point in an even faster circle. I guess on some level I hope my comments in this area can maybe slow things down a bit.

  • Odyssey says:

    DM wrote:
    Peer review of papers? C'mon. Everyone has received some papers to review for which they are clearly not in the most obvious category. If you happen to know some editors or serve on one of those Editorial Boards (meaning you get all the stuff they have trouble getting reviewers for) you know what I'm saying. Why do people forget this when they are gaming the identity of reviewers?
    About ten years ago, the editor of a journal in my field told me that he often has authors approach him at meetings telling him they know exactly who reviewed their manuscript(s). He went on to say that 95% of the time they're wrong. I suspect he was right and that that percentage really hasn't changed.

  • dude2, actually; not dude says:

    Um, the first post from dude above is not the same person you referenced (now I'm dude2). But nice try.

  • I am just as guilty as the next hot skirt for bad-mouthing my manuscript reviewers when they don't bend over backwards to praise me. I have been known, on occasion, to suggest that they might have been engaging in sexual activities with barnyard animals while reviewing my stuff. However, this venting usually goes on for five minutes or so and I am back in the saddle (but not in the saddle of the animal they were performing the act with).
    You see, what frustrates me most about a reviewer's comments is not that they hate my work, but that I obviously did not convey my message and science clearly enough the first time. And that's largely the source of my anger...my own frustration with myself. On the other hand, I think what you point out here is a genuine problem. The idea that people "blame" others for "killing" their work. In the overall scheme of things, I have no solution for you. In the short scheme, I also have no tolerance for bullshit and politics in science and refuse to listen to whining. And perhaps that's what we need? An editorial in Science or Nature, or one of those other fancy-pants journals, from the most foul-mouthed person I know telling the whiners to "sack the fuck up."

  • PhysioProf says:

    I agree with DM and those who have also stated that (1) it is almost impossible to know for sure who a reviewer is and (2) that scientists need to sack the fuck up and accept that genuinely biased peer review is extremely uncommon. What *is*, however, all too common is the pathological inability of a manuscript reviewer to suggest acceptance of a manuscript without indicating that at least one additional experiment is required.

  • Becca says:

What you say about the "echo chamber" reinforcing the notion that "something is WRONG with review" is very interesting.
    It seems to me that you could make the same argument about anything people regularly complain about. You could just as easily replace "peer review" with "conflict of interest policies", "NIH grants", "training of the next generation of scientists", "attitudes toward outgroups" or even "the system".
    The questions ultimately become 1) how do you pick your battles? and 2) how do you fight your battles without excessive collateral damage?
    When does the system need to be questioned, because all systems do have flaws, and when does the system need to be praised, because we need to cooperate to Get Stuff Done?
    It's not straightforward.

  • juniorprof says:

    What *is*, however, all too common is the pathological inability of a manuscript reviewer to suggest acceptance of a manuscript without indicating that at least one additional experiment is required.
    This is indeed a huge problem. We need to remember that science is a process. I have been able to argue my way out of "doing one more experiment" on multiple occasions, presumably with the revision going back to the same reviewer. It's quite liberating to put your foot down and say enough.

  • DrugMonkey says:

    It seems to me that you could make the same argument about anything people regularly complain about.
    Indeed. If the point were only about tactics. I like to think that the point is also about objective reality. We can move beyond mere complaining by introducing some evidence and a rational argument. So in the case of junior investigators, we can ask how their scores and funding rates stack up, look at historical trends, etc. Those data are available. We can then ask questions about what this means for the future and discuss things in a rational manner.
    In the case of older types whining about the younger reviewers who are axing their proposals, we have a startling lack of evidence. Are they as a group getting poorer scores and fewer grants funded? Are they doing better or worse than the current averages? What analysis of reviewer behavior can they point to that gives credence to their hypotheses? (Here I will admit that it is very difficult to test because career status of the reviewer is bound up with experience actually conducting reviews.) What rationales do they advance that suggest that biasing for them is a good idea?
    The point being that I never, ever hear a good argument. At best some lame opinion that jr reviewers are harsh. Which is hard to credit because my experience on my study section is not at all consistent with this position. Mostly I get a lot of stammering and avoidance and handwaving. When I do get the argument about "In my direct experience jr reviewers are harsh" it turns out to be the two Assistant Professors they happen to have in their home department doing review games. (ok, I exaggerate. but not much.) For new readers, I've been on this theme since the beginning.
    1) how do you pick your battles? and 2) how do you fight your battles without excessive collateral damage?
    I try to be as clear as possible when I exhort people to challenge the system in some way that it is imperative to be smart about it. Also to think hard about what costs you are willing to risk, what costs you are likely to incur at your career stage, etc. I also try to be clear that I recognize that quite a lot of my behavior is an established trait and that I may be recommending behavior that is easy for me but very hard for others (not you, of course, becca).
    I think the very first principle is to have a firm grasp of what time it is in career-space, what is happening with everyone else, what your capabilities and track record say about you, etc. This allows you to make informed criticisms without coming across as being personally whiny. After all, this is the easiest dismissal for a greybeard to make. To assume that you just aren't good enough.
    For example, I feel much more confident in saying, "Sorry, but no Dr. Bluehair, you do not write better proposals than Dr. Yun Gun" and other such opinions after having sat on study section and reviewed a lot of proposals from senior, middle and junior investigators, from big institutes, small institutes, wtf? institutes, etc.
    When I was in early career, just submitting my own grants, I really didn't know if I was getting beat up just because I sucked or what. So I can bet I never said the kind of stuff I say now.
    Of course, I didn't read up on all the funding and review stat stuff back then either. So I'd say that being familiar with the various reports, datasets and findings provided by CSR can substitute in certain arguments and in any case is highly supportive to any anecdote-based argument.
    Second, and I know Zuska would crawl all up on me for this, I think it is usually best to skirmish. To make your comments consistently, as skillfully as possible but not to engage in toe-to-toe warfare with your PI or Chair or something. The angry routine has a place but it has a way of interfering with gettin' er done.
    http://drugmonkey.wordpress.com/2007/06/13/eye-on-the-prize/
    Some people really don't mind a good vigorous debate but everyone has a point of no return. I suspect it is easier than you think to assess the threshold when you are discussing this type of stuff. When BigCheez is practically popping a vein one hour after receiving his summary statement, well, that's not the time to get into it. Wait a month or so 🙂

  • Becca says:

    but but but but... if we forbid Distinguished Senior Professors from complaining based on the measly concerns of things like "objective reality" then they would never have anything to complain about! They have 'tenure, a grant, and five grad students (at least three of whom are Chinese)', life is perfect, right?
    Seriously, since when has Objective Reality ever stopped anybody from complaining? Well, except me, because I-am-so-very-rational. But, alas, not everyone is like me.
    And we all know that what's hard for me is *SITTING ON IT* when it is wise and prudent to do so. Speaking up is not, historically, my primary obstacle. I do understand that others are not so *ahem* "comfortable expressing themselves" (read: subject to a bad case of "verbal diarrhea" as one of my ex-PIs so poetically put it).

  • MissPrism says:

    You know, when I first read this I agreed with you, but I just got my first set of comments back and I am ABSOLUTELY POSITIVELY 100% SURE that the reviewers were:
    1. Puddleglum the Marsh-Wiggle
    2. That guy who writes his name on all the food items in the shared fridge
    3. A Labrador retriever on Prozac.
    4. My secret admirer
    5. An evil robot with laser eyes.

  • Pascale says:

    The biggest problem with peer review is the famine of funding. Given the quality of grants I have reviewed, the top 20% are excellent and outstanding. I may quibble with some of the stuff, but, in general, the science is interesting and I see no fatal flaws. In the best of all worlds, all of these proposals would be funded.
    The problem then is that reviewers are having to "differentiate" proposals that really aren't different in quality. Opinion and cheerleading thus become important components of the process, and one reviewer's focus on some trivial component is enough to shoot something down. Reputation of a senior investigator is often the difference between a fundable and a merely respectable score (or between discussion and triage).
    The way I see it, midlevel PIs are the group at risk right now. New PIs still get some extra points for being junior, and senior investigators with mega-factory-labs can survive with one less grant.

  • DrugMonkey says:

    The way I see it, midlevel PIs are the group at risk right now.
    You and drdrA and friends are going to keep this up until I write a smackdown, aren't you? 🙂

  • qaz says:

    The problem then is that reviewers are having to "differentiate" proposals that really aren't different in quality. Opinion and cheerleading thus become important components of the process, and one reviewer's focus on some trivial component is enough to shoot something down.
    This is an interesting problem with the new system. It's no longer one "reviewer", but just one "member of the study section". In my recent experience, the new system tended to have a lot more single range scores (where all three reviewers agree that the score should be, say, "2"). NIH is trying to say that reviewers can't distinguish between all the grants that get a 2, but then they average the votes of all the people on the study section. So if one person [not a reviewer] (out of a study section of 20) votes out of range (say giving it a 3), then the average is now 2.05 and that grant is now higher than all the other "2" scores. In the old system, averaging diminished the effect of one wacky reviewer. Not so much in the new.
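qaz's arithmetic can be sketched out directly. Here is a toy illustration (made-up panel size and scores, not an official NIH scoring formula) of how panel-wide averaging lets a single out-of-range vote break a tie among grants the assigned reviewers scored identically:

```python
# Toy illustration of qaz's point: with panel-wide averaging, one
# out-of-range vote among 20 members shifts the mean and re-orders
# otherwise-tied grants. Numbers are hypothetical.

def panel_average(votes):
    """Mean of all panel members' scores (at NIH, lower is better)."""
    return sum(votes) / len(votes)

# 20-member panel: everyone votes in range at 2...
all_in_range = [2] * 20
# ...versus the same panel with one member voting out of range at 3.
one_outlier = [2] * 19 + [3]

print(panel_average(all_in_range))  # 2.0
print(panel_average(one_outlier))   # 2.05 -- now ranks behind every straight "2"
```

Under the old three-reviewer averaging, one wacky score was diluted among only the assigned reviewers' scores; here a single stray vote still moves the final number by 1/20 of a point, which is enough to re-rank the grant.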
