How many retractions does a PI get?

I was sort of kicking the latest journal article retractions around with a colleague. The Hellinga case. The Buck retraction. A few more in each of our respective spheres.
The question came up.
How many retractions can a lab survive?
I think one is a given. There's the usual muddying of blame: accuse the postdoc or grad student, or plead honest mistake. For the first one.
What happens if there is a second one? Does it matter if it is 2 years apart or 15 years apart? Is that it, buh-bye, kiss of death? Or does it just depend on how good the excuse making is?

20 responses so far

  • Dan says:

    I think it depends on the nature of the retraction. Fraud? Or a control that someone overlooked?
    I started a post-doc that was based on work from the lab that had just been published in Science. I quickly figured out that the work was an artifact. While I could replicate critical results in the paper, I also included a control that the other lab members had not included, which demonstrated the artifactual nature of the result. It was a control that was not obvious to other members of the lab because this bit of experimentation was outside the set of techniques that is typically done in the lab. (I had done my Ph.D. using this technique, so I looked at the problem a little differently.)
    No retraction has come out, and as I wasn't an author on that paper, I don't feel like I can push the issue. But if one were to come out, what should be thought of the lab?

  • PhysioProf says:

    It was a control that was not obvious to other members of the lab because this bit of experimentation was outside the set of techniques that is typically done in the lab.

    Have you read Young Female Scientist's latest post? She would not be surprised by your experience.
    In relation to the post itself, what about the Chang x-ray crystallography debacle? The dude had to retract every paper published by his hot young lab. People I know in the field believe that he will recover and be ok.

  • Andrea says:

    No retraction has come out, and as I wasn't an author on that paper, I don't feel like I can push the issue. But if one were to come out, what should be thought of the lab?
    Regardless of whether you were an author or not, if there is good reason to believe the results were an artifact, a retraction should come out (or if you have more data providing different conclusions, a separate counter-paper). It is not just a matter of keeping the published record clean, but there likely are people out there wasting work and writing grants (and getting them!) based on that mistake.
    It is still possible your PI believes the original data are correct, and your control experiment flawed in some respect - either way, you two should have a good discussion on this, and if this is indeed a clear case of artifact, should act on it.
    No one would think badly of the lab for publishing a retraction about an honest mistake, or certainly much less badly than they would if it becomes clear that your PI knew this was an artifact and just sat on it.

  • Lorax says:

    No retraction has come out, and as I wasn't an author on that paper, I don't feel like I can push the issue. But if one were to come out, what should be thought of the lab?
    Based on the limited information you give, I do not believe a retraction is in order. If we retracted papers whenever new information revealed an experiment or set of conclusions to be wrong, then every single paper ever written would eventually need to be retracted. You do an experiment that demonstrates previous work was incorrect, you publish it, and you move science forward; yes, it can be difficult publishing a negative result. Presumably it would be embedded with additional work looking at the scientific problem in other ways, not as a stand-alone paper.

  • Becca says:

    I understand both Andrea's and Lorax's perspectives on this one.
    Now that almost everything is digital, it'd be nice if a single flawed experiment could be indicated with an "authors note" if the lab that originally published it reported the artifact, and an "editors note" if it was some other lab that found the problem.
    From my perspective, such an 'authors note' would inspire immense respect for the labs that 'fessed up (and saved some poor hapless sod trying to repeat the results six months of her life).
    Sometimes you see these "semi-demi-hemi-retractions" on the editorial pages: somebody will write in to say "you did that experiment wrong, because xyz"... and once in a while the reply will be "upon checking xyz, we did do the experiment wrong, but it does not substantially alter the overall message of the paper, because abc".
    I'm not so sure how I'd feel about the PIs of papers that ended up with 'editor's notes': a couple of those wouldn't raise an eyebrow, but if there were a ton (and more than in the rest of whatever field that was), it would probably be a concern.

  • PhysioProf says:

    There are some prominent people in the field of behavioral genetics whose results in general are not trusted, due to well-documented systematic errors in their published work.

  • juniorprof says:

    From my perspective, such an 'authors note' would inspire immense respect for the labs that 'fessed up (and saved some poor hapless sod trying to repeat the results six months of her life).
    I like this idea. In pharmacology a consistent problem is that the interpretation of a given set of results will change with time due to lack of specificity of compounds. Oftentimes, when you start using a compound you might assume a degree of specificity based on previous results, but inevitably more data will show that these compounds have "off-target" effects you never dreamed about that might influence your previous interpretations. Note that in this instance, there is not a problem with the data but with the interpretation of the data (in other words, the discussion is now screwed). In many cases it would be nice to have such a mechanism for annotation of older papers, because oftentimes the off-target effects make the work even more interesting (and unexpected).

  • juniorprof says:

    There are some prominent people in the field of behavioral genetics whose results in general are not trusted, due to well-documented systematic errors in their published work.
    This kinda shit really pisses me off. If you're in the field you know this stuff. If you're an outsider trying to jump up a new project based on this work you're fucked because you don't know any better. There ought to be a list somewhere that informs poor hapless saps like me of what to watch out for in reading the behavioral genetics lit.

  • Monisha Pasupathi says:

    Actually, apropos juniorprof, not only does this suck for those basing new projects on behavioral genetics, but also for poor saps like me out there reading and teaching undergrads about it in the realm of developmental psychology, trying to prevent all kinds of potentially dangerous standard layperson fallacies about heritability and such.....
    Love this blog, by the way.

  • hm says:

    Some people just don't retract at all. It's conceivable that Buck didn't have to say anything. It would have just taken years for someone to try to replicate the findings, publish a completely different result, and then move away from the false findings eventually. So, in that case, she would still have 0 retractions and be seen as a demi-god.
    On the other hand, once Buck found out that this was a repeat offense, and that many components of her published findings may have been falsified, she would have had to retract everything. Rather than doing that, she put out the alert to cover her ass AND that of her lab. It makes sense, because no real scientist would want to keep making retractions. So, I think 1 retraction would be bad but 2+ would be damning. She stopped the bleeding before it became a festering wound.

  • CC says:

    If you're in the field you know this stuff. If you're an outsider trying to jump up a new project based on this work you're fucked because you don't know any better.
    In fact, that's exactly how I spent much of my postdoc, caught between the Scylla of "sleepy inbred subfield where 80% of the stuff is wrong and they don't really care" and the Charybdis of "megalab where the PI orders you to read a few papers, maybe make some phone calls to iron out experimental details and then revolutionize a subfield you don't know the first thing about".
    IME, the smart, desperate postdoc, running a gauntlet of feedback at lab meetings from people who know even less about the subfield but are terrifyingly smart and critical, usually pulls it out. I did land my C/N/S paper, and much of it has since been replicated in sleepy-subfield labs; indeed, more of it than they can usually replicate from their own publications.

  • PhysioProf says:

    once Buck found out that this was a repeat offense

    Evidence for this, please. Otherwise, please do not make libelous accusations on this blog.

  • hm says:

    PP, she said so herself. http://www.nature.com/nature/journal/v452/n7183/full/nature06819.html
    I think you did not read my comment carefully, which said that, to avoid having to make multiple retractions, you retract all the questionable things at once. Note that the retraction is stated in the plural, meaning it's not just one type of experiment that they couldn't reproduce.

  • hm says:

    Here's more evidence for you, PP:
    http://www.nytimes.com/2008/03/07/science/07retractw.html
    last sentence.

  • PhysioProf says:

    Dude, thanks for clarifying. I did misunderstand your point.

  • James F says:

    As Dan noted, it depends on the nature of the retraction. Francis Collins acted quickly when evidence of misconduct in his lab was brought to light, including a preliminary alert to other researchers in the field and making a personal visit to confront the author in question, Amitav Hajra, who had since left his lab. Three papers were retracted, and Collins's career and reputation were intact.

  • Neuro-conservative says:

    PP - can you give a couple more clues about the behavioral genetics people?

  • PhysioProf says:

    Nope. Not gonna get into it on blog.

  • blatnoi says:

    Well, in my field there was someone who retracted seven papers (three first, then four more six months later) and survived pretty well. It was all blamed on the graduate student author, who was evil and stuff. They promised to do a review but never delivered on the promise, and the guy is still a famous prof.
    It's a shame, because I was looking forward to the real dirt behind the gossip I heard: that she slept with him and faked the data because she didn't want to disappoint him. Oh well, now I'll never know, because the inquiry got buried.
    I can still pretend to myself that the affair happened, as my own brand of punishment.

  • Criag says:

    Regarding the Hellinga case... if I were the grad student, I would get back in the lab and show them how to do the experiments (redo it all), then go to the other dude's lab, knock on the door, and do the same. I thought I remembered negative controls when I read that paper... anyways, I am not convinced. It's not a perfect example, but sometimes highly skilled individuals who seemingly follow the same protocol get different outcomes; surgery is such an example. There is always an underlying cause, but many times it is not known. Regardless, given the complexity of even the simplest experiments, I think it would have been wise to tap the brakes on that retraction.
