What does a retracted paper mean?

Crossposted from Scienceblogs.

I've been having a little Twitter discussion with Retraction Watch honcho @ivanoransky over a recent post in which they discuss whether a failure to replicate a result justifies a retraction.

Now, Ivan Oransky seemed to take great umbrage to my suggestion in a comment that there was dereliction in their duty to science to intentionally conflate a failure to replicate with intentional fraud. Per usual, we boiled it down to a fundamental disagreement over connotation: what it means to the average person to see that a paper has been retracted.

I rely upon my usual solution, DearReader: a poll. Select all the choices that apply to what you assume when you see a retraction, or that you think should induce one.


Direct link to the poll in case you can't see it.

My position can be found after the jump....

I think we need to be exceptionally clear in the business of science that a failure to replicate is not, in fact, evidence of fraud.

I don't give a fig what any journals might wish to enact as a policy to overcompensate for their failures of the past.

In my view, a correction suffices in most cases where there is no fraud (and yes, we need to concentrate on making sure that any search engine that lands upon an official mention, especially PubMed and the journal's own site, makes it clear that the paper was in fact corrected).

Retraction, to me, implies that there is reasonable evidence of some sort of shenanigans.

Related reading: Being later found to be right does not reduce your blame for faking it in the first place.

16 responses so far

  • CPP says:

    Now, Ivan Oransky seemed to take great umbrage to my suggestion in a comment that there was dereliction in their duty to science to intentionally conflate a failure to replicate with intentional fraud.

    Dude, what the fucken fucke is with you and your phantom of the motherfucken opera sentence structures? This shitte is 100% fucken unintelligible.

  • whimple says:

    Statistics tells you that failure to replicate is expected. If you report at P=0.05, that means that even when there is no real effect, there is a 5% chance of getting a result like yours by chance alone, through no fault of your own. Likewise, publication bias ensures that a large percentage of published work will be exactly these statistical anomalies. It is very disturbing that someone writing about science would miss this fundamental concept.
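Whimple's 5% figure is easy to check directly. This minimal, stdlib-only Python sketch (all numbers hypothetical: 10,000 simulated experiments of n = 30 each, drawn from a true null) counts how often a study of a nonexistent effect still comes out "significant" at the two-sided 0.05 level:

```python
import random
import math

random.seed(1)

def run_null_experiment(n=30):
    """Draw n observations from N(0, 1): the null hypothesis (mean = 0) is TRUE."""
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(sample) / n
    z = mean * math.sqrt(n)   # z-statistic for known sigma = 1
    return abs(z) > 1.96      # "significant" at the two-sided 0.05 level

trials = 10_000
false_positives = sum(run_null_experiment() for _ in range(trials))
print(f"false-positive rate: {false_positives / trials:.3f}")  # close to 0.05
```

Roughly one in twenty of these null experiments is "significant" by construction, with no fraud anywhere; publication bias then preferentially selects exactly those for the literature.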

  • drugmonkey says:

    CPP, as one of my longest-term readers, there is absolutely no reason that you are not entirely familiar with my rather considerable limitations as a writer....

  • Hope springs eternal!

  • anonymous postdoc says:

    See whimple's point. I'm completely shocked there's much debate here. There will always be false positives, even in randomized controlled experiments (see, e.g., http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/). I do computational work with observational data, and we expect to be wrong even more often: we are guessing at the underlying model in addition to the specific parameter values. We try to find robust patterns that replicate, but given a dearth of quality observations, our models will almost always require revision. This means they may not hold on another set of observations, i.e., they are not strictly replicable. That means the model was misspecified, but that is the nature of studying complex processes with a finite number of observations.
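The misspecification point can be sketched the same way: a wrong model can fit the original observations tolerably and still fail on new ones. In this hypothetical Python example (all functions and numbers are illustrative), a straight line is fitted to data generated by a quadratic process; it looks fine over the originally observed range and falls apart on observations from a new range:

```python
import random

random.seed(2)

def noisy_quadratic(xs):
    # The "true" process is quadratic; our model will wrongly assume a line.
    return [x * x + random.gauss(0.0, 0.5) for x in xs]

def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x (closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def mse(a, b, xs, ys):
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

# "Original study": observations only over x in [1, 2]; a line fits tolerably there.
xs1 = [1 + i / 50 for i in range(50)]
a, b = fit_line(xs1, noisy_quadratic(xs1))
err_in = mse(a, b, xs1, noisy_quadratic(xs1))

# "Replication": new observations over x in [3, 4]; same model, much larger error.
xs2 = [3 + i / 50 for i in range(50)]
err_out = mse(a, b, xs2, noisy_quadratic(xs2))

print(f"in-range MSE: {err_in:.2f}, out-of-range MSE: {err_out:.2f}")
```

The fitted line "replicates" on fresh data drawn from the original range but not on data from elsewhere, with no fraud involved: the model was simply misspecified.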

  • Markk says:

    Speaking as someone who is not a professional scientist, retraction absolutely implies wrongdoing to me. Science journals are the venues where research is communicated, and it is absolutely certain that some of it will be wrong and will need to be corrected by new papers and correction notes.

    Retraction, to me, means abuse of the scientific process, whether deliberate or through bad practice. If I am interviewing a scientist for a job (which I have done a couple of times) and there was a retraction of something, that would definitely be a black mark, and it would need a good explanation, which I would ask for if I knew about it. I would immediately remove the person from consideration if I found out about the retraction from someone other than them and they had not mentioned it.

    A correction of a paper, or even more than one, might actually be a positive in my book: the person is working on a subject where there is a lot of interest and back and forth. Of course, really bad work as the reason for a correction would still be bad, but a correction by itself is not a bad thing to me.

    A journal retracting a bunch of stuff just because it turned out to be wrong would make me consider it a low-class journal not worthy of notice. It would imply something bad about the editors, and perhaps, though not necessarily, about the contributors. People did the work in good faith and it was wrong; that is still good communication of what people are doing in a field.

  • Pinko Punko says:

    Retractions only mean what you think they mean because journal retraction notices are so shitty. I would suggest some percentage of retractions happen because analyses were faulty or reagents were screwed up (whoops, HeLa cells! whoops, antibody specificity totally wrong!).

    Whimple's statistics seem kind of simple. What if the result is "this strain of yeast does not grow on this plate" and that strain of yeast DOES grow on that plate? That is not a statistical result, yet it could be wrong, and if it is wrong (contamination, incorrect genotyping), the result should be retracted. And journal editors should write retraction notices that allow retracted papers not to read like they imply FRAUD FRAUD FRAUD. That way, douchefuckkes who publish shittie C/N/S papers that "everyone knows" are wrong will not get the cachet on their fuckken CVs. To coin a phrase.

    And then maybe Glamour mag science could improve in certain instances.

  • qaz says:

    The problem is that we have conflated two issues in the scientific literature. Call it the Science and Engineering sides or the Basic and Translational sides or maybe "what it is" and "what we should do".

    The first side (Science/Basic/"what it is") is a historical record of our best attempts at figuring out how the world works. In this first side, retraction should only be for mistakes. (It would be nice to distinguish the cause of the retraction to differentiate fraud and accidental mistakes, but this retraction should only be for stuff like "we discovered we'd mislabeled our chemicals, sorry guys!".) Retraction in the historical record should never be used to correct a valid attempt at scientific discovery. Any scientist is expected to track down the latest data and work from that. No one "retracted" Ptolemy in response to Copernicus or Newton after Einstein.

    The second side (Engineering/Translation/"what we should do") is our best guess at what one should do given the latest data from the first side. Here we definitely want to "correct" the literature. I'm not sure "retraction" is the right term, but we want to have an evolving document that sets out our best belief in what to do. Right now, the medical community is using publication in the scientific literature for this. I'm not sure it's the best mechanism for this.

  • whimple says:

    Pinko: What if the result is "this strain of yeast does not grow on this plate" and that strain of yeast DOES grow on that plate? That is not a statistical result...

    Yes, it is. The difference is that it is a statistical result with such high confidence that the probability the observation is incorrect based on chance alone is dwarfed by the probability that if the observation is incorrect it is due to flawed methodology (including the possibility of fraud).

    My personal view is that retractions should be reserved primarily for fraud and for egregious methodological errors. The retraction should also specify exactly what the problem was.

  • anon says:

    For once, I agree with whimple. DM, the poll you have is too black-and-white. There can be many reasons why one lab cannot replicate a published finding from another lab; it happens all the time, and it usually has little to do with fraud. I am aware of one situation where the original investigator was asked to replicate their own finding, except this time the investigator was blinded to the reagents used in each condition. This person could not replicate their own result. In this case, it was the SAME person performing the SAME experiment in the SAME lab with the original reagents. The failure could be due to fabrication of the original finding, or to bias in interpreting the results. Fraud/fabrication is, in many cases, extremely difficult to prove; we have seen that investigations of such accusations can take years. The P.I. in this situation opted to publish an erratum instead of a retraction.

  • drugmonkey says:

    Anon, you are agreeing with my position in this.

    Pinko, you are cracking open an interesting can of worms....that dodgy figure that pushes it over the threshold of S/N acceptance....even if later there is an erratum, it was totally worth it. Hell, even full retractions take a few years: plenty of time to land that job or get that grant. Just apologize later. You still got yours. Eh?

  • Pinko Punko says:


    That can was exactly what I was cracking. The benefits of the possible publication are seemingly worth the sloppiness for a lot of labs, because for many labs, what are the consequences? They pollute the field, and they increase the secret knowledge internal to their lab that their own papers are less than stellar, giving them a leg up on the followups, because they'll know what is what and nobody else will, save for "grapevine" effects.

    For example: two Science papers from one lab, where the initial Science paper "unequivocally" places a complex downstream in a pathway based on deletion phenotypes of pathway components, and then the second Science paper says "actually it is THIS way," and one particular strain has the opposite phenotype, allowing the effect to be mapped further upstream in the pathway. Good times.

  • whimple says:

    I love this kind of crap. There's a guy in my former field we used to semi-openly mock because he would publish big-journal paper 1 ("We just found this zippy new thing!") followed later by big-journal paper 2 ("Although the field believes zippy, surprisingly we have discovered that other-zippy is really the case!!!").


  • katolab says:

    Alleged image fraud by the Kato lab at the University of Tokyo in Japan.
    Research misconduct? Fabrication? Falsification? Unintentional and inadvertent mistake? Coincidental similarity? Shigeaki Kato laboratory: Institute of Molecular and Cellular Biosciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-0032, Japan.

  • katolab says:

    Alleged image fraud at MD Anderson Cancer Center.
    Research misconduct? Image duplication? Fabrication? Falsification? Unintentional and inadvertent mistake? Coincidental similarity? Cytokine Research Laboratory, Department of Experimental Therapeutics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA.
