The Journal Ban Hammer: Nastier implications

Mar 02 2015 · Published under Ethics, Science Ethics, Scientific Publication

There is one thing that concerns me about the Journal of Neuroscience banning three authors from future submission in the wake of a paper retraction.

One reason you might seek to get harsh with some authors is if they have a track record of corrigenda and errata supplied to correct mistakes in their papers. This kind of pattern would support the idea that they are pursuing an intentional strategy of sloppiness to beat other competitors to the punch and/or just don't really give a care about good science. A Journal might think either "Ok, but not in our Journal, chumpos" or "Apparently we need to do something to get their attention in a serious way".

There is another reason that is a bit worrisome.

One of the issues I struggle with is the whisper campaign about chronic data fakers. "You just can't trust anything from that lab". "Everyone knows they fake their data."

I have heard these comments frequently in my career.

On the one hand, I am a big believer in innocent-until-proven-guilty and therefore this kind of crap is totally out of bounds. If you have evidence of fraud, present it. If not, shut the hell up. It is far too easy to assassinate someone's character unfairly and we should not encourage this for a second.

Right?

I can't find anything on PubMed that is associated with the last two authors of this paper in combination with erratum or corrigendum as keywords. So, there is no (public) track record of sloppiness and therefore there should be no thought of having to bring a chronic offender to task.

On the other hand, there is a lot of undetected and unproven fraud in science. Just review the ORI notices and you can see just how long it takes to bust the scientists who were ultimately proved to be fraudsters. The public revelation of fraud to the world of science can be many years after someone first noticed a problem with a published paper. You also can see that convicted fraudsters have quite often continued to publish additional fraudulent papers (and win grants on fraudulent data) for years after they are first accused.

I am morally certain that I know at least one chronic fraudster who has, to date, kept one step ahead of the long (read: short and ineffectual) arm of the ORI law despite formal investigation. There was also a very curious case I discussed for which there were insider whispers of fraud and yet no findings that I have seen so far.

This is very frustrating. While data faking is a very high-risk behavior, it is also a high-reward one. And the consequences are not inevitable. Some people get away with it.

I can see how it would be very tempting to enact a harsh penalty on an otherwise mild pretext for those authors that you suspected of being chronic fraudsters.

But I still don't see how we can reasonably support doing so, if there is no evidence of misconduct other than the rumor mill.

5 responses so far

  • Ola says:

    Here's a fun game - pick 5 random journals and see how long it takes you to find their "ethics" person on their website. Now go do the same for 5 random universities and try to locate a valid email for the research integrity officer.

    What this game underscores is that the guidelines for how these matters are handled are all over the fucking place. The way that interactions between the different parties involved - author / journal / institution - play out is just massively inconsistent. Everyone assumes that once you, dear reader, report something dodgy, there's a uniform set of rules that everyone plays by to fix it. Absolutely not so. It's got more to do with who blew the vice dean for research last night than any kind of formula being adhered to.

    Attempts to improve this are just a waste of space (yes COPE, I'm talking about you). The lack of leadership by both NIH and ORI is where the blame for this situation can be squarely laid.

  • drugmonkey says:

    Guidelines are not as important as the will to do something about them. The will is what is usually in short supply.

  • . says:

    Have you ever reported fraud to your institution? Conflicts of interest among the administrators overseeing the inquiry and investigation are rampant (Chairs and Deans who depend on IDCs). The institutions have every incentive to look the other way, as NIH and ORI will do nothing in 99% of the cases, and if they do anything it will take years.

  • Laurent says:

    This made me think about this point:

    Here we are discussing fraud (data fabrication): the amount of fake results that have been published and discovered, and the amount not yet uncovered.

    Has any 'false accusation' case been documented? That is, data suspected to be fake that turned out to be correct upon investigation (and which made a story of it)?

    Of course you'd expect valid results to hold up under scrutiny, so my question is more along the lines of "is there any documented case where an accusation of faking was shown to be an attempt to destroy someone's work or reputation?"

    I don't have any example for a specific paper. In my field, there was an investigation of a famous ecologist in which the fraud inquiry turned out 'inconclusive' (no evidence leading to any firm conclusion). This, in my opinion, is about the best one can expect from such accusations, which also means the accused becomes the 'usual suspect' in the long term no matter what, even if actually innocent.

    And that is freaking scary.

    I'd add that, in theory, one usually has backups to open up to scrutiny, including samples of various quality and quantity (seeds, microscope slides, DNA samples). With two consequences:
    - how long should one keep experimental "byproducts" after results have been published? When space limitations are involved, how do you decide what to keep and what to throw out?
    - how does one proceed as a short-term contract worker (say, a post-doc), especially with regard to side projects that did not include the PI as a coauthor?

  • drugmonkey says:

    7 years. That is what I always heard. Seems laughably short to me.
