The Common Man has been posting on MLB's cheater-pants-o-the-week story and his effort is fanning a slow flame in Your Humble Narrator which was started by PhysioProf. You will recall that PP's post dealt with a seemingly run-of-the-mill paper retraction story in which the authors admitted that there were enough deficiencies in the published work that the only choice was to retract the entire article.
As the story develops I am becoming convinced that there is more to this little scenario than one random postdoc faking the odd control.
True enough, I am cynical when it comes to issues of scientific fraud; I am not some wide-eyed naif here saying "OMG! Can you believe it???". It must also be admitted that the type of science that always seems to get busted for data faking (whether it be fake subject entries in human clinical trials or bench lab shenanigans with blots and gels) is unlikely to really affect me. My areas of scientific interest are relatively cheat-exposed-scandal-retraction free (save that one little incident). So why would I be getting all red about this?
Well, remember my post on people failing to get their grants and facing lab closure? Do you hear the anguish of those who just want a chance to play and feel like that chance will never come?
Well, scientific cheaters profit, and the fraud is by no means a victimless crime (even when we are not talking clinical research and the findings will never cause a mistake that hurts patients).
Recall the Linda Buck retraction and the hapless blamed postdoc, Z. Zou, who it turned out had managed to win an Assistant Professor position? [Unfortunately dude was caught up in the mass layoffs following Hurricane Ike; see comment thread here.] Did you read that coverage writedit had of a fraudster who perpetuated his data fakery for a very long time indeed, moving from grad school to an independent posting?
Getting back to the case at hand, I'm not even sure who we should be looking at but something smells like more rain's a-coming. One commenter alluded to a discussion on "another bbs" which I can but assume is this one, which had been pounding this blog with referrals. That thread seems to be focused on Wen-Ming Chu, previously a postdoc in the Michael Karin (Research Crossroads Grant Profile) lab and now with his own appointment at Brown University (Research Crossroads Grant Profile).
The comment thread after PP's post was sprinkled with a few specific accusations and leads for anyone to go and check for themselves whether the Karin lab or Professor Chu or random assorted trainees are the source of the stench. I was particularly struck by this comment:
Did you guys notice that in the previous issue of Cell, MK published an erratum for the Budanov and Karin paper for similar shady image processing irregularities. Why does the Budanov and Karin paper get to be corrected but the other paper retracted?
Let me note first that most of the comments have been spot on in tone and strategy. It is fantastic to refer us to the specific figures that one feels are fishy. Let those who are familiar with like data make the read for themselves. I like. Way better than just saying "Everyone in the field thinks they are fakers".
But here's the thing. As evidence of chronic data fakery from any one lab or any one scientist starts to accumulate, we must start to view their grants and even faculty appointments as having been won unfairly. You can say "Oh, it is just one (or three) retractions" if you like. Sure. But how many people without jobs have been hearing PhysioProf's pronouncements about what it takes to land the very top jobs? How many have read my various carping bits about how competitive CNS publications are? You might also think about the very close and sometimes arbitrary calls about grant scoring and wonder how that one extra CNS paper might tip the balance in someone's favor.
It is not ridiculous to remember that every CNS pub which gets accepted undeservedly takes the place of another deserving work. Every new hire who gets the offer on the strength of that cool finding or that extra CNS paper takes the place of someone else. Most obviously, every grant that gets awarded to a data faker is one more grant not going to the righteous. It should be obvious that there is no way to fix these shaftings years after the fact when the frauds come to light.
We need to start fixing this problem.
The first school of thought is that we need to do more in terms of ethical education of trainees. You know, that one day snore-fest with the laughably obvious "scenarios" that you had to endure as a first or second year postdoc? What a crock. Don't get me wrong, I think we should have those basic introductions but we, as a business, need to do much, much more.
The rationale behind the hypothesis that more training will fix the problem is that people somehow do not know what the expectations are with respect to the conduct of science. That individuals are somehow under the illusion that it is okay to simply make up data.
C'mon. That is nonsense.
What is much more likely is that there are systematic features of the environment that encourage some individuals to start down the slippery slope. The thing that I am heating up about is that I think it is high time that some data were collected. Now, as always, the go-to on this discussion is the Nath et al., 2006 paper on retractions. It has some limitations, primarily what I suspect might be a too-credulous acceptance of "unintentional mistakes", but it is most certainly a start.
Here are some other things I'd like to see tracked and analyzed with respect to retractions. Funding source, right down to the NIH Institute or Center if necessary. Funding mechanism (Big Mech versus R01). The journals in which it was published. The local Universities or Institutes. Age and gender and ethnicity of the PI. Scientific domain. Etc.
Soup. to. Nuts.
The reason is that it is all too easy to dismiss sketchy retractions and admissions of fraud on a paper-by-paper basis. It is much harder to get a bead, or clear view, on systematic practices.
My hypothesis, of course, is that obsessive pursuit of the hawtest and greatest, pursuit of CNS pubs for their own sake, huge labs operating in fast-moving areas where scooping is rampant, overwhelming pressure to be first instead of best, editors who pressure for rapid return of not-yet-accepted manuscripts while demanding huge amounts of extra work, perceptions that a CNS pub is the categorical difference between being considered for jobs or not (Hi, PP!), etc., all contribute systematic or systemic pressures which lead some people to the conclusion that faking a band or six is the right choice.
Ultimately the individual is to blame*, but come. on. These incidents are a product of more than just an individual with bad ethical grounding. They are a reflection of a certain dismal aspect of scientific culture.
I submit that viewing retraction, erratum and corrigendum patterns in a more comprehensive manner would help us to target the cultural problems and thereby reduce scientific cheating.
*but honestly. A GlamourMag submission for the lab? Reviewed and re-reviewed by the multiple authors in draft form even before being submitted? The hawtest and greatest thing that has ever happened to the juniormost one or two authors? And nobody notices these duplicated gels and blots that even someone unfamiliar such as myself can see when it is pointed out where to look? Please.
Update 12/12/09: The Molecular Philosophy blog has an interesting take on another fraudster, Luk van Parijs, but much is relevant to this case.
After all how is scientific misconduct different from stealing???? The guy fabricates data, publishes it in an awesome journal, gets all the fame and glory of a great researcher. Then he lands a position at MIT, gets grant money from NIH, gets paid to travel to give talks abroad, and lives a pretty comfy life off of the taxpayers' money. All this while other people waste precious time and huge sums of NIH dollars trying to replicate his nonexistent results.
Maybe, just maybe, such super high-profile, super-competitive labs put too much stress on being productive and making hawt science at the expense of integrity. Maybe van Parijs never heard that all that counts is NOT publishing in Nature, but doing honest good science.