Archive for the 'Scientific Misconduct' category

Rehabilitation of Science Cheaters

May 17 2018 Published under Scientific Misconduct

Nature relates a case of a convicted science cheat attempting to rehabilitate himself.

last August, the University of Tokyo announced that five of Watanabe’s papers contained manipulated images and improperly merged data sets that amounted to scientific misconduct. One of those papers has since been retracted and two have been corrected. Two others have corrections under consideration, according to Watanabe. Another university investigation into nine other papers found no evidence of misconduct.

OK, pretty standard stuff. Dude busted for manipulating images. Five papers were involved, so it isn't just a one-time oopsie.

Watanabe says that the university’s investigation made him aware of “issues concerning contrast in pictures and checking original imaging files”. He says, however, that he did not intend to deceive and that the issues did not affect the main conclusions of the papers.

They always claim that. Oh, it doesn't change the results, so it isn't fraud. Oh? Well, if you needed that manipulation to get the paper accepted (and by definition you did), then it was fraud. Whether it changes the overall conclusions, or whether (as is claimed in other cases) the data can be legitimately re-created, is immaterial to the fraud.

Julia Cooper, a molecular biologist at the US National Cancer Institute in Bethesda, Maryland, says that data manipulation is never acceptable. But she thinks the sanctions were too harsh and incommensurate with the degree of wrongdoing. “Yoshinori absolutely deserves a second chance,” she says.

This is, of course, the central question for today's discussion. Should we let science cheats re-enter science? Can they be "rehabilitated"? Should they be?

Uhlmann is unsure whether it will make a difference. He commends Watanabe’s willingness to engage with his retraining, but says “we will only know at the end of it whether his heart is where his mouth is”.

Watanabe emphasizes that his willingness to embark on the training and acknowledgement that he made errors is evidence that he will change his ways.

Fascinating, right? Watanabe says the investigation brought it to his attention that he was doing something wrong, and he claims it as an "error" rather than saying "yeah, man, I faked data and I got caught". Which one of these attitudes do you think predicts a successful rehabilitation?

And where should such a person receive their rehabilitation?

[Watanabe is] embarking on an intensive retraining programme with Nobel prizewinner Paul Nurse in London.
...
Nurse, who mentored Watanabe when he was a postdoctoral researcher in the 1990s, thinks that the biologist deserves the opportunity to redeem himself. “The research community and institutions need to think more about how to handle rehabilitation in cases like this,” says Nurse, a cell biologist and director of the Francis Crick Institute in London. Nurse declined to comment further on the retraining.

So. He's going to be "rehabilitated" by the guy who trained him as a postdoc and this supervisor refuses to comment on how this rehabilitation is to be conducted or, critically, evaluated for success.

Interesting.

__
H/t a certain notorious troll

14 responses so far

On reviewing scientific work from known sexual harassers, retaliators, bigots and generalized jerks of science

On a recent post, DNAMan asks:

If you were reviewing an NIH proposal from a PI who was a known (or widely rumored) sexual harasser, would you take that into account? How?

My immediate answer was:

I don't know about "widely rumored". But if I was convinced someone was a sexual harasser this would render me unable to fairly judge the application. So I would recuse myself and tell the SRO why I was doing so. As one is expected to do for any conflicts that one recognizes about the proposal.

I'm promoting this to a new post because this also came up in the Twitter discussion of Lander's toast of Jim Watson. Apparently this is not obvious to everyone.

One is supposed to refuse to review grant proposals, and manuscripts submitted for publication, if one feels that one has a conflict of interest that renders the review biased. This is very clear. Formal guidelines tend to concentrate on personal financial benefits (i.e. standing to gain from a company in which one has ownership or other financial interest), institutional benefits (i.e., you cannot review NIH grants submitted from your University since the University is, after all, the applicant and you are an agent of that University) and mentoring / collaborating interests (typically expressed as co-publication or mentoring formally in past three years). Nevertheless there is a clear expectation, spelled out in some cases, that you should refuse to take a review assignment if you feel that you cannot be unbiased.

This is beyond any formal guidelines. A general ethical principle.

There is a LOT of grey area.

As I frequently relate, in my early years, when a famous Editor asked me to review a manuscript from one of my tighter science homies and I pointed out this relationship, I was told: "If I had to use that standard as the Editor I would never get anything reviewed. Just do it. I know you are friends."

I may also have mentioned that when I was first on study section I queried an SRO about doing reviews for PIs who were scientifically rather close to my own work. I was told much the same thing: reviews would never get done if working vaguely in the same area, and maybe one day competing on some topic, were the standard for COI recusal.

So we are, for the most part, left up to our own devices and ethics about when we identify a bias in ourselves and refuse to do peer review because of this conflict.

I have occasionally refused to review an NIH grant because the PI was simply too good of a friend. I can't recall being asked to review a grant proposal from anyone I dislike personally or professionally enough to trigger my personal threshold.

I am convinced, however, that I would recuse myself from the review of proposals or manuscripts from any person that I know to be a sexual harasser, a retaliator and/or a bigot against women, underrepresented groups generally, LGBTQ, and the like.

There is a flavor of apologist for Jim Watson (et rascalia) that wants to pursue a "slippery slope" argument. Just Asking the Questions. You know the type. One or two of these popped up on twitter over the weekend but I'm too lazy to go back and find the thread.

The JAQ-off response is along the lines of "What about people who have politics you don't like? Would you recuse yourself from a Trump voter?".

The answer is no.

Now sure, the topic of implicit or unconscious bias came up and it is problematic for sure. We cannot recuse ourselves when we do not recognize our bias. But I would argue that this does not in any way suggest that we shouldn't recuse ourselves when we DO recognize our biases. There is a severity factor here. I may have implicit bias against someone in my field that I know to be a Republican. Or I may not. And when there is a clear and explicit bias, we should recuse.

I do not believe that people who have proven themselves to be sexual harassers or bigots on the scale of Jim Watson deserve NIH grant funding. I do not believe their science is going to be so much superior to all of the other applicants that it needs to be funded. And so if the NIH disagrees with me, by letting them participate in peer review, clearly I cannot do an unbiased job of what NIH is asking me to do.

The manuscript review issue is a bit different. It is not zero-sum and I never review that way, even for the supposedly most-selective journals that ask me to review. There is no particular reason to spread scoring, so to speak, as there would be in grant application review. But I think it boils down to essentially the same thing. The Editor has decided that the paper should go out for review, and it is likely that I will be more critical than I otherwise would be.

So....can anyone see any huge problems here? Peer review of grants and manuscripts is opt-in. Nobody is really obliged to participate at all. And we are expected to manage the most obvious of biases by recusal.

38 responses so far

Ethics reminder for scientists

If the lab head tells the trainees or techs that a specific experimental outcome* must be generated by them, this is scientific misconduct.

If the lab head says a specific experimental outcome is necessary to publish the paper, this may be very close to misconduct or it may be completely aboveboard, depending on context. The best context to set is a constant mantra that any outcome teaches us more about reality and that is the real goal.

--
*no we are not talking about assay validation and similar technical development stuff.

15 responses so far

Does it matter how the data are collected?

Commenter jmz4 made a fascinating comment on a prior post:


It is not the journals responsibility to mete out retractions as a form of punishment(&). Only someone that buys into papers as career accolades would accept that. The journal is there to disseminate accurate scientific information. If the journal has evidence that, despite the complaint, this information is accurate,(%) then it *absolutely* should take that into account when deciding to keep a paper out there.

(&) Otherwise we would retract papers from leches and embezzlers. We don't.

That prior post was focused on data fraud, but this set of comments suggests something a little broader.

I.e., that facts are facts and it doesn't matter how we have obtained them.

This, of course, brings up the little nagging matter of the treatment of research subjects. As you are mostly aware, Dear Readers, the conduct of biomedical experimentation that involves human or nonhuman animal subjects requires an approval process. Boards of people external to the immediate interests of the laboratory in question must review research protocols in advance and approve the use of human (Institutional Review Board; IRB) or nonhuman animal (Institutional Animal Care and Use Committee; IACUC) subjects.

The vast majority of (OK, all) journals of my acquaintance require authors to assert that they have indeed conducted their research under approvals provided by an IRB or IACUC as appropriate.

So what happens when and if it is determined that experiments have been conducted outside of IRB or IACUC approval?

The position expressed by jmz4 is that it shouldn't matter. The facts are as they are, the data have been collected so too bad, nothing to be done here. We may tut-tut quietly but the papers should not be retracted.

I say this is outrageous and nonsense. Of course we should apply punitive sanctions, including retracting the paper in question, if anyone is caught trying to publish research that was not collected under proper ethical approvals and procedures.

In making this decision, the evidence for whether the conclusions are likely to be correct or incorrect plays no role. The journal should retract the paper to remove the rewards and motivations for operating outside of the rules. Absolutely. Publishers are an integral part of the integrity of science.

The idea that journals are just there to report the facts as they become known is dangerous and wrong.

__
Additional Reading: The whole board of Sweden's top-ranked university was just sacked because of the Macchiarini scandal

13 responses so far

No, Cell, the replication does not have bearing on the original fraud

Sep 12 2016 Published under Scientific Misconduct, Scientific Publication

Via the usual relentless trolling of YHN from Comrade PhysioProffe, a note on a fraud investigation from the editors of Cell.

We, the editors of Cell, published an Editorial Expression of Concern (http://dx.doi.org/10.1016/j.cell.2016.03.038) earlier this year regarding issues raised about Figures 2F, 2H, and 3G of the above article.
...
two labs have now completed their experiments, and their data largely confirm the central conclusions drawn from the original figures. Although this does not resolve the conflicting claims, based on the information available to us at this time, we will take no further action. We would like to thank the independent labs who invested significant time and effort in ensuring the accuracy of the scientific record.

Bad Cell. BAD!

We see this all the time, although usually it is the original authors, aided and abetted by the journal Editors, rather than the journal itself, making this claim. Whether it is a claim to replace an "erroneous placeholder figure" or a full-on retraction by the "good" authors for fraud perpetrated by some [nonWestern] postdoc who cannot be located anymore, we see an attempt to maintain the priority claim. "Several labs have replicated and extended our work" is how it goes if the paper is an old one. "We've replicated the bad [nonWestern, can't be located] postdoc's work" if the paper is newer.

I say "aided and abetted" because the Editors have to approve the language of the authors' erratum, corrigendum or retraction notice. They permit this. Why? Well obviously because just as the authors need to protect their reputation, so does the journal.

So everyone plays this game that somehow proving the original claims were correct, reliable or true means that the original offense is lesser. And that the remaining "good" authors and the journal should get credited for publishing it.

I say this is wrong. If the data were faked, the finding was not supported. Or not supported to the degree that it would have been accepted for publication in that particular journal. And therefore there should be no credit for the work.

We all know that there is a priority and Impact Factor chase in certain types of science. Anything published in Cell quite obviously qualifies for the most cutthroat aspects of this particular game. Authors and editors alike are complicit.

If something is perceived to be hott stuff, both parties are motivated to get the finding published. First. Before those other guys. So...corners are occasionally cut. Authors and Editors both do this.

Rewarding the high risk behavior that leads to such retractions and frauds is not a good thing. While I think punishing proven fraudsters is important, it does not by any means go far enough.

We need to remove the positive reward environment. Look at it this way. If you intentionally fake data, or more likely subsets of the data, to get past that final review hurdle into a Cell acceptance, you are probably not very likely to get caught. If you are detected, it will often take years for this to come to light, particularly when it comes to a proven-beyond-doubt standard. In the mean time, you have enjoyed all the career benefits of that Glamour paper. Job offers for the postdocs. Grant awards for the PIs. Promotions. High $$ recruitment or retention packages. And generated even more Glam studies. So in the somewhat unlikely case of being busted for the original fake many of the beneficiaries, save the poor sucker nonWestern postdoc (who cannot be located), are able to defend and evade based on stature.

This gentleman's agreement to view faked results that happen to replicate as no-harm, no-foul is part of this process. It encourages faking and fraud. It should be stopped.

One more interesting part of this case. It was actually raised by the self-confessed cheater!

Yao-Yun Liang of the above article informed us, the Cell editors, that he manipulated the experiments to achieve predetermined results in Figures 2F, 2H, and 3G. The corresponding author of the paper, Xin-Hua Feng, has refuted the validity of Liang’s claims, citing concerns about Liang’s motives and credibility. In a continuing process, we have consulted with the authors, the corresponding author’s institution, and the Committee on Publication Ethics (COPE), and we have evaluated the available original data. The Committee on Scientific Integrity at the corresponding author’s institution, Baylor College of Medicine, conducted a preliminary inquiry that was inconclusive and recommended no further action. As the institution’s inquiry was inconclusive and it has been difficult to adjudicate the conflicting claims, we have provided the corresponding author an opportunity to arrange repetition of the experiments in question by independent labs.

Kind of reminiscent of the recent case where the trainee and lab head had counter claims against each other for a bit of fraudulent data, eh? I wonder if Liang was making a similar assertion to that of Dr. Cohn in the Mt. Sinai case, i.e., that the lab head created a culture of fraud or directly requested the fake? In the latter case, it looked like they probably only came down on the PI because of a smoking-gun email and the perceived credibility of the witnesses. Remember that ORI refused to take up the case so there probably was very little hard evidence on which to proceed. I'd bet that an inability to get beyond "he-said/he-said" is probably at the root of Baylor's "inconclusive" preliminary inquiry result for this Liang/Feng dispute.

33 responses so far

Professor fired for misconduct shoots Dean

From the NYT account of the shooting of Dennis Charney:

A former faculty member at the Mount Sinai School of Medicine... , Hengjun Chao, 49, of Tuckahoe, N.Y., was charged with attempted second-degree murder after he allegedly fired a shotgun and hit two men

Why? Presumably revenge for:

In October 2002, Mr. Chao joined Mount Sinai as a research assistant professor. He stayed at Mount Sinai until May 2009, when he received a letter of termination from Dr. Charney for “research misconduct,” according to a lawsuit that Mr. Chao filed against the hospital and Dr. Charney, among other parties, in 2010. He went through an appeals process, and was officially terminated in March 2010.

As you might expect, the retraction watch blog has some more fascinating information on this case. One notable bit is the fact that ORI declined to pursue charges against Dr. Chao.

The Office of Research Integrity (ORI) decided not to pursue findings of research misconduct, according to material filed in the case and mentioned in a judge’s opinion on whether Chao could claim defamation by Mount Sinai. Part of Chao’s defamation claim was based on a letter from former ORI investigator Alan Price calling Mount Sinai’s investigation report “inadequate, seriously flawed and grossly unfair in dealing with Dr. Chao.”

Interesting! The institution goes to the effort of firing the guy and manages to fight off a counter suit and ORI still doesn't have enough to go on? Retraction watch posted the report on the Mount Sinai misconduct investigation [PDF]. It makes the case a little more clear.

To briefly summarize: Dr. Chao first alleged that a postdoc, Dr. Cohn, fabricated research data. An investigation failed to support the charge and Dr. Chao withdrew his complaint. Perhaps (?) as part of that review, Dr. Cohn submitted an allegation that Dr. Chao had directed her to falsify data; this was supported by an email and third-party testimony from a colleague. Mount Sinai mounted an investigation and interviewed a bunch of people with Dr. titles, some of whom are co-authors with Dr. Chao according to PubMed.

The case is said to hinge on credibility of the interviewees. "There was no 'smoking gun' direct evidence....the allegations..represent the classic 'he-said, she-said' dispute". The report notes that only the above mentioned email trail supports any of the allegations with hard evidence.

Ok, so that might be why ORI declined to pursue the case against Dr. Chao.

The panel found him to be "defensive, remarkably ignorant about the details of his protocol and the specifics of his raw data, and cavalier with his selective memory. ... he made several overbroad and speculative allegations of misconduct against Dr. Cohn without any substantiation"

One witness testified that Dr. Chao had said "[Dr. Cohn] is a young scientist [and] doesn't know how the experiments should come out, and I in my heart know how it should be."

This is kind of a classic sign of a PI who creates a lab culture that encourages data faking and fraud, if you ask me. Skip down to the end for more on this.

There are a number of other allegations of a specific nature. Dropping later timepoints of a study because they were counter to the hypothesis. Publishing data that dropped some of the mice for no apparent reason. Defending low-n (2!) data by saying he was never trained in statistics, but his postdoc mentor contradicted this claim. And finally, the committee decided that Dr. Chao's original complaint filed against Dr. Cohn was a retaliatory action stemming from an ongoing dispute over science, authorship, etc.

The final conclusion in the recommendations section deserves special attention:

"[Dr. Chao] promoted a laboratory culture of misconduct and authoritarianism by rewarding results consistent with his theories and berating his staff if the results were inconsistent with his expectations."

This, my friends, is the final frontier. Every time I see an underling in a lab busted for serial faking, I wonder about this. Sure, any lab can be penetrated by a data-faking sleaze. And it is very hard to run a trusting, collaborative scientific environment while still being 100 percent sure of stopping the committed scofflaws. But...but..... I am here to tell you: a lot of data fraud flows from PIs of just exactly this description.

If the PI does it right, their hands are entirely clean. Heck, in some cases they may have no idea whatsoever that they are encouraging their lab to fake data.

But the PI is still the one at fault.

I'd hope that every misconduct investigation against anyone below the PI level looks very hard into the culture that is encouraged and/or perpetrated by the PI of the lab in question.

29 responses so far

University attempts to rescind PhD awarded to alleged cheater

Mar 24 2016 Published under Scientific Misconduct

at RetractionWatch:

After the University of Texas postponed a hearing to determine whether it should revoke a chemist’s PhD, her lawyer has filed a motion to stop the proceedings, and requested the school pay her $95,099 in lawyer fees and expenses.

We have discussed individuals convicted of scientific fraud in the course of doctoral studies before and wondered if a University could or would attempt to retract the doctoral award. Well, looks like this is one of those cases.

The Austin Statesman reports:

In Orr’s case, UT administrators moved to revoke her degree after finding that “scientific misconduct occurred in the production of your dissertation,” according to a letter to Orr from Judith Langlois, senior vice provost and dean of graduate studies.

The dissertation committee concluded that work related to “falsified and misreported data cannot be included in a dissertation and that the remaining work described in the dissertation is insufficient to support the award” of a Ph.D., Langlois wrote. Orr was invited to submit a new thesis summarizing other work to earn a master’s degree.

This is interesting because the justification is not that she is being punished for being a faker; otherwise, why would they invite her to submit a master's thesis? The justification is that ignoring the allegedly falsified work leaves her short of a minimum qualification for the doctorate. Given the flexibility involved in doctoral committee requirements and the sheer scope of data usually involved in a thesis, my eyebrows rise at this. Back to the RetractionWatch piece:

The motion for final summary judgment includes an affidavit from Phillip Magnus, a chemistry professor at UT, who argues that...

her dissertation consisted of two branches of work towards alkaloid natural products and a methodology project to generate novel structures. She characterized about 100 organic compounds in her dissertation. Even without completed syntheses of natural products, her research towards the natural products was significant, and provided her the training to become a skillful and passionate scientist. Being correct or incorrect is part of scientific research. Being correct, or synthesizing a particular molecule are not requirements for passing a course at the University, or obtaining a Ph D degree. Furthermore, the possibility of being wrong is not a justifiable reason to rescind a former student’s degree.

Yeah, this certainly points at a usual sticking point between the RetractionWatch types and me.

It is ESSENTIAL to differentiate between merely being wrong or mistaken (or even sloppy) and intentional fraud.

The Austin Statesman piece goes on to detail how the supervising PI and a subsequent postdoc wanted to build on Dr. Orr's work and she told them to re-do certain work. They didn't, published a paper (with her as author) and it was subsequently retracted for a chemical step being non-reproducible. Was her warning due to knowing she'd faked some results? Or was it due to her gut feeling that it just wasn't as nailed down as some other results and she'd like to see it replicated before publishing? Did her own subsequent work cast doubt on her prior (valid but perhaps mistaken) work? Etc.

23 responses so far

Priority

I am working up a serious antipathy to the notion of scientific priority, spurred most recently by the #ASAPbio conference and the associated fervent promotion of pre-print deposit of scientific manuscripts.

In science, the concept of priority refers to the fact that we think of the first person to [think up, discover, demonstrate, support, prove, find, establish] something as somehow special and deserving of credit.

For example, the first paleontologist to show that this odd collection of fossils over here belonged to a species of Megatyrannoteethdeath* not previously known to us gets a lot of street cred for a new discovery.

Watson and Crick, similarly, are famed for working out the double helical structure of DNA** because they provided the scientific community with convincing amounts of rationale and evidence first.

Etc.

Typically the most special thing about the scientists being respected is that they got there first. Someone else could have stumbled across the right bits of fossil. Many someones were hotly trying to determine how DNA was structured and how it worked.

This is the case for much of modern bioscience. There are typically many someones that have at least thought about a given issue, problem or puzzle. Many who have spent more than just a tiny bit of thought on it. Sometimes multiple scientists (or scientific groups, typically) are independently working on a given idea, concept, biological system, puzzle or whathaveyou.

As in much of life, to the victor go the spoils. Meaning the Nobel prize in some cases. Meaning critical grant funding in other cases- funding that not only pays the salary of the scientists with priority but that goes to support their pursuit of other "first" discoveries. Remember in the Jurassic Park movies how the sober paleontology work was so desperately in need of research funds? That. In addition, the priority of a finding might dictate which junior scientists get Professorial rank jobs, the all-important credit for publication in a desired rank of scientific journal and ultimately the incremental accumulation of citations to that paper. Finally, if there ends up being a commercial value angle, the ones who have this priority may profit from that fact.

It's all very American, right? Get there first, do something someone else has not done and you should profit from that accomplishment. yeeehaw***.

Problem is......****

The pursuit of priority holds back the progress of science in many ways. It keeps people from working on a topic because they figure that some other lab is way ahead of them and will beat them to the punch (science can always use a different take; no two labs come up with the exact same constellation of evidence). It unfairly keeps people from being rewarded for their work (in a multi-year, multi-person, expensive pursuit of the same thing, does it make sense that a 2-week difference in when a manuscript is submitted is all-critical to the credit?). It keeps people from collaborating or sharing their ideas lest someone else swoop in and score the credit by publishing first. It can fuel the inability to replicate findings (what if the group with priority was wrong and nobody else bothered to put in the effort because they couldn't get enough credit?).

These are the things I am pondering as we rush forward with the idea that pre-publication manuscripts should be publicized in a pre-print archive. One of the universally promoted reasons for this need is, in fact, scientific priority. Which has a very, very large downside to it.
__
*I made that Genus up but if anyone wants to use it, feel free

**no, not for being dicks. that came later.

***NSFW

****NSFW

45 responses so far

Story boarding

When you "storyboard" the way a figure or figures for a scientific manuscript should look, or need to look, to make your point, you are on a very slippery slope.

It sets up a situation where you need the data to come out a particular way to fit the story you want to tell.

This leads to all kinds of bad shenanigans. From outright fakery to re-running experiments until you get it to look the way you want. 

Story boarding is for telling fictional stories. 

Science is for telling non-fiction stories. 

Those non-fiction stories are created after the fact. After the data are collected. With no need for storyboarding the narrative in advance.

32 responses so far

Placeholder figures

Honest scientists do not use "placeholder" images when creating manuscript figures. Period. 

See this nonsense from Cell

22 responses so far
