Professor fired for misconduct shoots Dean

From the NYT account of the shooting of Dennis Charney:

A former faculty member at the Mount Sinai School of Medicine..., Hengjun Chao, 49, of Tuckahoe, N.Y., was charged with attempted second-degree murder after he allegedly fired a shotgun and hit two men

Why? Presumably revenge for:

In October 2002, Mr. Chao joined Mount Sinai as a research assistant professor. He stayed at Mount Sinai until May 2009, when he received a letter of termination from Dr. Charney for “research misconduct,” according to a lawsuit that Mr. Chao filed against the hospital and Dr. Charney, among other parties, in 2010. He went through an appeals process, and was officially terminated in March 2010.

As you might expect, the Retraction Watch blog has some more fascinating information on this case. One notable bit is the fact that ORI declined to pursue charges against Dr. Chao.

The Office of Research Integrity (ORI) decided not to pursue findings of research misconduct, according to material filed in the case and mentioned in a judge's opinion on whether Chao could claim defamation by Mount Sinai. Part of Chao's defamation claim was based on a letter from former ORI investigator Alan Price calling Mount Sinai's investigation report "inadequate, seriously flawed and grossly unfair in dealing with Dr. Chao."

Interesting! The institution goes to the effort of firing the guy and manages to fight off a countersuit, and ORI still doesn't have enough to go on? Retraction Watch posted the report on the Mount Sinai misconduct investigation [PDF]. It makes the case a little more clear.

To briefly summarize: Dr. Chao first alleged that a postdoc, Dr. Cohn, fabricated research data. An investigation failed to support the charge, and Dr. Chao withdrew his complaint. Perhaps (?) as part of that review, Dr. Cohn submitted an allegation that Dr. Chao had directed her to falsify data; this was supported by an email and by third-party testimony from a colleague. Mount Sinai mounted an investigation and interviewed a bunch of people with Dr. titles, some of whom are co-authors with Dr. Chao according to PubMed.

The case is said to hinge on the credibility of the interviewees. "There was no 'smoking gun' direct evidence... the allegations... represent the classic 'he-said, she-said' dispute." The report notes that only the above-mentioned email trail supports any of the allegations with hard evidence.

Ok, so that might be why ORI declined to pursue the case against Dr. Chao.

The panel found him to be "defensive, remarkably ignorant about the details of his protocol and the specifics of his raw data, and cavalier with his selective memory... he made several overbroad and speculative allegations of misconduct against Dr. Cohn without any substantiation."

One witness testified that Dr. Chao had said "[Dr. Cohn] is a young scientist [and] doesn't know how the experiments should come out, and I in my heart know how it should be."

This is kind of a classic sign of a PI who creates a lab culture that encourages data faking and fraud, if you ask me. Skip down to the end for more on this.

There are a number of other allegations of a specific nature. Dropping later timepoints of a study because they were counter to the hypothesis. Publishing data that dropped some of the mice for no apparent reason. Defending low-n (2!) data by saying he was never trained in statistics, but his postdoc mentor contradicted this claim. And finally, the committee decided that Dr. Chao's original complaint filed against Dr. Cohn was a retaliatory action stemming from an ongoing dispute over science, authorship, etc.

The final conclusion in the recommendations section deserves special attention:

"[Dr. Chao] promoted a laboratory culture of misconduct and authoritarianism by rewarding results consistent with his theories and berating his staff if the results were inconsistent with his expectations."

This, my friends, is the final frontier. Every time I see an underling in a lab busted for serial faking, I wonder about this. Sure, any lab can be penetrated by a data-faking sleaze. And it is very hard to both run a trusting, collaborative scientific environment and still be 100 percent sure of preventing the committed scofflaws. But... but... I am here to tell you: a lot of data fraud flows from PIs of just exactly this description.

If the PI does it right, their hands are entirely clean. Heck, in some cases they may have no idea whatsoever that they are encouraging their lab to fake data.

But the PI is still the one at fault.

I'd hope that every misconduct investigation against anyone below the PI level looks very hard into the culture that is encouraged and/or perpetrated by the PI of the lab in question.

28 responses so far

  • Completely agree that the PI should be investigated even if the misconduct is attributed to a student or postdoc. Setting things up so that students are rewarded for confirming pet theories is asking for someone to fake it.

  • Pinko Punko says:

    I have heard about/sort of seen this culture on the margins. Where everything is about proving the model. It is not good.

  • drugmonkey says:

    I think it is difficult for a lab not to suffer at least a tiny threat of this at any time. How do we keep it in check?

  • Pinko Punko says:

    You are definitely right. What I do is I constantly say "we don't care about the results- we just want what we publish to be correct, to the best of our ability"- I admit that sometimes different outcomes have different excitement levels, but in the end, the last thing we want to do is fool ourselves.

    It can be hard because you want to encourage excitement and the feeling of possibilities- that is what we like about science. So I make sure I always loop back to "we're not in love with the model- it is useful for us to design experiments- we want to test it"

  • eeke says:

    DM - I think that's easy. Be open to being wrong. Too many people (PI's) are focused on fame and glory and easily lose sight of what brought them there in the first place.

  • drugmonkey says:

    How about having to replicate some finding from the literature that "must be" true in the mind of the PI? Less of a threat?

  • GM says:

    Too many people (PI's) are focused on fame and glory and easily lose sight of what brought them there in the first place.

    Are you sure intellectual honesty is what "brought them there in the first place"?

Because that is not my impression, and it is not what one would expect given the state of science today.

    In an environment in which nobody has time to read or think about anything in depth, success and promotion are determined in large part by one's ability to get others excited about his work, not so much by the work itself. Which means that overhyping and overselling of results is rewarded. And unconsciously encouraged. Overhyping is not misconduct, but it does set things on the road towards it.

  • baltogirl says:

    Another major reason to encourage honest results - even those that contradict the model- is that one has to work with the results long after the current postdoc leaves. If they lead me down the garden path by supplying results that are "supposed" to be right (but aren't), it is really damaging not only to the general scientific community but also to future grants, postdoc efforts etc.
    The only way to make sure this doesn't happen is to personally inspect at least SOME of the primary data.
    It is interesting that any grant containing research misconduct has to be repaid to the NIH...this is a significant sum of money for the institution to come up with!

  • eeke says:

    GM - I was thinking in terms of motivation for pursuing a scientific career; science is about discovering facts and truth. If you have to fudge stuff, I don't see the point. If the whole enterprise is about schmoozing and promotion, it's no wonder that we've lost public confidence and respect.

DM - I've had experience with a student failing to replicate published results. It happens frequently. I contacted the original author, who had no explanation. I didn't have the resources to figure out why our results were different, so we moved on to something else. I don't think either the published data or our data were incorrect - it's all DATA. Rather, there had to be something different about the conditions used that we weren't getting. The organism(s) had changed, environmental factors, maybe the fucking water. The end result is that no one follows up on the published work (after 10 years, this still holds true in this particular case); other, robust information percolates to the surface over time. Am I too optimistic?

  • SidVic says:

I'm paranoid about this stuff. I constantly stress to those in my lab that the data is what it is and that it doesn't lie. I've found, weirdly, that many students don't trust their data. I also try to shrug off negative data, even if I'm tempted to blame someone's bad technique, or whatever.

    I also cultivate a culture that regards data that goes in a manuscript as sacred and find myself giving speeches about integrity and reputation frequently (Good lord, i've become lame).

I did have an experience where I suggested an experiment just to beef up a paper. Where is metabolite X coming from? I knew already! Just confirm it, and we have an extra figure. The problem: an inhibitor of the feeder pathway didn't lower X. Maybe the inhibitor wasn't working, wasn't penetrating the tissue. The poor postdoc had to repeat the experiment using different permutations of doses and exotic inhibitors. Finally, desperate, I resorted to looking at the old Merck poster with all the metabolic pathways shown in a single glorious busy mess. Long story short, one popped out that made a LOT of sense, and it worked beautifully.

Lessons learned: 1) I will always be grateful to that postdoc. It would have been much less work to give me the data I so obviously wanted. 2) Negative results are often the most interesting, especially if perplexing. 3) I included the results in the paper. Made it stronger: "we evaluated scenarios 2, 3 and 4, but it was only scenario 5 that proved correct." 4) The metabolic pathway in question had been derived in slugs or something; you never know when your work will solve someone else's problem.

  • odyssey says:

    Hypotheses are disposable. Data are not.

  • Grumble says:

    "How do we keep it in check?"

    By making it a point to keep telling your students and postdocs things like "I don't care what the results are, I just want them to be accurate."

    By taking the attitude that it's OK if my pet theory that I've been nurturing for the last 10 years is shown to be wrong by my students' experiments - because what excites me most is LEARNING something new about the world, not tooting my horn.

    By striving to maintain a rigorous attitude towards everyone's data, whether it's collected in your own lab or others. Constantly question a student's choice to select data - ask what the basis is and reinforce the idea that there must be a principle to selection, and it should be done sparingly. And call out data in other people's work that looks too beautiful, or where the authors don't describe how they selected the data, etc etc etc.

    By making sure you understand the correct statistics to use, and enforce that your students understand and practice best statistics, too.

    And so on. Fight bias consciously, because your only defense against it is awareness.

  • Dave says:

    I'm of the opinion that this kind of PI behavior (I know what the results should look like etc) is much more common than we all acknowledge. I see it everywhere, including in my own colleagues/collaborators.

These PIs like to surround themselves with yes-men who toe the line and play the game. Data integrity plays second fiddle to the need to be 'right' all the time.

  • Eli Rabett says:

The problem is that the data does lie, or at least it dissembles.

  • meshugena313 says:

    Horrible story. The crazy thing about all of this for me is that data that doesn't support your pet model or hypothesis is usually the source of the best discoveries.

    I'm very conscious in running my lab to never speak in terms of "we have to show" or those kind of words that may induce poor scientific practice. I wonder if the new rigor and responsibility push by the NIH and journals will help instill better approaches? Or is it just going to be boilerplate?

  • Noah shroyer says:

    This is an awful story but one that should be disseminated because it highlights the deep rabbit hole one can descend into. I think every person is susceptible to wanting to be right. This is shocking and horrible, but maybe the best we can do is to highlight the cautionary tale of self-delusion.

    You asked how we prevent it; in my lab I try to emphasize two points, verbally and publicly in lab meetings etc:
    1. I don't care what the data shows, I just want to have confidence in the result. I confirm most things 2-3 different ways. Probably overkill and it definitely slows us down, but it's a buffer against publishing the wrong stuff. Which still happens sometimes.
    2. Enjoy being wrong! My favorite days are when there is unequivocal data showing our hypothesis is wrong!! I tell everyone in the lab that it's really GOOD to be wrong, because it means we are asking interesting questions! If we are always right, then we just aren't asking hard enough questions. That usually helps everyone see how important being wrong is.

  • Grumble says:

    @Meshugena: "I wonder if the new rigor and responsibility push by the NIH ... will help instill better approaches?"

    AAAAAAAAAAHAHAHAHAHAHAHAHAHAHA!

    You're joking, right?

  • drugmonkey says:

    Every lab should have that quote about discovery posted.

"Science discoveries most often arise not from 'Eureka!', but rather from 'Huh, that's funny.'"

  • Jonathan Badger says:

    So quoting Asimov is okay? Never sure which sources of science quotes you like -- I remember you dislike Feynman.

  • Ola says:

In my somewhat limited dealings with journals, when I've had occasion to report problem data and attempt to get things retracted, the old "throw the postdoc under a bus" approach is quite common. Accompanying this response, what I find surprising (and can never really understand, based on the way I run my lab) is the amount of churn in other people's labs. A common response appears to be "the person is no longer in my lab," usually accompanied by "back to their home country" and "we're having trouble locating original records."

I tend to have people stick around for a while, and most certainly ensure that they put everything in good order before leaving. In fact, I don't think I've ever had a former student submit a paper after they've left the lab. If it's important enough to warrant publication, deal with it before you leave. If it was not important enough to deal with before you left, it probably shouldn't be published anyway.

Where do these folks get off, trying to cobble together manuscripts from the half-assed datasets of postdocs who went back to China two years ago? Garbage in, garbage out.

  • meshugena313 says:

    @grumble-
While I love me a good joke, in this case I was kinda serious. I'm wondering if even writing the words makes people pay more attention. I'll admit to being more attentive to these issues in our own work and in manuscript reviews recently. I've brought it up in student committee meetings, etc. We'll see.

  • AcademicLurker says:

    Speaking of misconduct, check out the latest shenanigans at Duke. My favorite detail is the bit about making personal purchases on Amazon using grant money.

  • Grumble says:

    Making people jump through hoops is a government specialty, but this particular hoop - forcing people to write some bullshit about how they verify biological resources - is not going to magically increase rigor.

  • EPJ says:

I agree on the issue of hiring people who have the qualities that best fit a given project, or even the lab's overall situation at that particular time. It is those qualities that will yield the end result when influenced by the work environment, with the most stringent "wash" done by the PI(s) before a publication is sent out. But as you know, there are other checks at the journals, and those influence publication quality and timing, completing a cycle that determines the fate of people's futures and the project's pace. Then add the consequences for the institutions and the bulk of what makes them up, which is the main pressure on the PI.

When you consider that an intensely joyful activity and plan is turned into a more punishing one, it is easier to understand why some people lose self-control and go into a fighting mode, or take other approaches to attain a certain result. Sometimes people punch each other physically, or verbally, or psychologically, generating a rather difficult environment for doing anything constructive.

You have to wonder about that kind of behavior, and even politely address it in private (though it could be worthless), or, if done publicly, take your chances at having a statistical effect, rather than going around exerting violence by any means, when you are more of a true pacifist or have other ways to handle it.

    Eventually the truth will show up. Damage shouldn't be the main motivation for science, education, or medicine. That should be a strong goal for self and the 'crowd'.

I wonder why society is in this situation. What can be done by the bulk of people to neutralize it?

  • The Other Dave says:

    100% in agreement. The PI always shares blame for any crap that comes out of the lab.

  • Draino says:

    @Ola
    Maybe what you see as "churn" is just postdocs and students moving on in their training at a decent pace even though journals take a ridiculously long time to complete the process of peer review and publication. By the time you find an error, you better hope that trainee isn't still in the same lab. The data for the paper was probably completed years before it appeared in the latest TOC.

  • JC says:

    I repeatedly emphasize in my lab that I will never punish a student for a mistake. In some cases, I've asked them to give a lab meeting presentation about the carelessness that led to it, but I do not tear them down over it. I'm not strictly a purist, but as someone said above, if we have to cheat or fudge data to progress, what's the point? It ceases to be fun.

  • […] of reminiscent of the recent case where the trainee and lab head had counter claims against each other for a bit of fraudulent data, eh? I wonder if Liang was making a similar assertion to that of Dr. […]
