Alcohol research fraudster Michael W. Miller

Mar 03 2012 Published by under NIH funding, Scientific Misconduct

Per ORI, one Michael W Miller, most recently Chair of Neuroscience and Physiology at SUNY Upstate Medical University, is a data faker.

ORI finds that the Respondent engaged in research misconduct by falsifying and/or fabricating data that were included in grant applications R01 AA07568-18, R01 AA07568-18A1, R01 AA006916-25, and P50 AA017823-01 and in the following:

The "following" included some paper retractions detailed over at the RetractionWatch blog.

One of the interesting things is that this guy published in decidedly normal journals. There was something in the ORI finding about a PNAS submission, but that seemed to be the high IF watermark for Miller. I make special note of this since I am one of those fond of pointing out the positive correlation between journal IF and retractions.

You will be unsurprised that my attention is drawn in this case to the grant support. That year-18 grant renewal application mentioned above? It got converted into an R37 MERIT (10 yrs of non-competing renewals instead of the usual 5) in the A1 version. Which means it was scored very highly, from what I deduce about the R37 process. The P50 is, of course, a Center.

Big monetary commitment for NIAAA and very prestigious for Professor Miller.

Boo, hiss, fraudster bad guy....

Except think about those folks who didn't get something because of this guy. The Chair position was an external hire. The P50 took the place of another one, and it isn't just the PI/PD. Each competing Center that didn't get funded probably also had a handful of Component PIs, who put in a lot of hard work and had a lot of great science. The R37? Well, it probably counts at least double because of the 10-year interval. And of course some other worthy mid-career NIAAA scientist didn't receive this honor for her work.

I'm irritated on behalf of anyone who applied to NIAAA for grant support and didn't get the award during the interval in which NIAAA was supporting this Miller fraudster.

Final note. He was supposedly ratted out by someone in his lab. Since the offense seemed to be making up bar graphs rather than the all-too-typical duplicated gel/blot/image, there really would have been no other way to nail him. So the reviewers really can't be blamed for missing anything.

44 responses so far

  • Clay Clark says:

    I agree with you DM. It's unfortunate that others didn't get funded because his grants were. I'm curious about how often this type of fraud occurs relative to total funds expended at NIH. Any idea? Any thoughts on how we can be more proactive?

  • drugmonkey says:

    I don't know how we could possibly know. The ORI findings are a mere drop in the bucket. So from that perspective, not a large fraction at all.

    Of course, there simply must be fraud that goes undetected. The amount is impossible to estimate.

  • Beaker says:

    The frustrating part is that the punishment Miller received (retire in shame, and don't apply for any more grants) seems insufficient to right the wrongs. The money awarded to him cannot be recouped by the taxpayers.

  • Clay Clark says:

    This seems to be a topic that the NIH statisticians would be involved in - estimating the amount of $$ spent on fraudulent research.

    Each time I've sat on study section we start with the instructions "if you suspect fraud.....", but aside from being a collaborator or direct competitor (i.e. doing the research myself), I don't know how one would spot this.

  • As far as I am aware, the Federal Government is free to bring both civil and criminal cases against him, to recover funds and to punish him for committing crimes (which lying in grant applications to a federal agency is).

    BTW, in relation to the MERIT awards, as far as I am aware, the non-competing duration can be as short as five years and as long as ten years, depending on how the PI fares on Council review of progress and plans during the initial five-year award.

  • anon says:

    It should be routine for federal agencies to press criminal charges against those who deliberately falsify data. This guy is barely getting a slap on the wrist. The "punishment" amounts to a temporary "voluntary" suspension from the NIH club, and "To have his research supervised for a period of two (2) years immediately following the one (1) year period of exclusion.." He is not even losing his fucking job!!!

  • BTW, dude, how did you hear that one of his lab peeps ratted on him?

  • Beaker says:

    The info about the whistleblowing/ratting came from here:

    Questions about Miller’s research first surfaced in 2009 when people who worked in his lab came forward with allegations of scientific misconduct, said Steven Goodman, Upstate’s vice president for research. He said Upstate launched a lengthy investigation which included interviews with many individuals and a review of manuscripts, computer files and lab notes, he said.

  • Grumble says:

    "It should be routine for federal agencies to press criminal charges against those who deliberately falsify data." Yeah, but I'm guessing the standard of evidence is higher for criminal prosecution than for these sorts of "slaps on the wrist."

    Question: would the realistic threat of jail time deter scientific fraud? Presumably yes. I think it's all too easy to cheat when it's essentially just the honor system that keeps everyone in line.

  • drugmonkey says:

    What is the point of an R37 if not the extension of the non competing interval?

  • drugmonkey says:

    anon- comments about the blogs seem to indicate he no longer appears listed on the Uni website. Are you sure he didn't get canned over this?

  • (1) You have to apply to Council for the extension to follow the first five-year segment, but you are not "competing" and you don't go through regular study-section peer review: it is purely administrative. But depending on how you have done so far and what you are planning to do in the next segment, outcomes can range from "write a competing renewal" to "here's another five years". You know you can look this shitte uppe on the fucken NIH Web site, right?

(2) In the Syracuse, NY, newspaper article, it says that he resigned his position at Upstate, and also at the Veterans Administration hospital.

  • Oh, and another thing. It is certainly plausible that the government negotiates deals where fraudsters roll over and don't fight the HHS administrative finding of fraud and the administrative punishment in exchange for not being civilly or criminally charged.

  • drugmonkey says:

    I have never heard of anyone failing to extend their R37 is all...

  • I haven't either, but I've definitely heard of different length extensions, with the most common only a two-year second segment for a total of seven years.

  • anon says:

    DM - My earlier post was a knee-jerk response to the ORI findings and subsequent action (which is nothing, really). Beaker posted a link to a news article that stated that Michael Miller had retired. I don't know whether this means he was fired, forced to resign, or resigned on his own anticipating this sort of outcome. Still, it seems like a lot of these cheaters get off easy.

  • DJMH says:

    "Specifically, Dr. Miller: Fabricated bar graphs in Supplemental Figure 2 and related text in the PNAS manuscript to show that in select layers of the cortex, ethanol induced neuronal death occurred in post-natal day 10 (P10) mice;"

    What I'm taking from this is that if I'm ~7 months pregnant, drinking alcohol will NOT induce neuronal death in the cortex of my unborn child. Because otherwise he wouldn't have had to fake the data. Yes!!!

  • MudraFinger says:

    My own surveys of thousands of predominantly NIH funded scientists say, when asked point blank (anonymously, natch) whether they'd faked data in recent past - somewhere between 1-2% will admit it outright. No way to know, of course, how many don't admit it to an inquisitor, nor more importantly, to themselves. These figures jibe with similar surveys conducted in other countries in Europe and Australia (http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005738). Higher proportions will admit to misappropriation of the words or ideas of others - upwards of 7%. Scale that up, and in a time of fiscal austerity (aka falling off the "funding cliff"), there's clearly good science that is not getting funded because of this chicanery.

    Still - how much more common is the phenomenon of positive spin or hyping findings in order to get that paper published in a higher IF journal, and to land that ever more vital grant renewal? Unfortunately, I've not yet asked that question of a systematic sample of scientists.

  • Teaching Assistant says:

    @ Clay Clark. "Any thoughts on how we can be more proactive?

    I think if you want to catch people like Miller you need a morally upstanding researcher in the know, inside his lab.

    We need to start with the undergrads. From my experience with hard science and engineering undergrads, they *learn* they must cheat in their very competitive programs.

    At my university, grad students must also do an online tutorial on academic and research misconduct. Which is followed by a non-trivial quiz. The literature says that this does reduce incidences of misconduct.

    Finally, going beyond Supplemental Material sections, I think all data (both primary and processed) and methods should be on the researcher's website free for anyone to peruse, all in one spot with no limitations.

People like Miller seem to be driven by more than a publish-or-perish attitude and the need for sufficient funds. He was going for glory.

  • drugmonkey says:

Oh, I think the need for "sufficient funds" is plenty to explain his behavior. At least, the "Where was the first slippery step" part. As to the why part, why he crossed the line when others do not....I don't know. To date I haven't reached a point where the thought of fakery has occurred to me as an option of any kind. I don't really get what leads up to this. Might be an interesting investigative journalism assignment to get to know the fakers, get them to describe their path. Were they the types that thought college cheating was okay? Did they start by throwing out an outlier or two? Or did they just all of a sudden make a blatant alteration to the bar graphs? Do they think the outcome was "right" and they just didn't "have the time" to prove it experimentally? Or do they know full well they are just making things up at the time?

  • drugmonkey says:

    But that's Sol Motherfuckin' Snyder!

(but yeah, there ARE those extramural types who might as well be Intramural....who only bother to apply one round before they "need" to)

  • FYI, R37's can go much longer than 10 years.

    No, they cannot. If you look carefully at Snyder's grant history you linked to--but obviously only looked at superficially--you will see that he had to submit, have peer-reviewed by study section, and then have awarded competing renewals along the way that were then individually recommended for conversion to R37s by NIMH's Council. Each of these is a separate MERIT award.

  • MudraFinger says:

    DM - "As to the why part, why he crossed the line when others do not....I don't know."
    Wrong question. I think Teaching Assistant has the answer as to why. I think the learning that younger scientists in training get from watching their mentors and PIs hyping findings to get/keep funding & get high profile pubs is sufficient to corrupt the enterprise pretty thoroughly over a couple generations.

    Plus, I guess I just don't believe in slippery slopes all that much. I don't think smoking cigarettes causally leads to doing smack either, but that's more your line of work, not mine. I can show in my own data a substantially higher prevalence of self-reported data fakery among those who are willing to admit to "lesser sins," than among those who report only being angels. But I don't think the association is causal in the sense that the lesser sins eventually led to the data fakery. I also don't think that data fakers would be very good retrospective respondents about such topics - humans being pretty good at revisionist history and all.

    You claim you've never been tempted to fake data outright. But being an honest monkey, I suspect you might admit to being tempted to hype a finding or three?

  • You claim you've never been tempted to fake data outright. But being an honest monkey, I suspect you might admit to being tempted to hype a finding or three?

    What the fucke does hyping a fucken finding have to with faking motherfucken data????

  • MudraFinger says:

    I'd like to see a "Mythbusters" style funding system - one where the researcher's funding doesn't depend so much on whether they "bust" or "confirm" a myth (which is just a stand in for a hypothesis). I think we'd see a whole lot less hyping of findings in that kind of environment. Maybe we'd get more novel research as well. Models like those of HHMI and the Wellcome Trust might be a step in that direction - though doubtful such models could be scaled to the enormity of the current NIH funded workforce.
    http://onlinelibrary.wiley.com/doi/10.1111/j.1756-2171.2011.00140.x/abstract

  • drugmonkey says:

    "hype"? Moi? Never!

  • CATHERINE says:

    This whole thing is fascinating. I think some personalities are hardwired to hype. By which I mean are eternally overly optimistic about their results. I'm always the opposite. Allergic to hype and feel bad when I do it. But it *must* be done in this day and age to keep a lab on the map. I like to think of it as self promotion. But it is hype. And I do it but hate it.

Another issue is that the more senior "mouths" to feed in a lab, the more pressure on the PI to keep them working. This situation persists beyond tenure, senior status, etc.

  • Clay Clark says:

"Another issue is that the more senior "mouths" to feed in a lab, the more pressure on the PI to keep them working. This situation persists beyond tenure, senior status, etc."

    And - one of the senior "mouths" is that of the PI through summer salary. Might provide motivation for some to try to keep an edge over the competition.

  • I think we'd see a whole lot less hyping of findings in that kind of environment.

    What the fucken fucke is your problem, shittebagge? This post is about goddamn motherfucken FRAUD. This motherfucker made up bar graphs. "Hyping of findings" has nothing whatsoever to do with FRAUD. Just because you are an unimaginative boring douche with no ability to explain the excitement of your research doesn't mean that people who are good at it are doing something wrong.

  • drugmonkey says:

    MF-

    Why don't you believe in slippery slopes? You think the science world seems like a good route for people who are cheaterpantses from day one? Seems unlikely to me.....

  • Models like those of HHMI and the Wellcome Trust might be a step in that direction - though doubtful such models could be scaled to the enormity of the current NIH funded workforce.
    http://onlinelibrary.wiley.com/doi/10.1111/j.1756-2171.2011.00140.x/abstract

    The findings of that article are completely fucken worthless. The HHMI *picks* for funding the goddamn motherfuckers that have already established that they are more creative and productive than the norm. That they continue to be so after shifting over to HHMI funding doesn't indicate jacke diddly fucken shitte about whether that funding model encourages creativity and productivity.

  • Beaker says:

    One of the issues raised here is distinguishing between "putting your best forward" and outright fraud. In most professional endeavours, it is perfectly acceptable to mention the positive data supporting your hypothesis while ignoring/belittling the data against the idea.

    This is not the same as making shit up.

    We can discuss further whether it is OK to selectively ignore data that go against your hypothesis, but this behavior in itself does not meet the standard for fraud, as practiced by Miller.

And yes--it's a slippery slope. If a particular statistical test indicates significance while another test does not, isn't picking the one that produces the desired result simply an example of "putting your best foot forward?" Yes, but the data are one thing and the analysis and interpretation of those data are something else entirely. If you fudge a bar graph (a fudge bar?), you cross the line of integrity. Period.

  • MudraFinger says:

    What the fucke does hyping a fucken finding have to with faking motherfucken data????

    This post is about goddamn motherfucken FRAUD. This motherfucker made up bar graphs. "Hyping of findings" has nothing whatsoever to do with FRAUD.

    I was responding to DM's expression of concern about wasted resources and opportunity costs for legit scientists that result from fraud. My own read on the situation is that hyping is not so unrelated to fraud as you claim. Fraud is saying something is so that isn't with the intent to mislead others. Hyping is saying something is so that may or may not be so, arguably with the intent to lead others to the conclusion to which the hypster wishes them to come. The differences between the two then are largely a matter of degree. Of the two, hyping is the more insidious because it is so much more prevalent, and so much easier to overlook when it occurs. Certainly leads to a great degree of wasted resources and opportunity costs.

    http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020124

    http://jama.ama-assn.org/content/294/2/218.short

  • MudraFinger says:

    Why don't you believe in slippery slopes? You think the science world seems like a good route for people who are cheaterpantses from day one?

    No, it's not that. Maybe I just don't like the metaphor. A slippery slope to me suggests that everyone who starts down a particular ill-conceived path is destined to end up a total cheaterpants, which seems unreasonable to me. I think that many who make mis-steps get corrected, or self-correct, and move on, never to go prodigal son again. Pair that with my belief that NOBODY can predict ahead of time with any degree of accuracy who will go the full-monty with data fakery, and one is led to look elsewhere for models to explain undesirable behavior in science, of all degrees of severity.

    Until someone comes up with a better prediction model and the mechanisms to weed them out of the system early, there will always be individuals who are willing to fake the data to get what they need/want.

    I also believe that there is a latent potential in some otherwise angelic individuals to go there, if and when the environmental conditions put them in a situation where they feel they have no other choices. Robert Merton talked about it in terms of "goal blockage," and the resulting behavior in response to such blockage as "innovation."

  • drugmonkey says:

    saying something is so that may or may not be so, arguably with the intent to lead others to the conclusion to which the hypster wishes them to come.

    yeah....see this is pretty much science. every thing is supposed to be treated skeptically and with an implied (or stated) "until some other data comes along to prove our inference wrong". right? everything "may or may not be so" and we only provisionally accept the latest evidence pending further results

  • Sid says:

I suspect outright fraud with preliminary results is common. Unless one is an idiot and reuses a published gel, or is very sloppy, and/or gets the dime dropped on them, it is very hard to get caught. Meanwhile, the incentives for this kind of behavior have greatly increased in the last few decades (i.e., soft-money positions, incentive and salary-support pay, etc.). And another thing, it may just be me, but my data always look messy, with contradictions (that's biology). I have begun to get almost suspicious of those groups whose every publication has data that look "too good."
Sad commentary.

  • Grumble says:

    "Models like those of HHMI and the Wellcome Trust might be a step in that direction - though doubtful such models could be scaled to the enormity of the current NIH funded workforce."

    If anything, HHMI investigators are exactly those who know how to hype their data the most vigorously. They might also be incredibly smart, but that doesn't get you anywhere in any social endeavor without the ability to sell your ideas. Hype is just not the problem here.

    I also disagree that an HHMI or Wellcome-style system can't work on an NIH scale. Of course it can. A portion of funding - significantly more than a few MERIT awards for the likes of Sol Snyder - needs to be dedicated to labs that have been productive in the past 5-10 years, based solely on that fact. Voila.

    The advantage wouldn't be less hype. It might result in less fraud, however, if "need to get funding" is a primary reason why some fraudsters do it. It might also make for more productive mid-career scientists, as they wouldn't have to write 10 grants to get 1.

  • anon says:

    That article is dated 2006 - for an award for earlier work. The fraudulent papers are from 2007 and later. Do you think once a fraudster always a fraudster?

  • DrugMonkey says:

    I think that we do not know for sure. Yes, I do think that confirmation of cheating now increases the odds of there having been prior cheating. I am not sure we have any data on that though.

    But I am also on record saying I don't think people *set out* in this business to cheat, rather that a series of competitive contingencies lead them down the slippery slope. That starting step? Something I'm very curious about.

    Perhaps in this case it was all the earlier successes (and perhaps being hired as a rainmaker chair) making this dude think he had to perform?

  • whimple says:

    I wonder if the first step is idiot reviewers asking for too much certainty in an uncertain business.

[...] covered Alcohol research fraudster Michael W. Miller in a prior post. This was following a finding of the ORI of the NIH and a note on retractionwatch. [...]
