Archive for the 'Science Ethics' category

On Internalizing the Ethical Standards of Scientific Research

Jul 24 2013 Published by under Academics, Science Ethics

There is an entry up on the Scientific American Blog Network's Guest Blog by two of the principals of uBiome. In Crowdfunding and IRBs: The Case of uBiome, Jessica Richman and Zachary Apte address prior criticism of their approach to the treatment of human subjects, in particular the criticism over their failure to obtain approval from an Institutional Review Board (IRB) before enrolling subjects in their study.

In February, there were several posts about the ethics of this choice from a variety of bloggers. (See links from Boundary Layer Physiology (here, here, here) Comradde Physioprof (here, here, here), Drugmonkey (here), Janet Stemwedel (here), Peter Lipson (here).) We greatly appreciate the comments, suggestions and criticisms that were made. Some of the posts threw us off quite a bit as they seemed to be personal attacks rather than reasoned criticisms of our approach.

If you follow the linked blog posts, you will find that when Richman and/or Apte engaged with the arguments, they took a wounded tone. It is a stance they continue to take here.

We thought it was a bit… much, shall we say, to compare us to the Nazis (yes, that happened, read the posts) or to the Tuskegee Experiment because we funded our project without first paying thousands of dollars for IRB approval for a project that had not (and might never have) happened.

I was one of the ones who brought up the Tuskegee Syphilis Experiment. Naturally, this was by way of illustrating why we have modern oversight of research experiments. I did not anticipate that any of the research planned by the uBiome folks would border on that sort of horrible mistreatment of research subjects. Not at all. Nor does mentioning that older history accuse them of any such thing.

PhysioProf made this point very well.

UPDATE 2: The need for IRB review has little to do with researchers’ intentions to behave ethically–nowadays it is rare that we are talking about genuinely evil exploitative abusive shitte–but rather that it is surprisingly complicated to actually implement processes, procedures, and protocols that thoroughly safeguard human subjects’ rights and safety, even with the best of intentions. This inquiry has absolutely nothing to do with whether the uBiome people are nice guys who just want to do cool science with the best of intentions. That is irrelevant.

IRBs are there exactly to ensure that earnest scientists with the best of intentions in their hearts are forced to think through all of the possible ramifications of their proposed human subjects research projects in a thorough and systematic manner before they embark on their research. The evidence we are in possession of as of now suggests strongly that uBiome has not done so.

This is a critical reason why scientists using human or animal subjects need to adhere to the oversight/protection mechanisms. The second critical reason is that the people doing the research are biased. Again, it is not the case that one thinks all scientists are setting out to do horrible Mengele-type stuff in pursuit of their obsessions. No. It is that we are all subject to subtle influences on our thinking. And we, as humans, have a very well-documented propensity to see things our own way, so to speak. Even when we think we are being totally objective and/or professional. By the very nature of this, we are unable to see for ourselves where we are going astray.

Thus, external oversight and review provides a needed check on our own inevitable bias.

We can all grumble about our battles with IRBs (and Institutional Animal Care and Use Committees for animal subject research). The process is far from perfect, so a little bit of criticism is to be expected.

Nevertheless I argue that we should all embrace the oversight process unreservedly and enthusiastically. We should be proud, in fact, that we conduct our research under such professional rules. And we should not operate grudgingly, ever seeking to evade or bypass the IRB/IACUC process.

Richman and Apte of uBiome need to take this final step in understanding. They are not quite there yet:

Before we started our crowdfunding campaign, we consulted with our advisors at QB3, the startup incubator at UCSF, and the lawyers they provided us. We were informed (correctly) that IRBs are only required for federally funded projects, clinical trials, and those who seek publication in peer-reviewed journals. That’s right — projects that don’t want federal money, FDA approval, or to publish in traditional journals require no ethical review at all as far as we know.

Well, that is just plain wrong. Being a professional scientist is what "requires" us to seek oversight of our experiments. I believe I've used the example in the past of someone like me buying a few operant chambers out of University Surplus, setting them up in my garage and buying some rats from the local pet store. I could do this. I could do this without violating any laws. I could dose them* with all sorts of legally obtainable substances, very likely. Sure, no legitimate journal would take my manuscript, but heck, aren't we in an era where the open access wackaloons are advocating self-publishing everything on blogs? I could do that. Or, more perniciously, this could be my little pilot study incubator. Once I figured I was on to something, then I could put the protocols through my local IACUC and do the "real" study and nobody would be the wiser.

Nobody except me, that is. And this is why such a thing is never going to happen. Because I know it is a violation of my professional obligations as I see them.

Back to Richman and Apte's excuse making:

Although we are incubated in the UCSF QB3 Garage, we were told that we could not use UCSF’s IRB process and that we would have to pay thousands of dollars for an external IRB. We didn’t think it made sense (and in fact, we had no money) to pay thousands of dollars on the off chance that our crowdfunding campaign was a success.

and whining

We are happy to say that we have completed IRB review and that our protocol has been approved. The process was extremely time-consuming, and expensive. We went back and forth for months to finally receive approval, exchanging literally hundreds of pages of documents. We spent hundreds of hours on the project.

First, whatever the UCSF QB3 Garage is, it was screwing up if it never considered such issues. Second, crying poverty is no excuse. None whatsoever. Do we really have to examine how many evils could be covered under "we couldn't afford it"? Admittedly, this is a problem for the whole idea of crowd-funded science, but... so what? Solve it. Just like they** had to solve the mechanisms for soliciting the donations in the first place. Third... yeah. Doing things ethically does require some effort. Just like conducting experiments and raising the funds to support them requires effort. Stop with the whining already!

The authors then go on in a slightly defensive tone about the fact that they had to resort to a commercial IRB. I understand this and have heard the criticisms of such pay-for-IRB-oversight entities. From my perspective this is a much, much lesser concern. The absolute key is to obtain some oversight that is independent of the research team. That is first-principles stuff, to my view. They also attempt to launch a discussion of whether novel approaches to IRB oversight and approvals need to be created to deal with citizen-science and crowd-funded projects. I congratulate them on this and totally agree that it needs to be discussed within that community.

What I do not appreciate is their excuse making. Admitting their error and seeking to generate new structures which satisfy the goal of independent oversight for citizen-science in the future is great. But all the prior whinging and excuse making, combined with the hairsplitting over legal requirements, severely undercuts progress. That aspect of their argument is telling their community that the traditional institutional approaches do not apply to them.

This is wrong.

UPDATE: Read uBiome is determined to be a cautionary tale for citizen science over at thebrokenspoke blog.
__
*orally. not sure civilians can get a legal syringe needle anywhere.

**(the global crowdfund 'they')

Additional Reading:

Animals in Research: The conversation begins
Animals in Research: IACUC Oversight

Animals in Research: Guide for the Care and Use of Laboratory Animals

Animals in Research: Mice and Rats and Pigeons...Oh My!
Virtual IACUC: Reduction vs. Refinement
Animals in Research: Unnecessary Duplication

34 responses so far

Grant pressure amps up the scientific cheating?

Well, this is provocative. One James Hicks has a new opinion bit in The Scientist that covers the usual ground about ethics, paper retractions and the like in the sciences. It laments several decades of "Responsible Conduct of Research" training and the apparent utter failure of this training to do anything about scientific misconduct. Dr. Hicks has also come up with a very provocative and truthy graph. From the article, it appears to plot annual data from 1997 to 2011, with the retraction rate (from this Nature article) plotted against the NIH success rate (from Science Insider).

Like I said, it appears truthy. Decreasing grant success is associated with increasing retraction rates. Makes a lot of sense. Desperate times drive the weak to desperate measures.

Of course, the huge caveat is the third factor... time. There has been a lot more attention paid to scientific retractions lately. Nobody knows whether increased retraction rates over time are being observed because fraud is up or because detection is up. It is nearly impossible to ever discover this. Since NIH grant success rates have likewise been plummeting as a function of Fiscal Year, the relationship is confounded.
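To make the confound concrete, here is a minimal simulation (mine, not from Hicks' article; the slopes and noise levels are invented) showing that any two series that merely trend in opposite directions over the same years will correlate strongly whether or not there is any causal link between them:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1997, 2012)  # the annual data points in the graph

# Two invented series that share nothing except a trend over time:
# retraction rates drift up (fraud or detection, who knows)...
retractions = 0.5 + 0.15 * (years - 1997) + rng.normal(0, 0.2, years.size)
# ...while NIH success rates drift down for separate budgetary reasons.
success = 32.0 - 1.1 * (years - 1997) + rng.normal(0, 1.0, years.size)

# Despite having no causal connection, the two correlate strongly;
# the shared time trend does all the work.
r = np.corrcoef(retractions, success)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly negative
```

Plotting one trending series against another without accounting for the time trend is a classic way to manufacture a truthy graph.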

19 responses so far

Scientific Priority, Shooting the Breeze and Integrity

You know how waving the word "integrity" around in a discussion of the quotidian practice of science works on Your Humble Narrator, right Dear Reader?

well, one @dr_beckie mused:

tweeting for networking (baby limits conference attendance), but snr colleague warned against talking to openly about my research (1/2)

and

in such an open forum, is it really so naive to have faith in scientific integrity? (2/2)

a little prodding brought forth this revelation:

@drugmonkeyblog not fear it would be disappointment. Integrity is acknowledging input ie the chat in the pub that gave you initial idea.

As I observed, her Acknowledgement sections must be a wonder to behold. Perhaps the first ever need for Supplemental Acknowledgements?

Now of course I cannot possibly know the full subtlety of dr_beckie's views on scientific priority and the "integrity" of differing thresholds for formal acknowledgement of input from scientific peers. But I do know there is an awful lot of wackaloon delusion out there about these issues.

So let me say this. A failure to appreciate that your sub(sub, sub)field of science is overladen with bushels of extremely smart, well-trained and motivated individuals who are reading the exact same published literature that you are is not evidence of a lack of "integrity" in the field. If you have some brilliant idea or synthesis, odds are very good that someone else has had the exact same idea.

This is why I take an exceptionally skeptical view of claims that so-and-so "stole" the ideas of some other scientist.

I am not saying intellectual theft doesn't occur in science. I am confident it does: someone taking an idea expressed by another, one they had not yet arrived at themselves, and managing to reach the threshold of academic credit (a paper authorship, usually) with that idea without properly crediting the original person. Somewhere below this, though, is a vast, vast territory of normal scientific operation in which "theft" is not really the appropriate word.

Chats in the pub, discussions at meeting presentations and thoughts expressed at lab meeting do not all deserve formal Acknowledgement. If these roots of a scientific paper were accurately recorded, I'm not kidding, the Acknowledgement section would go on for pages. Clearly, this section is not intended to cover all possible casual interactions that led up to the clicks in your brain that crystallized into a scientific Idea. There is a threshold.

I guarantee you that there are almost as many opinions about this precise threshold as there are scientists who are publishing. Multiplied by two, in fact, because I feel confident that any given scientist will have a different standard for crediting some other loser colleague versus for when their own brilliant thoughts should receive proper attribution!

Now we come around to the original Twitt and @dr_beckie's concern that discussing her work online raises questions of scientific integrity when it comes to proper acknowledgement, presumably, of her brilliant 140-character contributions to her subfield. Acknowledgement, one assumes, in published papers down the road.

I am not seeing where there is any specific concern. All that differs here is the potential size of the audience...but recall that really it is only participants in a scientific subfield that matter. So you could have made the observation at a meeting during the question period. Or at your poster to several meeting attendees. Most of the time a normal scientist is not looking around the meeting room trying to gauge the "integrity" of some 200 or 500 scientists before they ask their question or make their observation. Each and every one of these people who hear you might, if the notion strikes them, communicate your brilliance to other scientists who didn't happen to be in attendance for some reason. You have no control over this. Most of us rely, as @dr_beckie would have it, on the normal practices and "integrity" of our fields in these situations.

Furthermore, many of us realize the fundamental reality of science priority and scientific ideas. It doesn't matter who has the idea. What matters is who can conduct the experiments, interpret the data and publish the paper. This is the way science is credited. By. Producing.

Getting into he said/she said over who came up with an idea first? Nearly a complete waste of time. If you are really paranoid about these matters? STFU! Don't talk to anyone about "your" ideas. Fine. Whatever. But don't come whining around about "integrity" when the off-hand remark you made in the pub* seems to be a fundamental building block of a paper that appears a year later with the author lines including one of your drinking buddies!

Let me just note here that I've been around the block a few times myself. There are published papers out there where I got screwed out of an authorship (and even Acknowledgement) in a manner anyone at all would admit showed a lack of integrity. It is going to happen now and again. I deal. I move on. Against this background, a lack of "We'd really like to thank DrugMonkey for his random spewing at the pub one night late at the CPDD annual meeting" kinda pales. It isn't like your appearance in the "Acknowledgement" section carries any sort of weight or would be put on your CV or tenure package, right**?

For full disclosure, I'm sure I've published papers that someone else thinks should have included an Acknowledgement of their brilliant input. I know for a certain fact that a particular colleague of mine is pouty*** about not being an author on one particular paper. This person's position is that s/he expressed the "idea" before I did. Of course I remember the event quite clearly and this person is high as a kite...it was my idea. But guess what? Either of us could very well have said it out loud first. Easily. It was an obvious thing to do. I just said it first, out loud and with that particular person within hearing distance. It would be pathetic for me to claim that the idea was a result of my unique brilliance.

Now as chance would have it I was the one who actually did the study and published it; my colleague did not. I will note that this colleague and I probably talked about dozens of ideas that could have been, later became or may yet become papers back in the day. Hell, we still talk about many ideas that could/may/will become papers.

Back to Twitter.

It strikes me that there is one nasty little implication here, one that I think pervades a lot of the rationale of these OpenScience and WeNeedCreditForBlogging!!!11!! types. They are trying to get credit for "having the idea" when they do not deserve it and should not deserve it. I don't blog about actual science all that much but I've done it now and then. I'm pretty sure in a handful of such posts I've made observations or expressed curiosity about matters that could possibly be addressed in the field by a publication or two. Just like I've expressed observations or curiosity, IRL, in 1) poster sessions, 2) platform presentations, 3) shooting the shit with colleagues, 4) grant reviews, 5) paper reviews, 6) lab meetings and other places.

I don't expect credit. I do not assume as a default that papers that come out later that can be six-degrees-of-Kevin-Bacon connected to my blathering must have stolen my ideas. It is nice to receive an Acknowledgement if the authors believe it is appropriate. Sure. Everyone loves that. But I'm not on the barricades screaming about "integrity" if it doesn't happen.

Life is too short.

And I have science to publish.
__
*for all you know, the drinking buddies are all "Oh shit, I better k3rn my postdoc! Some lame-brain in dr_beckie's group finally thought up the thought we've been working on for six months....we're gonna get scooped!!!"

**Please tell me I'm right.

***This is not infrequently in a context in which this person may be trying to get me to buy the next round, FWIW.

4 responses so far

David Nichols on the impact of his scientific work with recreational drug users

Professor David E. Nichols is a legend for scientists who are interested in the behavioral pharmacology of 3,4-methylenedioxymethamphetamine (MDMA, aka 'Ecstasy'). If you look carefully at many of the earlier papers (and some not-so-early) you will see that people obtained their research supply of this drug from him, as well as much of their background knowledge from publications he has co-authored. He has also worked on a number of other compounds that manipulate dopaminergic and/or serotonergic neurotransmission, some of which are of great interest to those in the recreational user community who seek (ever and anon) new highs, particularly ones that might be similar to their favorite illicit drugs but that may not currently be controlled. Those who are interested in making money supplying the recreational consumer population are particularly interested in the latter, of course.

Professor Nichols has published a recent viewpoint in Nature in which he muses on the uses to which some of his work has been put:

A few weeks ago, a colleague sent me a link to an article in the Wall Street Journal. It described a "laboratory-adept European entrepreneur" and his chief chemist, who were mining the scientific literature to find ideas for new designer drugs — dubbed legal highs. I was particularly disturbed to see my name in the article, and that I had "been especially valuable" to their cause. I subsequently received e-mails saying I should stop my research, and that I was an embarrassment to my university.

I have never considered my research to be dangerous, and in fact hoped one day to develop medicines to help people.

As with most scientists, I have little doubt that this is true. And ultimately, I agree with his observation that

There really is no way to change the way we publish things, although in one case we did decide not to study or publish on a molecule we knew to be very toxic. I guess you could call that self-censure. Although some of my results have been, shall we say, abused, one cannot know where research ultimately will lead. I strive to find positive things, and when my research is used for negative ends it upsets me.

It is unfortunate that Professor Nichols has been put in this position. Undoubtedly John Huffman of JWH-018 fame (one of the more popular synthetic full-agonist cannabinoids sprayed on herbal incense products) feels much the same about his own work. But I suppose this is the risk that is run with many lines of basic and pre-clinical work. Not just recreational drug use but even therapeutic use; after all, off-label prescribing has to start somewhere. And individual health (or do I mean "health") practices such as high-dosing on blueberries or cranberries, various so-called "nutritional supplements", avoiding certain foods, exercise regimes, diets, etc. may be based on no more than a single scientific paper, right?

So we should all feel some bit of Professor Nichols' pain, even if our own work hasn't been mis-used or over-interpreted...yet.

UPDATE: Thoughts from David Kroll over at the cenblog home of Terra Sigillata.

2 responses so far

On negative and positive data fakery

Aug 22 2010 Published by under Ethics, Science Ethics, Scientific Misconduct

A comment over at the Sb blog raises an important point in the Marc Hauser misconduct debacle. Allison wrote:

Um, it would totally suck to be the poor grad student /post doc IN HIS LAB!!!!

He's RUINED them. None of what they thought was true was true; every time they had an experiment that didn't work, they probably junked it, or got terribly discouraged.

This is relevant to the accusation published in the Chronicle of Higher Education:

the experiment was a bust.

But Mr. (sic) Hauser's coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.

The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. "I don't feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder," he wrote.

A graduate student agreed with the research assistant and joined him in pressing Mr. Hauser to allow the results to be checked, the document given to The Chronicle indicates. But Mr. Hauser resisted, repeatedly arguing against having a third researcher code the videotapes and writing that they should simply go with the data as he had already coded it.

So far as we've been able to tell from various reports, the misconduct charges are related to making up positive results. This is common...it sounds quite similar to a whole lot of other scientific misconduct cases: making up "findings" which are in fact not supported by a positive experimental result.

The point about graduate students and postdocs that Allison raised, however, pushes me in another direction. What about when a PI sits on, or disparages, perfectly good data because it does not agree with his or her pet hypothesis? "Are you sure?", the PI asks. "Maybe you better repeat it a few more times...and change the buffer concentrations to this while you are at it; I remember from my days at the bench 20 yrs ago that this works better." For video coding of a behavioral observation study, well, there are all kinds of objections to be raised, starting with the design, moving on to the data collection phase (there is next to nothing that is totally consistent and repeatable in a working animal research vivarium across many days or months) and ending with the data analysis.

Pretty easy to question the results of the new trainee and claim that "Well, Dr. Joe Blow, our last post-doc didn't have any trouble with the model, perhaps you did something wrong".

Is this misconduct? The scientist usually has plenty of work that could have ended up published, but circumstances have decided otherwise. Maybe it just isn't that exciting. Maybe the project got partially scooped and the lab abandoned half a paper's worth of work. Perhaps the results are just odd, the scientist doesn't know what to make of them and cannot sustain the effort to run the eight more control studies needed to make sense of them.

None of this is misconduct in my view. This is the life of a scientist who has limited time and resources and is looking to publish something that is exciting and cool instead of something that appears to be pedestrian or derivative.

I think it would be pretty hard to make a misconduct case over quashed experiments. It is much easier to make the case over positive fraud than over negative fraud.

As you know, this is something that regulatory authorities are trying to address for human clinical trials by requiring the formal registration of each one. Might it bring nonclinical research to a crashing halt if every study had to be reported/recorded in some formal way for public access? Even if this amounted only to making available lab books and raw, unanalyzed data, I can see where this would have a horrible effect on the conduct of research. And really, the very rarity of misconduct does not justify such procedures. But I do wonder if University committees tasked with investigating fraud even bother to consider the negative side of the equation.

I wonder if anyone would ever be convicted of fraud for not publishing a study.

14 responses so far

Harvard announces Hauser was found guilty of misconduct, NIH ORI notified

ScienceInsider has published a letter from Harvard Dean of the Faculty of Arts and Sciences, Michael Smith, addressed to his faculty.

it is with great sadness that I confirm that Professor Marc Hauser was found solely responsible, after a thorough investigation by a faculty investigating committee, for eight instances of scientific misconduct under FAS [Faculty of Arts and Sciences] standards.

The dean notes that their internal inquiry is over but that there are ongoing investigations from the NIH and NSF. So my curiosity turns to Hauser's NIH support, and I took a little stroll over to RePORTER.

From 1997 to 2009 there are nine projects listed under the P51RR000168 award, which is the backbone funding for the New England Primate Research Center, one of the few places in which the highly endangered cotton-top tamarin is maintained for research purposes. The majority of the projects are titled "CONCEPTUAL KNOWLEDGE AND PERCEPTION IN TAMARINS". RePORTER is new and the prior system, CRISP, did not link the amounts, but you can tell from the most recent two years that these are small projects amounting to $50-60K.

Hauser appears to have only had a single R01 "Mechanisms of Vocal Communication" (2003-2006).

Of course we do not know how many applications he may have submitted that were not selected for funding, and ORI considers all submitted applications when judging misconduct and fraud, not just the funded ones. One of the papers that has been retracted was published in 2002, so the timing is certainly such that there could have been bad data included in the application.

The P51 awards offer a slight twist. I'm not totally familiar with the system, but it would not surprise me if this backbone award to the Center, reviewed every 5 years, only specified the process by which the smaller research projects are selected, via non-NIH peer review. Perhaps it is splitting hairs, but it is possible that Hauser's subprojects were not reviewed by the NIH. There may be some loopholes here.

Wandering over to NSF's Fastlane search, I located 10 projects on which Hauser was PI or Co-PI. This is where his big funding has been coming from, apparently. So yup, I bet NSF will have some work to do in evaluating his applications to them as well.

This announcement from the Harvard Dean is just the beginning.

9 responses so far

A Discussion of Wrongdoing and Punishment

I overheard an interesting conversation recently between Associate Professor Tobias Keith and longstanding Academy member, Eunice E Schnizzlezwick Chair and University System Professor William Nelson. It went something like this...

Associate Professor Keith:

Well a man come on the 6 o'clock news
Said somebody's been shot, somebody's been abused
Somebody blew up a building
Somebody stole a car
Somebody got away
Somebody didn't get too far yeah
They didn't get too far

Phew. A whole lot of scientific badness out there. What shall we do folks?

University System Professor Nelson:

Grandpappy told my pappy, back in my day, son
A man had to answer for the wicked that he done
Take all the rope in Texas
Find a tall oak tree, round up all of them bad boys
Hang them high in the street for all the people to see that

Um, kinda severe eh? Well, there are definitely bad consequences of academic fraud and scientific misconduct. After all....

Professors Nelson and Keith:

Justice is the one thing you should always find
You got to saddle up your boys
You got to draw a hard line
When the gun smoke settles we'll sing a victory tune
We'll all meet back at the local saloon
We'll raise up our glasses against evil forces
Singing whiskey for my men, beer for my horses

If we are going to take Janet's Tribe of Science formulation seriously, perhaps we do need to saddle up our boys and girls.

4 responses so far

Why Supposed Ethics Case Studies / Training Scenarios Are Idiotic

Aug 16 2010 Published by under Animals in Research, Ethics, Science Ethics

Dr. Isis has a post up responding to a Protocol Review question, "Noncompliance in survival surgery technique", published in Lab Animal [2010; 39(8)] by Jerald Silverman, DVM. His column is supposed to be in the vein of the practicum case studies that are a traditional part of the discussion of ethical issues. Given X scenario, how should person A act? What is the ethical course of action? Was there a violation? Should it be reported/evaluated/punished?

We see these sorts of examples all the time in the ethics training courses to which we subject our academic trainees, particularly graduate students and postdocs.

These exercises frequently annoy me and this IACUC / Animals-in-Research question is of the classic type. Continue Reading »

7 responses so far

Reading the Coverage of a Retraction: Failure to replicate is not evidence of fraud

Aug 10 2010 Published by under Ethics, Science Ethics, Science Publication

The Twitts are all atwitter today about a case of academic misconduct. As reported in the Boston Globe:

Harvard University psychologist Marc Hauser — a well-known scientist and author of the book “Moral Minds’’ — is taking a year-long leave after a lengthy internal investigation found evidence of scientific misconduct in his laboratory.

The findings have resulted in the retraction of an influential study that he led. “MH accepts responsibility for the error,’’ says the retraction of the study on whether monkeys learn rules, which was published in 2002 in the journal Cognition.

There is an ongoing investigation, as well as other allegations or admissions of scientific misconduct or fraud. More observations from the Nature blog The Great Beyond and NeuroSkeptic. We'll simply have to see how that plays out. I have a few observations on the coverage so far, however. Let's start with the minor ones.

The PubMed page and the ScienceDirect publisher's page have no indication that this paper has been retracted. I did a quick search for retraction, for Hauser and for tamarin on the ScienceDirect site and did not find any evidence of a published retraction notice by this method either. The Boston Globe article is datelined today, but still. You would think that the publishers would have been informed of this situation loooong before it went public and would have the retraction linkage ready to roll.

The accusation in the paper correction by Hauser is, as is traditional, that the trainee faked it. As NeuroSkeptic points out, the overall investigation spans papers published well beyond the trainee in question's time in the lab. Situations like this start posing questions in my mind about the tone and tenor of the lab and how that might influence the actions of a trainee. Not saying misconduct can't be the lone wolf actions of a single bad apple. I'm sure that happens a lot. But I am equally sure that it is possible for a PI to set a tone of, let us say, pressure to produce data that point in a certain direction.

What really bothered me about the Globe coverage was this, however. They associate a statement like this one:

In 2001, in a study in the American Journal of Primatology, Hauser and colleagues reported that they had failed to replicate the results of the previous study. The original paper has never been retracted or corrected.

with

Gordon G. Gallup Jr., a professor of psychology at State University of New York at Albany, questioned the results and requested videotapes that Hauser had made of the experiment.

“When I played the videotapes, there was not a thread of compelling evidence — scientific or otherwise — that any of the tamarins had learned to correctly decipher mirrored information about themselves,’’ Gallup said in an interview.

In 1997, he co-authored a critique of the original paper, and Hauser and a co-author responded with a defense of the work.

What I am worried about in this type of coverage is the conflation of three distinct things: a failure to replicate a study, the absence of evidence (per the retraction blaming a trainee), and scientific debate over the interpretation of data.

The mere failure of one investigation to replicate a prior one is not in and of itself evidence of scientific misconduct. Scientific findings, legitimate ones, can be difficult or impossible to replicate for many reasons, and even if we criticize the credulity, scientific rigor or methods of the original finding, it is not misconduct. (Just so long as the authors report what they did and what they found in a manner consistent with the practices of their fields and the journals in which their data are published.) Even the much-vaunted p<0.05 standard means that we accept that, when there is no real effect, about 5 experiments out of a hundred will cross the threshold by chance, leading us to take chance events for a causal result of our experimental manipulation.
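That 5-in-100 point is easy to verify for yourself. Here is a minimal simulation (mine, nothing to do with the Hauser data) in which there is no real effect at all, yet roughly five percent of experiments come up "significant" anyway:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate two-group experiments in which the null hypothesis is true
# by construction: both groups are drawn from the same distribution,
# so every p < 0.05 result is a false positive.
n_experiments, n_per_group = 10_000, 20
hits = sum(
    stats.ttest_ind(rng.normal(0, 1, n_per_group),
                    rng.normal(0, 1, n_per_group)).pvalue < 0.05
    for _ in range(n_experiments)
)

print(hits / n_experiments)  # ~0.05: about 1 in 20 null experiments "works"
```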

Similarly, debate over what behavioral observation researchers think they see in animal behavior is not in and of itself evidence of misconduct. I mean, sure, if nobody other than the TruBelievers can ever see any smidge of evidence of the Emperor's fine clothes in the videotapes proffered as evidence by a given lab, we might write them off as cranks. But this is, at this point, most obviously a debate about research design, controls, alternative hypotheses and potential confounds in the approach. The quote from Gordon Gallup, used in this context (and in greater isolation in The Great Beyond blog entry), makes it sound as though he is a disinterested party brought in as part of the investigation of scientific fraud. In fact he appears to be a regular scientific critic of Hauser's work. Gallup might be right, but I don't like the way scientific debate is being conflated with scientific misconduct in this way.

Additional reading:
Harvard Magazine
Retraction Watch: including the text of the retraction to be published and a comment on Hauser serving as associate editor at the journal when his paper was handled.
Neuron Culture
John Hawks Weblog
melodye at Child's Play
New York Times
New Scientist
__
Disclaimer: I come from a behaviorist tradition and am more than a little skeptical of the comparative cognition tradition that Hauser inhabits.

24 responses so far

How To Read A Retraction, Gazfuckbajillion!!11!

Aug 04 2010 Published by under Ethics, Science Ethics, Scientific Misconduct

Nice one in PNAS today:

Retraction for “HOS10 encodes an R2R3-type MYB transcription factor essential for cold acclimation in plants” by Jianhua Zhu, Paul E. Verslues, Xianwu Zheng, Byeong-ha Lee, Xiangqiang Zhan, Yuzuki Manabe, Irina Sokolchik, Yanmei Zhu, Chun-Hai Dong, Jian-Kang Zhu, Paul M. Hasegawa, and Ray A. Bressan, which appeared in issue 28, July 12, 2005, of Proc Natl Acad Sci USA (102:9966–9971; first published online July 1, 2005; 10.1073/pnas.0503960102).

The authors wish to note the following: “The locus AT1g35515 that was claimed to be responsible for the cold sensitive phenotype of the HOS10 mutant was misidentified. The likely cause of the error was an inaccurate tail PCR product coupled with the ability of HOS10 mutants to spontaneously revert to wild type, appearing as complemented phenotypes. The SALK alleles of AT1g35515 in ecotype Columbia could not be confirmed by the more reliable necrosis assay. Therefore, the locus responsible for the HOS10 phenotypes reported in ecotype C24 remains unknown. The other data reported were confirmed with the exception of altered expression of AT1g35515, which appears reduced but not to the extent shown in Zhu et al. The authors regrettably retract the article.” [Emphasis added]

Sounds like these fuckers were--at best--too happy to see the complementation support their hypothesis, and thus didn't do appropriate fucken controls, which would have revealed that the rate of complementation was exactly the same as the rate of spontaneous reversion. Or worse, there was some cherry picking of data going on. Also, it is pretty suspicious that--in addition to the bogus complementation data--there was also, coincidentally, altered expression of the same locus that was not confirmable after publication. Again, sounds like some cherry picking may have been going on.
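The missing control described here boils down to a simple two-proportion comparison: is the apparent complementation rate distinguishable from the spontaneous reversion rate? A minimal sketch with entirely invented counts (nothing here comes from the actual paper):

```python
from scipy import stats

# Hypothetical counts: phenotypic "rescue" among complemented mutant
# lines versus among untransformed mutants left to revert on their own.
rescued_complemented, total_complemented = 7, 100
rescued_reverted, total_reverted = 6, 100

table = [
    [rescued_complemented, total_complemented - rescued_complemented],
    [rescued_reverted, total_reverted - rescued_reverted],
]
odds_ratio, p = stats.fisher_exact(table)

# A large p-value means the "complementation" rate is statistically
# indistinguishable from spontaneous reversion: the rescue shows nothing.
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")
```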

Worst case scenario, all this shit was totally cherry picked data within the normal range of variability and ginned up into a totally fake fucken story.

6 responses so far
