Archive for the 'Ethics' category

The "absolute monarch" of Penn State University

Nov 09 2011 Published by under #FWDAOTI, Education, Ethics

If you want to understand in full the child molestation case that has rocked Penn State University, you need to read PhysioProf's take on the matter.

Joe Paterno–who has been the head coach for 46 years–is the absolute monarch of that program, with absolute power. Regardless of whether he satisfied the bare minimum of legal requirements to report what he knew about the rape of children to his “superiors”–which, as absolute monarch at Penn State, he really had none…

emphasis added, but not really needed.

Go Read.

2 responses so far

What does a retracted paper mean?

Crossposted from Scienceblogs.

I've been having a little Twitt discussion with Retraction Watch honcho @ivanoransky over a recent post in which they discuss whether a failure to replicate a result justifies a retraction.

Now, Ivan Oransky seemed to take great umbrage at my suggestion in a comment that intentionally conflating a failure to replicate with intentional fraud was a dereliction of their duty to science. Per usual, we boiled it down to a fundamental disagreement over connotation: what it means to the average person to see that a paper has been retracted.

I rely upon my usual solution, DearReader. Select all the choices that apply to what you take a retraction to mean, or that you think should induce a retraction.

Direct link to the poll in case you can't see it.

My position can be found after the jump.... Continue Reading »

16 responses so far

Letters of Rec

Jun 02 2011 Published by under Careerism, Day in the life of DrugMonkey, Ethics

If you write a letter of recommendation for an undergrad in your class whom you barely know, you are just diluting the value of such letters. Those who are deserving have their efforts obscured by this practice.

Discuss.

41 responses so far

On negative and positive data fakery

Aug 22 2010 Published by under Ethics, Science Ethics, Scientific Misconduct

A comment over at the Sb blog raises an important point in the Marc Hauser misconduct debacle. Allison wrote:

Um, it would totally suck to be the poor grad student /post doc IN HIS LAB!!!!

He's RUINED them. None of what they thought was true was true; every time they had an experiment that didn't work, they probably junked it, or got terribly discouraged.

This is relevant to the accusation published in the Chronicle of Higher Education:

the experiment was a bust.

But Mr. (sic) Hauser's coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.

The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. "I don't feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder," he wrote.

A graduate student agreed with the research assistant and joined him in pressing Mr. Hauser to allow the results to be checked, the document given to The Chronicle indicates. But Mr. Hauser resisted, repeatedly arguing against having a third researcher code the videotapes and writing that they should simply go with the data as he had already coded it.

So far as we've been able to tell from various reports, the misconduct charges are related to making up positive results. This is common; it sounds quite similar to a whole lot of other scientific misconduct cases: making up "findings" that are not in fact supported by a positive experimental result.

The point about graduate students and postdocs that Allison raised, however, pushes me in another direction. What about when a PI sits on, or disparages, perfectly good data because it does not agree with his or her pet hypothesis? "Are you sure?", the PI asks, "Maybe you better repeat it a few more times...and change the buffer concentrations to this while you are at it, I remember from my days at the bench 20 yrs ago that this works better". For video coding of a behavioral observation study, well, there are all kinds of objections to be raised: starting with the design, moving on to the data collection phase (there is next to nothing that is totally consistent and repeatable in a working animal research vivarium across many days or months) and ending with the data analysis.
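As an aside on that data analysis step: disagreement between coders is something that can be quantified rather than merely argued about. Here is a minimal sketch using Cohen's kappa, one standard inter-rater agreement statistic; the trial labels are invented for illustration and nothing in it comes from the Hauser case itself.

```python
# A minimal sketch (labels invented): quantifying how much two video coders
# agree beyond what chance alone would produce.
from sklearn.metrics import cohen_kappa_score

# Each entry is one coder's judgment of the same videotaped trial.
coder_1 = ["notice", "notice", "ignore", "notice", "ignore", "notice"]
coder_2 = ["notice", "ignore", "ignore", "notice", "ignore", "ignore"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"kappa = {kappa:.2f}")  # low agreement is exactly the skew a third coder should check
```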

Pretty easy to question the results of the new trainee and claim that "Well, Dr. Joe Blow, our last post-doc didn't have any trouble with the model, perhaps you did something wrong".

Is this misconduct? The scientist usually has plenty of work that could have ended up published, but circumstances have decided otherwise. Maybe it just isn't that exciting. Maybe the project got partially scooped and the lab abandoned half a paper's worth of work. Perhaps the results are just odd; the scientist doesn't know what to make of them and cannot sustain the effort to run the eight more control studies needed to make sense of them.

None of this is misconduct in my view. This is the life of a scientist who has limited time and resources and is looking to publish something that is exciting and cool instead of something that appears to be pedestrian or derivative.

I think it would be pretty hard to make a misconduct case over quashing experiments. It is much easier to make the case over positive fraud than over negative fraud.

As you know, this is something that regulatory authorities are trying to address for human clinical trials by requiring the formal registration of each one. Might it bring nonclinical research to a crashing halt if every study had to be reported or recorded in some formal way for public access? Even if this amounted only to making available lab books and raw, unanalyzed data, I can see where this would have a horrible effect on the conduct of research. And really, the rarity of misconduct does not justify such procedures. But I do wonder if University committees tasked with investigating fraud even bother to consider the negative side of the equation.

I wonder if anyone would ever be convicted of fraud for not publishing a study.

14 responses so far

Why Supposed Ethics Case Studies / Training Scenarios Are Idiotic

Aug 16 2010 Published by under Animals in Research, Ethics, Science Ethics

Dr. Isis has a post up responding to a Protocol Review question, "Noncompliance in survival surgery technique," published in Lab Animal [2010; 39(8)] by Jerald Silverman, DVM. His column is supposed to be in the vein of the practicum case studies that are a traditional part of the discussion of ethical issues. Given X scenario, how should person A act? What is the ethical course of action? Was there a violation? Should it be reported/evaluated/punished?

We see these sorts of examples all the time in the ethics training courses to which we subject our academic trainees, particularly graduate students and postdocs.

These exercises frequently annoy me and this IACUC / Animals-in-Research question is of the classic type. Continue Reading »

7 responses so far

Reading the Coverage of a Retraction: Failure to replicate is not evidence of fraud

Aug 10 2010 Published by under Ethics, Science Ethics, Science Publication

The Twitts are all atwitter today about a case of academic misconduct. As reported in the Boston Globe:

Harvard University psychologist Marc Hauser — a well-known scientist and author of the book “Moral Minds’’ — is taking a year-long leave after a lengthy internal investigation found evidence of scientific misconduct in his laboratory.

The findings have resulted in the retraction of an influential study that he led. “MH accepts responsibility for the error,’’ says the retraction of the study on whether monkeys learn rules, which was published in 2002 in the journal Cognition.

There is an ongoing investigation, as well as other allegations or admissions of scientific misconduct or fraud. There are more observations at the Nature blog The Great Beyond and at NeuroSkeptic. We'll simply have to see how that plays out. I have a few observations on the coverage so far, however. Let's start with the minor ones.

The PubMed page and the ScienceDirect publisher's page give no indication that this paper has been retracted. I did a quick search for retraction, for Hauser and for tamarin on the ScienceDirect site and did not find any evidence of a published retraction notice by this method either. The Boston Globe article is datelined today, but still. You would think that the publishers would have been informed of this situation loooong before it went public and would have the retraction linkage ready to roll.
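For what it's worth, that kind of check can be scripted. A hedged sketch using the NCBI E-utilities esummary endpoint; the PMID below is a placeholder (not the Hauser paper's), and I am assuming the "Retracted Publication" publication type PubMed attaches to papers once a retraction is indexed.

```python
# A minimal sketch (placeholder PMID): asking PubMed whether a paper
# carries the "Retracted Publication" publication type.
import json
import urllib.request

pmid = "12345678"  # placeholder, not the actual paper
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
       f"?db=pubmed&id={pmid}&retmode=json")

with urllib.request.urlopen(url) as resp:
    record = json.load(resp)["result"][pmid]

pubtypes = record.get("pubtype", [])
print("retracted" if "Retracted Publication" in pubtypes else "no retraction flag yet")
```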

The accusation in the paper correction by Hauser is, as is traditional, that the trainee faked it. As NeuroSkeptic points out, the overall investigation spans papers published well beyond the time the trainee in question spent in the lab. Situations like this start posing questions in my mind about the tone and tenor of the lab and how that might influence the actions of a trainee. Not saying misconduct can't be the lone-wolf actions of a single bad apple; I'm sure that happens a lot. But I am equally sure that it is possible for a PI to set a tone of, let us say, pressure to produce data that point in a certain direction.

What really bothered me about the Globe coverage was this, however. They associate a statement like this one:

In 2001, in a study in the American Journal of Primatology, Hauser and colleagues reported that they had failed to replicate the results of the previous study. The original paper has never been retracted or corrected.

with

Gordon G. Gallup Jr., a professor of psychology at State University of New York at Albany, questioned the results and requested videotapes that Hauser had made of the experiment.

“When I played the videotapes, there was not a thread of compelling evidence — scientific or otherwise — that any of the tamarins had learned to correctly decipher mirrored information about themselves,’’ Gallup said in an interview.

In 1997, he co-authored a critique of the original paper, and Hauser and a co-author responded with a defense of the work.

What worries me about this type of coverage is the conflation of three distinct things: a failure to replicate a study, the absence of evidence (per the retraction blaming a trainee), and scientific debate over the interpretation of data.

The mere failure of an investigation to replicate a prior one is not, in and of itself, evidence of scientific misconduct. Legitimate scientific findings can be difficult or impossible to replicate for many reasons, and even if we criticize the credulity, scientific rigor or methods of the original finding, it is not misconduct. (Just so long as the authors report what they did and what they found in a manner consistent with the practices of their fields and the journals in which their data are published.) Even the much vaunted p<0.05 standard means that we accept that, in 5 out of every 100 experiments in which there is no true effect, we will mistake chance events for a causal chain resulting from our experimental manipulation.
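You can watch that baseline false-positive rate emerge in a minimal simulation; this is a sketch under my own assumptions (arbitrary group sizes, no real effect anywhere), not anything from the papers at issue.

```python
# A minimal sketch: both groups are drawn from the SAME distribution,
# so every p < 0.05 "finding" is a false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    a = rng.normal(0.0, 1.0, size=30)  # no true effect anywhere
    b = rng.normal(0.0, 1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(false_positives / n_experiments)  # hovers around 0.05, as the threshold promises
```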

Similarly, debates over what behavioral observation researchers think they see in animal behavior are not in and of themselves evidence of misconduct. I mean, sure, if nobody other than the TruBelievers can ever see any smidge of evidence of the Emperor's fine clothes in the videotapes proffered as evidence by a given lab, we might write them off as cranks. But this is, at this point, most obviously a debate about research design, controls, alternative hypotheses and potential confounds in the approach. Used in this context, the quote from Gordon Gallup taken in greater isolation (as in The Great Beyond blog entry) makes it sound as though he is a disinterested party brought in as part of the investigation of scientific fraud. In fact he appears to be a regular scientific critic of Hauser's work. Gallup might be right, but I don't like the way scientific debate is being conflated with scientific misconduct in this way.

Additional reading:
Harvard Magazine
Retraction Watch: including the text of the retraction to be published and a comment on Hauser serving as associate editor at the journal when his paper was handled.
Neuron Culture
John Hawks Weblog
melodye at Child's Play
New York Times
New Scientist
__
Disclaimer: I come from a behaviorist tradition and am more than a little skeptical of the comparative cognition tradition that Hauser inhabits.

24 responses so far

How To Read A Retraction, Gazfuckbajillion!!11!

Aug 04 2010 Published by under Ethics, Science Ethics, Scientific Misconduct

Nice one in PNAS today:

Retraction for “HOS10 encodes an R2R3-type MYB transcription factor essential for cold acclimation in plants” by Jianhua Zhu, Paul E. Verslues, Xianwu Zheng, Byeong-ha Lee, Xiangqiang Zhan, Yuzuki Manabe, Irina Sokolchik, Yanmei Zhu, Chun-Hai Dong, Jian-Kang Zhu, Paul M. Hasegawa, and Ray A. Bressan, which appeared in issue 28, July 12, 2005, of Proc Natl Acad Sci USA (102:9966–9971; first published online July 1, 2005; 10.1073/pnas.0503960102).

The authors wish to note the following: “The locus AT1g35515 that was claimed to be responsible for the cold sensitive phenotype of the HOS10 mutant was misidentified. The likely cause of the error was an inaccurate tail PCR product coupled with the ability of HOS10 mutants to spontaneously revert to wild type, appearing as complemented phenotypes. The SALK alleles of AT1g35515 in ecotype Columbia could not be confirmed by the more reliable necrosis assay. Therefore, the locus responsible for the HOS10 phenotypes reported in ecotype C24 remains unknown. The other data reported were confirmed with the exception of altered expression of AT1g35515, which appears reduced but not to the extent shown in Zhu et al. The authors regrettably retract the article.” [Emphasis added]

Sounds like these fuckers were--at best--too happy to see the complementation support their hypothesis, and thus didn't do appropriate fucken controls, which would have revealed that the rate of complementation was exactly the same as the rate of spontaneous reversion. Or worse, there was some cherry picking of data going on. Also, it is pretty suspicious that--in addition to the bogus complementation data--there was also, coincidentally, altered expression of the same locus that was not confirmable after publication. Again, sounds like some cherry picking may have been going on.
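That missing control is, statistically, just a two-rate comparison. A minimal sketch with invented counts (nothing from the actual paper): if the apparent complementation rate is indistinguishable from the spontaneous reversion rate, the "rescue" means nothing.

```python
# A minimal sketch (counts invented): comparing the apparent complementation
# rate against the spontaneous reversion rate with Fisher's exact test.
from scipy.stats import fisher_exact

# columns: [phenotype-rescued, not-rescued] plants
complemented_line = [12, 88]   # hypothetical transformed mutants
untransformed_line = [11, 89]  # hypothetical spontaneous revertants

_, p = fisher_exact([complemented_line, untransformed_line])
print(f"p = {p:.2f}")  # a large p: "complementation" no better than spontaneous reversion
```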

Worst case scenario, all this shit was totally cherry picked data within the normal range of variability and ginned up into a totally fake fucken story.

6 responses so far

On Conflicts of Interest

Jul 13 2010 Published by under #FWDAOTI, Conduct of Science, Ethics

There's a quote that will show up on the rotator over there on the left that I found at Ed Brayton's place. It reflects the confusion that the reasonable heterosexual man typically feels over the (US) right-wing ideologue talking points about "making" people gay. You know, by extending them rights, admitting that they exist, refusing to bash them, etc, the social fabric is apparently constructing gay people out of heterosexual cloth. This rejoinder is pitch perfect.

I'm not going to say that all homophobes are closeted homosexuals. I just want to point out that anyone who thinks social pressure is all that keeps straight men from forsaking women to pursue other men has no idea what it's like to be a straight man.

I have a similar response to people like Psi Wavefunction who write:

That is, your results should probably be of a kind that would encourage further funding in your field. Presumably, if you get funding for environmental topics, you'd be better off with results stating your Cute Fluffy Animal is on the brink of extinction rather than 'oh it's doing fine'. In that particular case, who the hell is going to dump more money into Cute Fluffy Animal research if it's not under some sort of threat? Conflict of interests much?

What? Okay, beyond the question of whether scientists might actually believe, based on their own research and that of their subfield, that Cute Fluffy Animals are on the brink of extinction, we have the usual bullshit allegation that scientists just go out and "prove" what their funding agencies want to hear.
It makes me wonder, if a person really believes this, whether they have any idea what it actually means to be a scientist. Now, in the case of my usual opponents from the legalize-eet perspective, agreed: they don't know what it means to be a scientist because they are not scientists. No worries; we should probably shoulder the task of explaining to them how our lives work. But for someone who appears to fancy themselves a science blogger? Hmm.

Even blogging about research papers is sensitive, especially within your own field. You have to balance opinion, factual accuracy and style without offending the authors. Some bloggers find it perfectly sensible to unleash a tirade against some paper they don't like, but I'd prefer not to sever potential relationships with people I've never met, even if I do think their paper is a piece of crap. Primarily for selfish reasons: at this point, I'm in no position to start collecting enemies in academia. Or anywhere, really.
If I were a truly independent blogger, that wouldn't fucking matter, and I'd probably make a point of devouring every crappy paper I come across for shits and giggles.

So 1) speak for yourself and 2) what is UP with these people who assert what nasty, nefarious behavior they would get up to if only they had some cover? Seriously?
This ties into the usual allegations from out-bloggers about pseud-bloggers: the unproven assertion that all this nasty id-based behavior is almost impossible to resist, save for the social embarrassment of attaching one's own name.
If this is what you really believe, then you have no idea what it means to have an intrinsic professional, moral and ethical center.

20 responses so far

Conflict for NIH Funded Bloggers

Jul 07 2010 Published by under Blogging, Ethics

Greg Laden raises a decent question* amid the PepsiBlogs kerfuffle, what with his reflexive need to take potshots at his perceived blog enemies and all.

Somewhere in the middle are blogs written by scientists at MRU's who are mostly funded by some major single source (NIH, Big Pharm, ... maybe even Pepsico???) but who, since they are either indy or pseudo, are different than a corporate sponsored blog.

I'm pretty sure I'm the blogger who takes the most heat for NIH-funding conflict of interest, because of my topic domain. I'll have to dredge up the links later because they are not overwhelmingly common.
The charge comes from people who don't like my comments about the possible health risks of recreational drugs, most typically when I am talking about cannabis. It comes in two basic flavors.

Continue Reading »

14 responses so far

More evidence that the NIH has no interest in curbing real COI

Jun 08 2010 Published by under Ethics, NIH, Scientific Misconduct

At all.
ScienceInsider has an overview of a dismal story being reported by The Chronicle of Higher Education. It involves a tale I've discussed before, now with a new twist. ScienceInsider:

In 2008, a Senate investigation found that Nemeroff failed to report at least $1.2 million of more than $2.4 million that he had received for consulting for drug companies. NIH suspended one of Nemeroff's grants, and in December 2008, Emory announced that it would not allow Nemeroff to apply for NIH grants for 2 years.

As I was just saying, this is the scope of the real problem. Changing the reporting threshold from $10K per year to $5K per year does absolutely nothing about a guy who fails to report some or all of his outside activity.
Still, a 2-year suspension sounds like something, doesn't it?

Continue Reading »

11 responses so far
