I think that at some point, protracted refusal to cite relevant work amounts to scientific misconduct.
Archive for the 'Ethics' category
There is one thing that concerns me about the Journal of Neuroscience banning three authors from future submission in the wake of a paper retraction.
One reason you might seek to get harsh with some authors is if they have a track record of corrigenda and errata supplied to correct mistakes in their papers. This kind of pattern would support the idea that they are pursuing an intentional strategy of sloppiness to beat other competitors to the punch and/or just don't really give a care about good science. A Journal might think either "Ok, but not in our Journal, chumpos" or "Apparently we need to do something to get their attention in a serious way".
There is another reason that is a bit worrisome.
One of the issues I struggle with is the whisper campaign about chronic data fakers. "You just can't trust anything from that lab". "Everyone knows they fake their data."
I have heard these comments frequently in my career.
On the one hand, I am a big believer in innocent-until-proven-guilty and therefore this kind of crap is totally out of bounds. If you have evidence of fraud, present it. If not, shut the hell up. It is far too easy to assassinate someone's character unfairly and we should not encourage this for a second.
I can't find anything on PubMed that is associated with the last two authors of this paper in combination with erratum or corrigendum as keywords. So, there is no (public) track record of sloppiness and therefore there should be no thought of having to bring a chronic offender to task.
On the other hand, there is a lot of undetected and unproven fraud in science. Just review the ORI notices and you can see just how long it takes to bust the scientists who were ultimately proved to be fraudsters. The public revelation of fraud to the world of science can be many years after someone first noticed a problem with a published paper. You also can see that convicted fraudsters have quite often continued to publish additional fraudulent papers (and win grants on fraudulent data) for years after they are first accused.
I am morally certain that I know at least one chronic fraudster who has, to date, kept one step ahead of the short and ineffectual arm of the ORI law despite formal investigation. There was also a very curious case I discussed for which there were insider whispers of fraud and yet no findings that I have seen to date.
This is very frustrating. While data faking is a very high risk behavior, it is also a high reward behavior. And the risks are not inevitable. Some people get away with it.
I can see how it would be very tempting to enact a harsh penalty on an otherwise mild pretext for those authors that you suspected of being chronic fraudsters.
But I still don't see how we can reasonably support doing so, if there is no evidence of misconduct other than the rumor mill.
The Journal of Neuroscience has received notification of an investigation by the Perelman School of Medicine at the University of Pennsylvania, which supports the journal's findings of data misrepresentation in the article “Intraneuronal APP, Not Free Aβ Peptides in 3xTg-AD Mice: Implications for Tau Versus Aβ-Mediated Alzheimer Neurodegeneration” by Matthew J. Winton, Edward B. Lee, Eveline Sun, Margaret M. Wong, Susan Leight, Bin Zhang, John Q. Trojanowski, and Virginia M.-Y. Lee, which appeared on pages 7691–7699 of the May 25, 2011 issue. Because the results cannot be considered reliable, the editors of The Journal are retracting the paper.
From RetractionWatch we learn that the Journal has also issued a submission ban to three of the authors:
According to author John Trojanowski ... he and Lee have been barred from publishing in Journal for Neuroscience for several years. Senior author Edward Lee is out for a year.
This is the first time I have ever heard of a Journal issuing a ban on authors submitting papers to them. This is an interesting policy.
If this were a case of a conviction for academic fraud, the issues might be a little clearer. But as it turns out, it is a very muddy case indeed.
A quote from the last author:
In a nut shell, Dean Glen Gaulton asserted that the findings in the paper were correct despite mistakes in the figures. I suggested to J. Neuroscience that we publish a corrigendum to clarify these mistakes for the readership of J Neuroscience
The old "mistaken figures" excuse. Who, might we ask, is at fault?
RetractionWatch quotes the second-senior author Trojanowski:
Last April, we got an email about an inquiry into figures that I would call erroneously used. An error was made by [first author] Matt Winton, who was leaving science and in transition between Penn and his new job. He was assembling the paper to submit it, there were several iterations of the paper. One set of figures was completely correct – I still don’t know what happened, but he got the files mixed up, and used erroneous figures
Winton has apparently landed a job as a market analyst*, providing advice to investors on therapeutics for Alzheimer's Disease. Maybe the comment from Trojanowski is true and he was in a rush to get the paper off his desk as he started the new job**. Maybe. Maybe there is all kinds of blame to go around and the other authors should have caught the problem.
Or maybe this was one of those deliberate frauds in which someone took shortcuts and represented immunohistochemical images or immunoblots as something they were not. The finding from the University's own investigation appears to confirm, however, that a legitimate mistake was made.
...so let us assume it was all an accident. Should the paper be retracted? Or corrected?
I think there are two issues here that support the Journal's right to retract the paper.
We cannot ignore that publication of a finding first has tremendous currency in the world of academic publishing. So does the cachet of publishing in one Journal over another. If a set of authors are sloppy about their manuscript preparation, provide erroneous data figures and they are permitted to "correct" the figures, they gain essentially all the credit. Potentially taking credit for priority or a given Journal level away from another group that works more carefully.
Since we would like authors to take all the care they possibly can in submitting correct data in the first place, it makes some sense to take steps to discourage sloppiness. Retraction is certainly one such discouragement. A ban on future submissions does seem, on the face of it, a bit harsh for a single isolated error. I might not opt for that if it were my decision. But I can certainly see where another scientist might legitimately want to bring down the ban hammer and I would be open to argument that it is necessary.
The second issue I can think of is related. It has to do with whether the paper acceptance was unfairly won by the "mistake". This is tricky. I have seen many cases in which even to the relatively uninformed viewer, the replacement/correct figure looks a lot crappier/dirtier/equivocal than the original mistaken image. Whether right or wrong that so-called "pretty" data change the correctness of the interpretation and strength of the support, it is often interpreted this way. This raises the question of whether the paper would have gained acceptance with the real data instead of the supposedly mistaken data. We obviously can't rewind history, but this theoretical concern should be easy to appreciate. Maybe the Journal of Neuroscience review board went through all of the review materials for this paper and decided that the faked figure sealed the acceptance? For this concern it really makes no difference to the Journal whether the mistake was unintentional or not, there is a strong argument that the integrity of its process requires retraction whenever there is significant doubt the paper would have been accepted without the mistaken image(s).
Given these two issues, I see no reason that the Journal is obligated to "abide by the Penn committee’s investigation" as Trojanowski appears to think they should be. The Journal could accept that it was all just a mistake and still have good reason to retract the paper. But again, a ban on further submissions from the authors seems a bit harsh.
Now, I will point out one thing in this scenario that chaps my hide. It is a frequent excuse of the convicted data faker that they were right, so all is well. RetractionWatch further quotes the senior author, Lee:
...the findings of this paper are extremely important for the Alzheimer’s disease field because it provided convincing evidence pointing out that a previous report claiming accumulation of intracellular Abeta peptide in a mouse model (3XFAD) is wrong (Oddo et al., Neuron 2003), as evidenced by the fact that this paper has been cited by others for 62 times since publication. Subsequent to our 2011 J. Neuroscience paper, others also have found no evidence of intracellular Abeta in the 3XFAD mice (e.g. Lauritzen et al., J. Neurosci, 2012).
I disagree that whether the figures are correct and/or repeatable is an issue that affects the decision here. You either have the correct data or you do not. You either submitted the correct data for review with the manuscript or you did not. Whether you are able to obtain the right data later, whether other labs obtain the right data or whether you had the right data in a mislabeled file all along is absolutely immaterial to whether the paper should be retracted.
The system itself is what needs to be defended. Because if you don't protect the integrity of the peer review system - where authors are presumed to be honest - then it encourages more sloppiness and more outright fraud.
*An interesting alt-career, folks. One of my old grad school peeps has been in this industry for years and appears to really love it.
**I will admit, my eyebrows go up when the person being thrown under the bus for a mistake or a data fraud is someone who is no longer in the academic science publishing game and has very little to lose compared with the other authors.
Most laboratories buy stuff that they need to do their research. It varies. From latex gloves to pipette tips. From mice to bunnies. From cocaine to ABD-xld500BZN....whatever that is. Operant boxes to sequencers. Stuff.
All of these cost money, which generally comes from the laboratory budgets: startup funds, unattached funds if you have 'em and, for the most part, research grants.
Consider this scenario.
We usually get our genotyping done outside of the lab. I mean, I could have this service performed in house by staff but there are many small vendors in my biotech/university/science community that will do it for us.
I met this guy at the bar. Or, maybe I recently ran into an old grad school friend. Some woman I postdoc'd with back in the day. A friend of my spouse. Whomever.
This person is starting up a brand new biotech support company, mom-and-pop kind of thing. This GenesRUs company is happy to take over our genotyping services.
I secure a quote. Wow. Twice the most expensive bottom line I came up with for doing it in-house, the very estimate that convinced me to hit the vendors in the first place. Maybe 3X the price of other locals.
But. But. This person is so nice. And we have a personal connection of some sort. Gee, they are still so small that they will come pick up from us at basically any time we want? And have results back prontissimo?
And you know. I HAVE the grant money. It isn't going to kill our budget to dump a few extra thousands on this top-cost option every year. Even if it amounts to tens of thousands, hey, it's just grant money, right?
The question, Dear Reader, is this.
Is it okay for me to use my PI's prerogative to spend my grant money this way? Just because I want to?
Well this is interesting. After being spanked by the FDA for selling their services without proper review and approval of their medical test (as the FDA interpreted it), the 23andme company is back.
I received a spam email suggesting I purchase one of their kits as a Mother's Day present.
Intrigued, I see this in an alert banner across the linked page.
23andMe provides ancestry-related genetic reports and uninterpreted raw genetic data. We no longer offer our health-related genetic reports. If you are a current customer please go to the health page for more information.
When you go to purchase a new kit you are obliged to check a box indicating you've read an additional warning.
I understand I am purchasing ancestry reports and uninterpreted raw genetic data from 23andMe for $99. I understand I will not receive any reports about my health in the immediate future, and there is no timeline as to which health reports might be available or when they might be available.
Ok. Got it.
So what about existing customers who purchased their kit in the old, pre-ban era? Guess I'd better visit that "health page".
Current 23andMe customers who received health-related results prior to November 22, 2013 will continue to have access to that information. However, no new health-related updates will be provided to your account.
Customers who purchased kits before November 22, 2013 will still receive health-related results.
Customers who purchase or have purchased 23andMe’s Personal Genome Service (PGS) on or after November 22, 2013, (date of compliance letter issued by the FDA) will receive their ancestry information and uninterpreted raw genetic data. At this time, we do not know the timeline as to which health reports might be available in the future or when they might be available.
Customers who purchased kits on or after November 22, 2013 through December 5, 2013 are eligible for a refund. 23andMe has notified all eligible customers by email with refund instructions. If you are eligible and have not received an email, please click here.
Ok, so they are not turning off the results already provided to the older customers. If you fell into the cease-and-desist gap, you don't get your info (boo FDA) but you can get a refund.
In the meantime, 23andme is an ancestry / genealogy company.
I suppose that is it until they pass regulatory approval for their health and trait information?
By way of brief introduction, I last discussed the 23andme genetic screening service in the context of their belated adoption of IRB oversight and interloper paternity rates. You may also be interested in Ed Yong's (or his euro-caucasoid doppelganger's) results.
Today's topic is brought to you by a comment from my closest collaborator on a fascinating low-N developmental biology project.
This collaborator raised a point that extends from my prior comment on the paternity post.
But, and here's the rub, the information propagates. Let's assume there is a mother who knows she had an affair that produced the kid or a father who impregnated someone unknown to his current family. Along comes the 23 and me contact to their child? Grandchild? Niece or nephew? Brother or sister? And some stranger asks them, gee, do you have a relative with these approximate racial characteristics, of approximately such and such age, who was in City or State circa 19blahdeblah? And then this person blast emails their family about it? or posts it on Facebook?
It also connects with a number of issues raised by the fact that 23andme markets to adoptees in search of their genetic relatives. This service is being used by genealogy buffs of all stripes and one cannot help but observe that one of the more ethically complicated results will be the identification of unknown genetic relationships. As I alluded to above, interloper paternity may be identified. Also, one may find out that a relative gave a child up for adoption...or that one fathered a child in the past and was never informed.
That's all very interesting but today's topic relates to crimes in which DNA evidence has been left behind. At present, so far as I understand, the DNA matching is to people who have already crossed the law enforcement threshold. In fact there was a recent brouhaha over just what sort of "crossing" of the law enforcement threshold should permit the cops to take your DNA, if I am not mistaken. This does no good, however, if the criminal has never come to the attention of law enforcement.
Ahhhh, but what if the cops could match the DNA sample left behind by the perpetrator to a much larger database. And find a first or second cousin or something? This would tremendously narrow the investigation, wouldn't it?
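To make the "first or second cousin" math concrete, here is a minimal sketch (function name and numbers are mine, not anything 23andme publishes) of the expected fraction of autosomal DNA shared between relatives, using the standard coefficient-of-relationship arithmetic: each path through a common ancestor contributes (1/2) per meiosis.

```python
# Sketch: expected autosomal DNA sharing for a given relationship,
# summing (1/2)^meioses over each path through a common ancestor.
def expected_shared_fraction(meioses, common_ancestors=2):
    """Expected fraction of autosomal DNA shared between two relatives.

    meioses: number of meioses along one path connecting the pair
             through a single common ancestor.
    common_ancestors: number of shared ancestors (2 for a couple,
             1 for a half-relationship).
    """
    return common_ancestors * 0.5 ** meioses

if __name__ == "__main__":
    examples = {
        "parent/child": (1, 1),    # one path, one meiosis -> 50%
        "full siblings": (2, 2),   # two paths of two meioses -> 50%
        "first cousins": (4, 2),   # 2 * (1/2)^4 = 12.5%
        "second cousins": (6, 2),  # 2 * (1/2)^6 ~= 3.1%
    }
    for name, (m, a) in examples.items():
        print(f"{name}: {expected_shared_fraction(m, a):.2%}")
```

Even a second-cousin hit, at roughly three percent expected sharing, is detectable with a dense SNP panel, which is exactly why a database full of genealogy hobbyists would let investigators collapse a city-sized suspect pool down to a handful of family trees.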
It looks like 23andme is all set to roll over for whichever enterprising police department decides to try.
From the Terms of Service.
Further, you acknowledge and agree that 23andMe is free to preserve and disclose any and all Personal Information to law enforcement agencies or others if required to do so by law or in the good faith belief that such preservation or disclosure is reasonably necessary to: (a) comply with legal process (such as a judicial proceeding, court order, or government inquiry) or obligations that 23andMe may owe pursuant to ethical and other professional rules, laws, and regulations; (b) enforce the 23andMe TOS; (c) respond to claims that any content violates the rights of third parties; or (d) protect the rights, property, or personal safety of 23andMe, its employees, its users, its clients, and the public.
Looks to me that all the cops would need is a warrant. Easy peasy.
As you know, the Boundary Layer blog and citizen-journalist Comradde PhysioProffe have been laying out the case for why institutionally unaffiliated, crowd funded ostensibly open science projects should be careful to adhere to traditional, boring, institutionally hidebound "red tape" procedures when it comes to assuring the ethical use of human subjects in their research.
I raised the parallel case of 23andme at the get go and was mollified by a comment from bsci that 23andme has IRB oversight for their operation. Turns out, they too were brought to this by the peer review process and not by any inherent professionalism or appreciation on the part of the company participants.
The first issue that attracted our attention was that the initial submission lacked a document indicating that the study had passed review by an institutional review board (IRB). The authors responded by submitting a report, obtained after the initial round of review, from the Association for the Accreditation of Human Research Protection Programs (AAHRPP)–accredited company Independent Review Consulting, Inc. (IRC: San Anselmo, CA), exempting them from review on the basis that their activity is “not human subjects research.” On the face of it, this seems preposterous, but on further review, this decision follows not uncommon practices by most scientists and institutional review boards, both academic and commercial, and is based on a guidance statement from the United States Department of Health and Human Services' Office of Human Research Protection (http://www.hhs.gov/ohrp/humansubjects/guidance/cdebiol.htm). Specifically (and as documented in part C2 of the IRC report), there are two criteria that must be met in order to determine that a study involves human subjects research: will the investigators obtain the data through intervention or interaction with the participants, and will the identity of the subject be readily ascertained by the investigator or associated with the information. For the 23andMe study, the answer to both tests was “no,” ostensibly because there was never any interpersonal contact between investigator and participant (that is, data and samples are provided without participants meeting any investigator), and the participant names are anonymous with respect to the data seen by the investigators. It follows from the logic of the IRC review, in accordance with the OHRP guidance documents, that this study does not involve human subjects research.
The journal should never have accepted this article for publication. I find no mention of ethics regarding the use of human or nonhuman vertebrate animals on their guidelines for authors page but it is over here on their Policies page.
Research involving human participants. All research involving human participants must have been approved by the authors' institutional review board or equivalent committee(s), and that board must be named in the manuscript. For research involving human participants, informed consent must have been obtained (or the reason for lack of consent explained — for example, that the data were analyzed anonymously) and all clinical investigation must have been conducted according to the principles expressed in the Declaration of Helsinki. Authors should be able to submit, upon request, a statement from the research ethics committee or institutional review board indicating approval of the research. PLOS editors also encourage authors to submit a sample of a patient consent form, and might require submission on particular occasions.
Obviously, the journal decided to stand on a post-hoc IRB decision that the work in question was not ever "involving human participants" in the first place. This is not acceptable to me.
The reason why is that any reasonable professional involved with anything like this would understand the potential human subjects concern. Once there is that potential, then the only possible ethical way forward is to seek external review by an IRB or IRB-like body. [It has been a while since I kicked up a stink about "silly little internet polls" back in the Sb days. For those new to the blog, I went so far as to get a ruling from my IRB (informal, true, but I retain the email) on the polls that I might put up.] Obviously, the 23andme folks were able to do so...after the journal made them. So there is no reason they could not have done so at the start. They overlooked their professional responsibility. Getting permission after the fact is simply not the way things work.
Imagine if in animal subjects research we were to just go ahead and do whatever we wanted and only at the point of publishing the paper try to obtain approval for only those data that we chose to include in that manuscript. Are you kidding me?
Ethical review processes are not there only to certify each paper. They are there to keep the entire enterprise of research using human or nonhuman vertebrate animals as ethical, humane, responsible etc as is possible.
This is why hairsplitting about "controlling legal authority" when it comes to academic professionals really angers me. We work within these ethical "constraints" ("red tape" as some wag on the Twitts put it) for good reasons and we should fully accept and adopt them. Not put up with them grudgingly, as an irritation, and look for every possible avenue to get ourselves out from under them. We don't leave our professionalism behind when we leave the confines of our University. Ever. We leave it behind when we leave our profession (and some might even suggest our common-decency-humanity) behind.
Somehow I don't think these crowdfunders claim to be doing that.
Reputable citizen-journalist Comradde PhysioProffe has been investigating the doings of a citizen science project, ubiome. Melissa of The Boundary Layer blog has nicely explicated the concerns about citizen science that uses human subjects.
And this brings me to what I believe to be the potentially dubious ethics of this citizen science project. One of the first questions I ask when I see any scientific project involving collecting data from humans is, “What institutional review board (IRB) is monitoring this project?” An IRB is a group that is specifically charged with protecting the rights of human research participants. The legal framework that dictates the necessary use of an IRB for any project receiving federal funding or affiliated with an investigational new drug application stems from the major abuses perpetrated by Nazi physicians during World War II and scientists and physicians affiliated with the Tuskegee experiments. The work that I have conducted while affiliated with universities and with pharmaceutical companies has all been overseen by an IRB. I will certainly concede to all of you that the IRB process is not perfect, but I do believe that it is a necessary and largely beneficial process.
My immediate thought was about those citizen scientist, crowd-funded projects that might happen to want to work with vertebrate animals.
I wonder how this would be received:
“We’ve given extensive thought to our use of stray cats for invasive electrophysiology experiments in our crowd funded garage startup neuroscience lab. We even thought really hard about IACUC approvals and look forward to an open dialog as we move forward with our recordings. Luckily, the cats supply consent when they enter the garage in search of the can of tuna we open every morning at 6am.”
Anyway, in citizen-journalist PhysioProffe's investigations he has linked up with an amazing citizen-IRB-enthusiast. A sample from this latter's recent guest post on the former's blog:
Then in 1972, a scandal erupted over the Tuskegee syphilis experiment. This study, started in 1932 by the US Public Health Service, recruited 600 poor African-American tenant farmers in Macon County, Alabama: 201 of them were healthy and 399 had syphilis, which at the time was incurable. The purpose of the study was to try out treatments on what even the US government admitted to be a powerless, desperate demographic. Neither the men nor their partners were told that they had a terminal STD; instead, the sick men were told they had “bad blood” — a folk term with no basis in science — and that they would get free medical care for themselves and their families, plus burial insurance (i.e., a grave plot, casket and funeral), for helping to find a cure.
When penicillin was discovered, and found in 1947 to be a total cure for syphilis, the focus of the study changed from trying to find a cure to documenting the progress of the disease from its early stages through termination. The men and their partners were not given penicillin, as that would interfere with the new purpose: instead, the government watched them die a slow, horrific death as they developed tumors and the spirochete destroyed their brains and central nervous system. Those who wanted out of the study, or who had heard of this new miracle drug and wanted it, were told that dropping out meant paying back the cost of decades of medical care, a sum that was far beyond anything a sharecropper could come up with.
If you want to understand the child molestation case that has rocked Penn State University in full, you need to read PhysioProf's take on the matter.
Joe Paterno–who has been the head coach for 46 years is the absolute monarch of that program, with absolute power. Regardless of whether he satisfied the bare minimum of legal requirements to report what he knew about the rape of children to his “superiors”–which as absolute monarch at Penn State, he really had none
emphasis added, but not really needed.