Archive for the 'Science Publication' category
Scientific publishers being told they can't keep fleecing the taxpayer so badly are basically Cliven Bundy. Discuss.
— Tom Reller, Elsevier (@TomReller) April 23, 2015
Two years after your paper is published in Journal of SocietyB send the citation report showing that it quadrupled the JIF of the JournalA that rejected it to the rejecting Editor.
Let's make this a thing, people.
— FASEB Public Affairs (@FASEBopa) March 23, 2015
The offending policy.
Of course, unless they get to the bottom of who asserted that reviewers and AEs who suggest experiments should be added to the author line, we have to assume the attitude remains.
This has been bopping around on the Twitts lately...
Shit --- - F. Cesari / Nature pic.twitter.com/8vlpksZZHc
— Sarah (@Drosophilista) March 10, 2015
I agree with the following Twitter comment
— Michael Hendricks (@MHendr1cks) March 3, 2015
Insofar as it calls for the Editorial Board of the Journal of Neuroscience to explain why it banned three authors from future submissions. As I said on the prior post, this step is unusual and seems on the face of it to be extreme.
I also said I could see justification for the decision to retract the paper. That decision could also stand some explanation, given the authors' public defense and the local University review decision.
The Journal of Neuroscience has received notification of an investigation by the Perelman School of Medicine at the University of Pennsylvania, which supports the journal's findings of data misrepresentation in the article “Intraneuronal APP, Not Free Aβ Peptides in 3xTg-AD Mice: Implications for Tau Versus Aβ-Mediated Alzheimer Neurodegeneration” by Matthew J. Winton, Edward B. Lee, Eveline Sun, Margaret M. Wong, Susan Leight, Bin Zhang, John Q. Trojanowski, and Virginia M.-Y. Lee, which appeared on pages 7691–7699 of the May 25, 2011 issue. Because the results cannot be considered reliable, the editors of The Journal are retracting the paper.
From RetractionWatch we learn that the Journal has also issued a submission ban to three of the authors:
According to author John Trojanowski ... he and Lee have been barred from publishing in Journal for Neuroscience for several years. Senior author Edward Lee is out for a year.
This is the first time I have ever heard of a Journal issuing a ban on authors submitting papers to it. This is an interesting policy.
If this were a case of a conviction for academic fraud, the issues might be a little clearer. But as it turns out, it is a very muddy case indeed.
A quote from the last author:
In a nut shell, Dean Glen Gaulton asserted that the findings in the paper were correct despite mistakes in the figures. I suggested to J. Neuroscience that we publish a corrigendum to clarify these mistakes for the readership of J Neuroscience
The old "mistaken figures" excuse. Who, might we ask, is at fault?
RetractionWatch quotes the second-senior author Trojanowski:
Last April, we got an email about an inquiry into figures that I would call erroneously used. An error was made by [first author] Matt Winton, who was leaving science and in transition between Penn and his new job. He was assembling the paper to submit it, there were several iterations of the paper. One set of figures was completely correct – I still don’t know what happened, but he got the files mixed up, and used erroneous figures
Winton has apparently landed a job as a market analyst*, providing advice to investors on therapeutics for Alzheimer's Disease. Maybe the comment from Trojanowski is true and Winton was in a rush to get the paper off his desk as he started the new job**. Maybe. Maybe there is all kinds of blame to go around and the other authors should have caught the problem.
Or maybe this was one of those deliberate frauds in which someone took shortcuts and represented immunohistochemical images or immunoblots as something they were not. The finding from the University's own investigation appears to confirm, however, that a legitimate mistake was made.
...so let us assume it was all an accident. Should the paper be retracted, or merely corrected?
I think there are two issues here that support the Journal's right to retract the paper.
We cannot ignore that publication of a finding first has tremendous currency in the world of academic publishing. So does the cachet of publishing in one Journal over another. If a set of authors are sloppy about their manuscript preparation, provide erroneous data figures and are permitted to "correct" them, they gain essentially all of the credit, potentially taking priority, or placement in a given Journal, away from another group that works more carefully.
Since we would like authors to take all the care they possibly can in submitting correct data in the first place, it makes some sense to take steps to discourage sloppiness. Retraction is certainly one such discouragement. A ban on future submissions does seem, on the face of it, a bit harsh for a single isolated error. I might not opt for that if it were my decision. But I can certainly see where another scientist might legitimately want to bring down the ban hammer and I would be open to argument that it is necessary.
The second issue I can think of is related. It has to do with whether the paper's acceptance was unfairly won by the "mistake". This is tricky. I have seen many cases in which, even to the relatively uninformed viewer, the replacement/correct figure looks a lot crappier/dirtier/more equivocal than the original mistaken image. Whether or not such "pretty" data change the correctness of the interpretation and the strength of the support, they are often interpreted that way. This raises the question of whether the paper would have gained acceptance with the real data instead of the supposedly mistaken data. We obviously can't rewind history, but the theoretical concern should be easy to appreciate. Maybe the Journal of Neuroscience review board went through all of the review materials for this paper and decided that the mistaken figure sealed the acceptance? For this concern it really makes no difference to the Journal whether the error was intentional or not; there is a strong argument that the integrity of its process requires retraction whenever there is significant doubt the paper would have been accepted without the mistaken image(s).
Given these two issues, I see no reason that the Journal is obligated to "abide by the Penn committee’s investigation" as Trojanowski appears to think they should be. The Journal could accept that it was all just a mistake and still have good reason to retract the paper. But again, a ban on further submissions from the authors seems a bit harsh.
Now, I will point out one thing in this scenario that chaps my hide. It is a frequent excuse of the convicted data faker that they were right, so all is well. RetractionWatch further quotes the senior author, Lee:
...the findings of this paper are extremely important for the Alzheimer’s disease field because it provided convincing evidence pointing out that a previous report claiming accumulation of intracellular Abeta peptide in a mouse model (3XFAD) is wrong (Oddo et al., Neuron 2003), as evidenced by the fact that this paper has been cited by others for 62 times since publication. Subsequent to our 2011 J. Neuroscience paper, others also have found no evidence of intracellular Abeta in the 3XFAD mice (e.g. Lauritzen et al., J. Neurosci, 2012).
I disagree that whether the figures are correct and/or repeatable is an issue that affects the decision here. You either have the correct data or you do not. You either submitted the correct data for review with the manuscript or you did not. Whether you are able to obtain the right data later, whether other labs obtain the right data or whether you had the right data in a mislabeled file all along is absolutely immaterial to whether the paper should be retracted.
The system itself is what needs to be defended. Because if you don't protect the integrity of the peer review system - where authors are presumed to be honest - then it encourages more sloppiness and more outright fraud.
*An interesting alt-career, folks. One of my old grad school peeps has been in this industry for years and appears to really love it.
**I will admit, my eyebrows go up when the person being thrown under the bus for a mistake or a data fraud is someone who is no longer in the academic science publishing game and has very little to lose compared with the other authors.
The announcement for the policy is here.
Before I get into this, let me say that it would be a good thing if the review of scientific manuscripts could be entirely blind. Meaning that the authors do not know who is editing or reviewing their paper (the latter is almost always true already) and that the editors and reviewers do not know who the authors are.
The reason is simple. Acceptance of a manuscript for publication should be entirely on the basis of what is contained in that manuscript. It should rely in no way on the identity of the people submitting the manuscript. This is not true at present. The reputation and/or perceived power of the authors is hugely influential on what gets published in which journals. Particularly for what are perceived as the best or most elite journals. This is a fact.
The risk is that inferior science gets accepted for publication because of who the authors are and therefore that more meritorious science does not get accepted. Even more worrisome, science that is significantly flawed or wrong may get published because of author reputation when it would have otherwise been sent back for fixing of the flaws.
We should all be most interested in making science publication as excellent as possible.
Blinding of the peer review process is a decent way to minimize biases based on author identity, so it is a good thing.
My problem is that it cannot work, absent significant changes in the way academic publishing operates. Consequently, any attempt to conduct double-blinded review that does not address these significant issues is doomed to fail. And since anyone with half a brain can see the following concerns, if they argue this Nature initiative is a good idea then I submit to you that they are engaged in a highly cynical effort to direct attention away from certain things. Things that we might describe as the real problem.
Here are the issues I see with the proposed Nature experiment.
1) It doesn't blind their editors. Nature uses a professional editorial staff who decide whether to send a manuscript out for peer review or just to summarily reject it. They select reviewers, make interim decisions, decide whether to send subsequent revised versions to review, select new or old reviewers and decide, again, whether to accept the manuscript. These editors, being human, are subject to tremendous biases based on author identity. Their role in the process is so tremendously powerful that blinding the reviewers but not the editors to the author identity is likely to have only minimal effect.
2) This policy is opt-in. HA! This is absurd. The people who are powerful and thus expected to benefit from their identity will not opt in. They'd be insane to do so. The people who are not powerful and are, as it happens, just exactly those people who are calling for blinded review so their work will have a fair chance on its own merits will opt-in but will gain no relative advantage by doing so.
3) The scientific manuscript as we currently know it is chock full of clues as to author identity. Even if you rigorously excluded "we previously reported..." statements and managed to even out the self-citations to a nonsuspicious level (no easy task on either account) there is still the issue of scientific interest. No matter what the topic, there is going to be a betting gradient for how likely different labs are to have produced the manuscript.
4) The Nature policy mentions no back checking on whether their blinding actually works. This is key, see above comment about the betting gradient. It is not sufficient to put formal de-identification in place. It is necessary to check with reviewers over the real practice of the policy to determine the extent to which blinding succeeds or fails. And you cannot simply brandish a less than 100% identification rate either. If the reviewer only thinks that the paper was written by Professor Smith, then the system is already lost. Because that reviewer is being affected by the aforementioned issues of reputation and power even if she is wrong about the authors. That's on the tactical, paper by paper front. In the longer haul, the more reputed labs are generally going to be more actively submitting to a given journal and thus the erroneous assumption will be more likely to accrue to them anyway.
So. We're left with a policy that can be put in place in a formal sense. Nature can claim that they have conducted "double blind" review of manuscripts.
They will not be able to show that review is truly blinded. More critically, they will not be able to show that author reputational bias has been significantly decoupled from the entire process, given the huge input from their editorial staff.
So anything that they conclude from this will be baseless. And therefore highly counterproductive to the overall mission.
What happens when you bury data in supplemental figures b/c reviewers ask for it? Another lab publishes a primary paper on same topic.
— chemstructbio (@chemstructbio) February 24, 2015
Good! That's my response. It is fantastic if someone can publish a paper on stuff that was essentially hidden in the Supplementary Materials of some other paper.
This is great news.
The PR headlines are breathless and consistent:
The paper is here.
Yuki Oka, Mingyu Ye & Charles S. Zuker Thirst driving and suppressing signals encoded by distinct neural populations in the brain Nature (2015) doi:10.1038/nature14108
The takeaway punch message from the Abstract:
These results reveal an innate brain circuit that can turn an animal’s water-drinking behaviour on and off, and probably functions as a centre for thirst control in the mammalian brain.
Somebody like me immediately thinks to himself "subfornical neurons control drinking behavior? This is like the fifth lecture in Psych 105: Introduction to Physiological Psychology."
Let's do a little PubMed troll for "subfornical drinking". Yeah, we've known since at least the 1970s that the subfornical control of drinking behavior is essential, robust and mediated by angiotensin II signalling. We know how this area responds to blood volemia and natremia, and how the positioning relative to the third ventricle and the function of the circumventricular organ vis a vis the blood-brain barrier permits this rapid response. We know from electrophysiological recording techniques and genetic deletions that the signalling works through the AT1 receptor subtype to excite subfornical neuronal activity. Cholinergic mechanisms have likewise been identified as critical components via pharmacological experiments. Mapping of activated neurons has been used to identify related circuitry. The targets of subfornical neurons are known and their involvement in drinking behavior has likewise been characterized. Extensively. We know that electrical stimulation of these neuronal populations activates drinking in water-sated rats, for goodness' sake! We know there are at least three subpopulations of SFO neurons involved and something about the neurochemical signalling complexity.
The new work by Oka and colleagues simply repeats the above-mentioned electro-stimulation experiment from 1983 using optogenetic stimulation. Apart from this, maybe, we have an advance* in that they identified ETV-1 vs VGAT (GABA transporter) markers of two distinct subpopulations of neurons which have opposite effects on the motivation to consume water.
This paper is best described as a very small, incremental advance in our understanding of thirst and drinking behavior, albeit tarted up with the pizzazz of optogenetic techniques.
Yet it was published in Nature.
Someone really needs to introduce the editorial staff of Nature to PubMed.
*BTW, a Nature editor confirms this microscopic incremental advance is what is new about this paper.