Archive for the 'Science Publication' category

The Journal of Neuroscience needs to explain the author ban

I agree with a Twitter comment insofar as it calls for the Editorial Board of the Journal of Neuroscience to explain why it banned three authors from future submissions. As I said in the prior post, this step is unusual and seems, on the face of it, extreme.

I also said I could see justification for the decision to retract the paper, though that decision too could stand some explanation, given the authors' public defense and the local University review decision.

5 responses so far

Banning authors once a paper is retracted from the journal?

Mar 02 2015 Published by under Ethics, Science Ethics, Science Publication

A post at Retraction Watch alerts us to a paper retraction at the Journal of Neuroscience. The J Neuro notice on this paper reads:

The Journal of Neuroscience has received notification of an investigation by the Perelman School of Medicine at the University of Pennsylvania, which supports the journal's findings of data misrepresentation in the article “Intraneuronal APP, Not Free Aβ Peptides in 3xTg-AD Mice: Implications for Tau Versus Aβ-Mediated Alzheimer Neurodegeneration” by Matthew J. Winton, Edward B. Lee, Eveline Sun, Margaret M. Wong, Susan Leight, Bin Zhang, John Q. Trojanowski, and Virginia M.-Y. Lee, which appeared on pages 7691–7699 of the May 25, 2011 issue. Because the results cannot be considered reliable, the editors of The Journal are retracting the paper.

From RetractionWatch we learn that the Journal has also issued a submission ban to three of the authors:

According to author John Trojanowski ... he and Lee have been barred from publishing in Journal for Neuroscience for several years. Senior author Edward Lee is out for a year.

This is the first time I have ever heard of a Journal banning authors from submitting papers to it. It is an interesting policy.

If this were a case of a conviction for academic fraud, the issues might be a little clearer. But as it turns out, it is a very muddy case indeed.

A quote from the last author:

In a nut shell, Dean Glen Gaulton asserted that the findings in the paper were correct despite mistakes in the figures. I suggested to J. Neuroscience that we publish a corrigendum to clarify these mistakes for the readership of J Neuroscience

The old "mistaken figures" excuse. Who, might we ask, is at fault?

RetractionWatch quotes the second-senior author Trojanowski:

Last April, we got an email about an inquiry into figures that I would call erroneously used. An error was made by [first author] Matt Winton, who was leaving science and in transition between Penn and his new job. He was assembling the paper to submit it, there were several iterations of the paper. One set of figures was completely correct – I still don’t know what happened, but he got the files mixed up, and used erroneous figures

Winton has apparently landed a job as a market analyst*, providing advice to investors on therapeutics for Alzheimer's Disease. Maybe the comment from Trojanowski is true and he was in a rush to get the paper off his desk as he started the new job**. Maybe. Maybe there is all kinds of blame to go around and the other authors should have caught the problem.

Or maybe this was one of those deliberate frauds in which someone took shortcuts and represented immunohistochemical images or immunoblots as something they were not. The finding from the University's own investigation appears to confirm, however, that a legitimate mistake was made. Let us assume it was all an accident. Should the paper be retracted, or merely corrected?

I think there are two issues here that support the Journal's right to retract the paper.

We cannot ignore that publication of a finding first has tremendous currency in the world of academic publishing. So does the cachet of publishing in one Journal over another. If a set of authors are sloppy about their manuscript preparation, provide erroneous data figures, and are then permitted to "correct" the figures, they gain essentially all the credit, potentially taking priority, or a slot in a given Journal, away from another group that works more carefully.

Since we would like authors to take all the care they possibly can in submitting correct data in the first place, it makes some sense to take steps to discourage sloppiness. Retraction is certainly one such discouragement. A ban on future submissions does seem, on the face of it, a bit harsh for a single isolated error. I might not opt for that if it were my decision. But I can certainly see where another scientist might legitimately want to bring down the ban hammer and I would be open to argument that it is necessary.

The second issue I can think of is related. It has to do with whether acceptance of the paper was unfairly won by the "mistake". This is tricky. I have seen many cases in which, even to the relatively uninformed viewer, the replacement/correct figure looks a lot crappier/dirtier/more equivocal than the original mistaken image. Whether or not such "pretty" data actually change the correctness of the interpretation and the strength of the support, they are often read that way. This raises the question of whether the paper would have been accepted with the real data instead of the supposedly mistaken data. We obviously can't rewind history, but this theoretical concern should be easy to appreciate. Maybe the Journal of Neuroscience review board went through all of the review materials for this paper and decided that the mistaken figure sealed the acceptance. For this concern it really makes no difference to the Journal whether the mistake was unintentional or not; there is a strong argument that the integrity of its process requires retraction whenever there is significant doubt that the paper would have been accepted without the mistaken image(s).

Given these two issues, I see no reason that the Journal is obligated to "abide by the Penn committee’s investigation" as Trojanowski appears to think they should be. The Journal could accept that it was all just a mistake and still have good reason to retract the paper. But again, a ban on further submissions from the authors seems a bit harsh.

Now, I will point out one thing in this scenario that chaps my hide. It is a frequent excuse of the convicted data faker that they were right, so all is well. RetractionWatch further quotes the senior author, Lee:

...the findings of this paper are extremely important for the Alzheimer’s disease field because it provided convincing evidence pointing out that a previous report claiming accumulation of intracellular Abeta peptide in a mouse model (3XFAD) is wrong (Oddo et al., Neuron 2003), as evidenced by the fact that this paper has been cited by others for 62 times since publication. Subsequent to our 2011 J. Neuroscience paper, others also have found no evidence of intracellular Abeta in the 3XFAD mice (e.g. Lauritzen et al., J. Neurosci, 2012).

I disagree that whether the figures are correct and/or repeatable is an issue that affects the decision here. You either have the correct data or you do not. You either submitted the correct data for review with the manuscript or you did not. Whether you are able to obtain the right data later, whether other labs obtain the right data or whether you had the right data in a mislabeled file all along is absolutely immaterial to whether the paper should be retracted.

The system itself is what needs to be defended. If you don't protect the integrity of the peer review system - where authors are presumed to be honest - you encourage more sloppiness and more outright fraud.

*An interesting alt-career, folks. One of my old grad school peeps has been in this industry for years and appears to really love it.

**I will admit, my eyebrows go up when the person being thrown under the bus for a mistake or a data fraud is someone who is no longer in the academic science publishing game and has very little to lose compared with the other authors.

19 responses so far

Nature is not at all serious about double blind peer review

Feb 25 2015 Published by under Science Publication, Scientific Publication

The announcement for the policy is here.

Before I get into this: it would be a good thing if the review of scientific manuscripts could be entirely blind, meaning that the authors do not know who is editing or reviewing their paper - the former is almost always true already - and that the editors and reviewers do not know who the authors are.

The reason is simple. Acceptance of a manuscript for publication should be entirely on the basis of what is contained in that manuscript. It should rely in no way on the identity of the people submitting the manuscript. This is not true at present. The reputation and/or perceived power of the authors is hugely influential on what gets published in which journals. Particularly for what are perceived as the best or most elite journals. This is a fact.

The risk is that inferior science gets accepted for publication because of who the authors are and therefore that more meritorious science does not get accepted. Even more worrisome, science that is significantly flawed or wrong may get published because of author reputation when it would have otherwise been sent back for fixing of the flaws.

We should all be most interested in making science publication as excellent as possible.

Blinding of the peer review process is a decent way to minimize biases based on author identity, so it is a good thing.

My problem is that it cannot work absent significant changes in the way academic publishing operates. Consequently, any attempt to conduct double-blinded review that does not address these significant issues is doomed to fail. And since anyone with half a brain can see the following concerns, if they argue this Nature initiative is a good idea then I submit to you that they are engaged in a highly cynical effort to direct attention away from certain things. Things that we might describe as the real problem.

Here are the issues I see with the proposed Nature experiment.
1) It doesn't blind their editors. Nature uses a professional editorial staff who decide whether to send a manuscript out for peer review or just to summarily reject it. They select reviewers, make interim decisions, decide whether to send subsequent revised versions to review, select new or old reviewers and decide, again, whether to accept the manuscript. These editors, being human, are subject to tremendous biases based on author identity. Their role in the process is so tremendously powerful that blinding the reviewers but not the editors to the author identity is likely to have only minimal effect.

2) This policy is opt-in. HA! This is absurd. The people who are powerful, and thus expected to benefit from their identity, will not opt in. They'd be insane to do so. The people who are not powerful - and who are, as it happens, just exactly those people calling for blinded review so their work will have a fair chance on its own merits - will opt in but will gain no relative advantage by doing so.

3) The scientific manuscript as we currently know it is chock full of clues as to author identity. Even if you rigorously excluded "we previously reported..." statements and managed to even out the self-citations to a nonsuspicious level (no easy task on either account) there is still the issue of scientific interest. No matter what the topic, there is going to be a betting gradient for how likely different labs are to have produced the manuscript.

4) The Nature policy mentions no back-checking on whether their blinding actually works. This is key; see the above comment about the betting gradient. It is not sufficient to put formal de-identification in place. It is necessary to check with reviewers about the real practice of the policy to determine the extent to which blinding succeeds or fails. And you cannot simply brandish a less-than-100% identification rate either. If the reviewer only thinks that the paper was written by Professor Smith, then the system is already lost, because that reviewer is being affected by the aforementioned issues of reputation and power even if she is wrong about the authors. That's on the tactical, paper-by-paper front. In the longer haul, the more reputed labs are generally going to be submitting more actively to a given journal, and thus the erroneous assumption will be more likely to accrue to them anyway.

So. We're left with a policy that can be put in place in a formal sense. Nature can claim that they have conducted "double blind" review of manuscripts.

They will not be able to show that review is truly blinded. More critically, they will not be able to show that author reputational bias has been significantly decoupled from the entire process, given the huge input from their editorial staff.

So anything that they conclude from this will be baseless. And therefore highly counterproductive to the overall mission.

47 responses so far

A lesson on Supplemental Materials

Feb 24 2015 Published by under Science Publication, Scientific Publication

Good! That's my response. It is fantastic if someone can publish a paper on stuff that was essentially hidden in the Supplementary Materials of some other paper. 

This is great news. 

23 responses so far

Nature publishes overwhelmingly proven "NEW AMAZING FINDING" ....because optogenetics!

The PR headlines are breathless and consistent:

Researchers Identify Brain Circuit That Regulates Thirst

Brain’s On-Off Thirst Switch Identified

The paper is here.

Yuki Oka, Mingyu Ye & Charles S. Zuker. Thirst driving and suppressing signals encoded by distinct neural populations in the brain. Nature (2015) doi:10.1038/nature14108

The takeaway punch message from the Abstract:

These results reveal an innate brain circuit that can turn an animal’s water-drinking behaviour on and off, and probably functions as a centre for thirst control in the mammalian brain.

Somebody like me immediately thinks to himself "subfornical neurons control drinking behavior? This is like the fifth lecture in Psych 105: Introduction to Physiological Psychology."

Let's do a little PubMed troll for "subfornical drinking". Yeah, we've known since at least the 1970s that the subfornical control of drinking behavior is essential, robust and mediated by angiotensin II signalling. We know how this area responds to blood volemia and natremia, and how the positioning relative to the third ventricle and the function of the circumventricular organ vis a vis the blood-brain barrier permits this rapid response. We know the signalling works through the AT1 receptor subtype to excite subfornical neuronal activity, via electrophysiological recording techniques and genetic deletions. Cholinergic mechanisms have likewise been identified as critical components via pharmacological experiments. Mapping of activated neurons has been used to identify related circuitry. The targets of subfornical neurons are known and their involvement in drinking behavior has likewise been characterized. Extensively. We know that electrical stimulation of these neuronal populations activates drinking in water-sated rats, for goodness' sake! We know there are at least three subpopulations of SFO neurons involved and something about the neurochemical signalling complexity.

There are review articles that you can read if you want to get up to speed.

The new work by Oka and colleagues simply repeats the above-mentioned electro-stimulation experiment from 1983 using optogenetic stimulation. Apart from this, maybe, we have an advance* in that they identified ETV-1 vs VGAT (GABA transporter) markers of two distinct subpopulations of neurons which have opposite effects on the motivation to consume water.

That's it.

This paper is best described as a very small, incremental advance in understanding of thirst and drinking behavior, albeit tarted up with the pizzazz of optogenetic techniques.

Yet it was published in Nature.

Someone really needs to introduce the editorial staff of Nature to PubMed.

*BTW, a Nature editor confirms this microscopic incremental advance is what is new about this paper.

65 responses so far

Ideal Primary Data:Review Article Ratio

Jan 26 2015 Published by under Careerism, Science Publication

Odyssey is pondering review articles today. That led to a question from Dr. Becca about the ideal ratio of reviews and primary research articles.

I am not a fan of authors publishing essentially the same review in multiple journals. Nor am I a fan of the incrementally updated review published every year or two. And I am really not fond of burgeoning subfields where everyone spits out a me-too review until the reviews outnumber the primary research articles!

So, my views on this question are likely more negative than average.

19 responses so far

Thought of the Day

On the resetting of the date of original submission:

One thing it does is keep a lid on people submitting a priority placeholder before the study is even half done. I could see this as a positive step. Anything to undermine scooping culture in science is good by me.

One response so far

Are your journals permitting only one "major revision" round?

Skeptic noted the following on a prior post:

First time submitted to JN. Submitted revision with additional experiments. The editor sent the paper to a new reviewer and he/she asks additional experiments. In the editor's word, "he has to reject the paper because this was the revision."

This echoes something I have only recently heard about from a peer. Namely, that a journal editor said a manuscript was being rejected because* it is policy not to permit multiple rounds of revision after a "major revisions" decision.

The implications are curious. I have never been told by a journal editor, when asked to review a manuscript, that this is their policy.

I will, now and again, give a second recommendation for Major Revisions if I feel like the authors are not really taking my points to heart after the first round. I may even switch from Minor Revisions to Major Revisions in such a case.

Obviously, since I didn't select the "Reject" option in these cases, I didn't write my review intending my recommendation to be, in effect, a "Reject" rather than "Major Revisions".

I am bothered by this. It seems that journals are probably adopting these policies because they can, i.e., they get far more submissions than they can print. So one way to go about triaging the avalanche is to assume that manuscripts requiring more than one round of fighting over revisions can be readily discarded. But this ignores the intent of the peer reviewer to a large extent.

Well, now that I know this about two journals for which I review, I will adjust my behavior accordingly. I will understand that a recommendation of "Major Revisions" on the revised version of the manuscript will be interpreted by the Editor as "Reject" and I will supply the recommendation that I intend.

Is anyone else hearing these policies from journals in their fields?

*Having been around the block a time or two, I hypothesize that, whether stated or not, those priority ratings that peer reviewers are asked to supply have something to do with these decisions as well. The authors generally only see the comments and may have no idea that the "favorable" reviewer who didn't find much fault with the manuscript gave them a big old "booooooring" on the priority rating.

47 responses so far

Is the J Neuro policy banning Supplemental Materials backfiring?

As you will recall, I was very happy when the Journal of Neuroscience decided to ban the inclusion of any Supplemental Materials in articles considered for publication. That move took place back in 2010.

Dr. Becca, however, made the following observation on a recent post:

I'm done submitting to J Neuro. The combination of endless experiment requests due to unlimited space and no supp info...

I find that to be a fascinating comment. It suggests that perhaps the J Neuro policy has been ineffectual, or even has backfired.

To be honest, I can't recall that I have noticed anything in a J Neuro article that I've read in the past few years that reminded me of this policy shift one way or the other.

How about you, Dear Reader? Noticed any changes that appear to be related to this banning of Supplemental Materials?

For that matter, has the banning of Supplemental Materials altered your perception of the science that is published in that journal?

44 responses so far


Occasionally you notice one of your colleagues pulling off something you hope for within your own group.

When your manuscript gets rejected from one journal you would typically submit it to an approximately equal* journal next, hoping to get a more favorable mix of AE and reviewers.

If you've worked up more data that could conceivably fit with the rejected set, maybe you would submit upward, trying a journal with a better reputation.

What is slightly less usual is taking the same manuscript, essentially unrevised, and submitting it to a journal of better reputation, JIF, or what have you.

Getting that self-same manuscript accepted, essentially unchanged, is a big win.

Chapeau, my friends.

*You do this, right?

13 responses so far
