Archive for the 'Scientific Publication' category

Self plagiarism

A journal has recently retracted an article for self-plagiarism:

Just going by the titles, this appears to be a case where review or theory material is published over and over in multiple venues.

I may have complained on the blog once or twice about people in my fields of interest that publish review after thinly updated review year after year.

I've seen one or two people use this strategy, in addition to a high rate of primary research articles, to blanket the world with their theoretical orientations.

I've seen a small cottage industry do the "more reviews than data articles" strategy for decades in an attempt to budge the needle on a therapeutic modality that shows promise but lacks full financial support from, eg NIH.

I still don't believe "self-plagiarism" is a thing. To me plagiarism is stealing someone else's ideas or work and passing them off as one's own. When art critics see themes from prior work being perfected or included or echoed in the masterpiece, do they scream "plagiarism"? No. But if someone else does it, that is viewed as copying. And lesser. I see academic theoretical and even interpretive work in this vein*.

To my mind the publishing industry has a financial interest in this conflation because they are interested in novel contributions that will presumably garner attention and citations. Work that is duplicative may be seen as lesser because it divides up citation to the core ideas across multiple reviews. Given how the scientific publishing industry leeches off content providers, my sympathies are.....limited.

The complaint from within the house of science, I suspect, derives from a position of publishing fairness? That some dude shouldn't benefit from constantly recycling the same arguments over and over? I'm sort of sympathetic to this.

But I think it is a mistake to give in to the slippery slope of letting the publishing industry establish this concept of "self-plagiarism". The risk for normal science pubs that repeat methods is too high. The risks for "replication crisis" solutions are too high; after all, a substantial replication study would require duplicative introductory and interpretive comment, would it not?

__

*although "copying" is perhaps unfair and inaccurate when it comes to the incremental building of scientific knowledge as a collaborative endeavor.

8 responses so far

Citing Preprints

In my career I have cited many non-peer-reviewed sources within my academic papers. Off the top of my head this has included:

  1. Government reports
  2. NGO reports
  3. Longitudinal studies
  4. Newspaper items
  5. Magazine articles
  6. Television programs
  7. Personal communications

I am aware of at least one journal that suggests that "personal communications" should be formatted in the reference list just like any other reference, instead of the usual parenthetical comment.

It is much, much less common now but it was not that long ago that I would run into a citation of a meeting abstract with some frequency.

The entire point of citation in a scientific paper is to guide the reader to an item from which they can draw their own conclusions and satisfy their own curiosity. One expects, without having to spell it out each and every time, that a citation of a show on ABC has a certain quality to it that is readily interpreted by the reader. Interpreted as different from a primary research report or a news item in the Washington Post.

Many fellow scientists also make a big deal out of their ability to suss out the quality of primary research reports merely by the place in which they were published. Maybe even by the lab that published them.

And yet.

Despite all of this, I have seen more than one reviewer objection to citing a preprint item that has been posted to bioRxiv.

As if it is somehow misleading the reader.

How can all these above mentioned things be true, be such an expectation of reader engagement that we barely even mention it but whooooOOOAAAA!

All of a sudden the citation of a preprint is somehow unbelievably confusing to the reader and shouldn't be allowed.

I really love* the illogical minds of scientists at times.

26 responses so far

Time to N-up!

May 02 2018 Published under Science Publication, Scientific Publication

Chatter on the Twitts today brought my attention to a paper by Weber and colleagues that had a rather startlingly honest admission.

Weber F, Hoang Do JP, Chung S, Beier KT, Bikov M, Saffari Doost M, Dan Y. Regulation of REM and Non-REM Sleep by Periaqueductal GABAergic Neurons. Nat Commun. 2018 Jan 24;9(1):354. doi: 10.1038/s41467-017-02765-w.

If you page all the way down to the end of the Methods of this paper, you will find a statement on sample size determination. I took a brief stab at trying to find the author guidelines for Nature Communications, because a standalone statement of how the sample size was arrived at is somewhat unusual to me. Not that I object; I just don't find this to be common in the journal articles that I read. I was unable to locate it quickly so... moving along to the main point of the day. The statement reads, in part:

Sample sizes

For optogenetic activation experiments, cell-type-specific ablation experiments, and in vivo recordings (optrode recordings and calcium imaging), we continuously increased the number of animals until statistical significance was reached to support our conclusions.

Wow. WOW!

This flies in the face of everything I have ever understood about proper research design. In the Research Design 101 approach, you determine* your ideal sample size in advance. You collect your data in essentially one go and then you conduct your analysis. You then draw your conclusions about whether the collected data support, or fail to support, rejection of a null hypothesis. This can then allow you to infer things about the hypothesis that is under investigation.

In the real world, we modify this a bit. And what I am musing on today is why some of the ways that we stray from Research Design orthodoxy are okay and some are not.

We talk colloquially about finding support for (or against) the hypothesis under investigation. We then proceed to discuss the results in terms of whether they tend to support a given interpretation of the state of the world or a different interpretation. We draw our conclusions from the available evidence- from our study and from related prior work. We are not, I would argue, supposed to be setting out to find the data that "support our conclusions" as mentioned above. It's a small thing and may simply reflect poor expression of the idea. Or it could be an accurate reflection that these authors really set out to do experiments until the right support for a priori conclusions has been obtained. This, you will recognize, is my central problem with people who say that they "storyboard" their papers. It sounds like a recipe for seeking support, rather than drawing conclusions. This way lies data fakery and fraud.

We also, importantly, make the best of partially successful experiments. We may conclude that there was such a technical flaw in the conduct of the experiment that it is not a good test of the null hypothesis. And essentially treat it in the Discussion section as inconclusive rather than a good test of the null hypothesis.

One of those technical flaws may be the failure to collect the ideal sample size, again as determined in advance*. So what do we do?

So one approach is simply to repeat the experiment correctly. To scrap all the prior data, put fixes in place to address the reasons for the technical failure, and run the experiment again. Even if the technical failure hit only a part of the experiment. If it affected only some of the "in vivo recordings", for example. Orthodox design mavens may say it is only kosher to re-run the whole shebang.

In the real world, we often have scenarios where we attempt to replace the flawed data and combine it with the good data to achieve our target sample size. This appears to be more or less the space in which this paper is operating.

"N-up". Adding more replicates (cells, subjects, what have you) until you reach the desired target. Now, I would argue that re-running the experiment with the goal of reaching the target N that you determined in advance* is not that bad. It's the target. It's the goal of the experiment. Who cares if you messed up half of them every time you tried to run the experiment? Where "messed up" is some sort of defined technical failure rather than an outcome you don't like, I rush to emphasize!

On the other hand, if you are spamming out low-replicate "experiments" until one of the scenarios "looks promising", i.e. looks to support your desired conclusions, and then selectively "n-up" that particular experiment, well, this seems over the line to me. It is much more likely to result in false positives. Well, I suppose any single trial experiment run at full power would be just as likely to come up a false positive; it is just that you are not able to run as many trial experiments at full power. So I would argue the sheer number of potential experiments is greater for the low-replicate, n-up-if-promising approach.

These authors appear to have taken this strategy one step worse. Their target is not just an a priori determined sample size to be pursued only when the pilot "looks promising". In this case they take the additional step of running replicates only up to the point where they reach statistical significance. And this seems like an additional way to get an extra helping of false-positive results to me.

Anyway, you can google up information on false positive rates and p-hacking and all that to convince yourself of the math. I was more interested in trying to probe why I got such a visceral feeling that this was not okay. Even if I personally think it is okay to re-run an experiment and combine replicates (subjects in my case) to reach the a priori sample size if it blows up and you have technical failure on half of the data.
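For those who would rather see the math than google it, here is a minimal stdlib-only simulation of the stop-when-significant rule. Everything in it is my own construction, not anything from the paper: I use a two-sample z-test with known variance for simplicity, and the starting n, maximum n, and alpha are arbitrary illustrative choices. Both groups are drawn from the same distribution, so every "significant" result is a false positive.

```python
import math
import random
import statistics

def z_test_p(xs, ys):
    """Two-sided p-value for a two-sample z-test, assuming known sigma = 1."""
    se = math.sqrt(1 / len(xs) + 1 / len(ys))
    z = (statistics.mean(xs) - statistics.mean(ys)) / se
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

def simulate(n_runs=2000, start_n=3, max_n=30, alpha=0.05, seed=42):
    """Compare false-positive rates: test-after-every-animal vs one fixed-n test.

    Both groups are sampled from N(0, 1), so the true effect is zero and
    any 'significant' outcome is a false positive.
    """
    rng = random.Random(seed)
    seq_hits = fixed_hits = 0
    for _ in range(n_runs):
        # Sequential rule: add one animal per group and re-test until p < alpha.
        xs = [rng.gauss(0, 1) for _ in range(start_n)]
        ys = [rng.gauss(0, 1) for _ in range(start_n)]
        while True:
            if z_test_p(xs, ys) < alpha:
                seq_hits += 1
                break
            if len(xs) >= max_n:
                break
            xs.append(rng.gauss(0, 1))
            ys.append(rng.gauss(0, 1))
        # Orthodox rule: collect the full sample once, test once.
        xs2 = [rng.gauss(0, 1) for _ in range(max_n)]
        ys2 = [rng.gauss(0, 1) for _ in range(max_n)]
        if z_test_p(xs2, ys2) < alpha:
            fixed_hits += 1
    return seq_hits / n_runs, fixed_hits / n_runs
```

With these settings the sequential rule declares a "significant" group difference several times more often than the fixed-n design, even though there is nothing there to find: the fixed-n rate sits near the nominal 5%, while testing after every added animal inflates it far above that.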

__
*I believe the proper manner for determining sample size is entirely apart from the error the authors have admitted to here. This isn't about failing to complete a power analysis or the like.

27 responses so far

Ludicrous academics for $200, Alex

Just when I think I will not find any more ridiculous things hiding in academia.....

A recent thread on twitter addressed a population of academics (not sure if it was science) who are distressed when the peer review of their manuscripts is insufficiently vigorous/critical.

This is totally outside of my experience. I can't imagine ever complaining to an Editor of a journal that the review was too soft after getting an accept or invitation to revise.

People are weird though.

5 responses so far

Question of the Day

How do you assess whether you are too biased about a professional colleague and/or their work?

In the sense that you would self-elect out of reviewing either their manuscripts for publication or their grant applications.

Does your threshold differ for papers versus grants?

Do you distinguish between antipathy bias and sympathy bias?

8 responses so far

Creative artists and the writing of scientific manuscripts

I am a consumer of the creative arts and, really, have always been in awe of creative artists. Looking back chronologically over my lifetime, my greatest consumption and appreciation has been fiction writing, music and cartooning (particularly the political variety). I'm not a big fan of flat art (sculpture speaks to me much more) but I am definitely amazed by what some people can paint, draw and the like. I do like moving picture arts but I don't think I have any particular sense of awe for them as a craft and certainly not for the participants as creative artists*. I get that others can see this, however.

Anyway, the creative artists are amazing to me.

A couple of days ago it occurred to me that understanding the process of creative arts might help cross what I find to be a somewhat frustrating bridge in training other people to write scientific manuscripts.

Sidebar: I am pretty sure we've discussed related topics before on the blog, but I can't remember when so I'm probably going to repeat myself.

When I first started to write scientific manuscripts I quite reasonably suffered the misunderstanding that you sort of did the experiments you planned and then wrote them (all of them) up in chronological order and badda boom, published it somewhere. That is because, I assume, many scientific manuscripts read as if that is how they were created. And there are probably some aspects of "Research Design 101" instruction that convince young scientists that this is the way things work.

Then, when it is your own work, there are two additional factors that press down and shape your writing process. First, a sense of both pride and entitlement for your effort which tells your brain that surely every damn thing you worked on needs to fuel a publication. Second, a sense that writing is hard and you want to know in advance exactly what to write so that no effort is wasted.

"Wasted".

And this is where the creative arts come in.

Now, I've never lived cheek by jowl with a creative artist and I am only superficially familiar with what they do. But I am pretty convinced it is an iterative, inefficient process. Flat art folks seem to sketch. A lot. They work on an eye. An expression. A composition. A leg. Apple. Pair of shoes. Novelists and short story authors work on themes. Characters. Plot elements. They write and tear their hair out. Some of this is developing skill, sure, but much of this for a reasonably mature creative person is just working the job. They create copious amounts of material that is only leading up to the final product.

And the final product, I surmise, is built from the practice elements. A plot or character for a story. A curve of a mouth for a portrait. Melody. Chord progressions. A painted sunbeam. The artist starts stitching together a complete work out of elements.

I think you need to get into this mode as a scientist who is writing up manuscripts.

We stitch together a work out of elements as well. Now in our case, the elements are not made up. They are data. That we've collected. And we spend a heck of a lot of time on the quality of those elements. But eventually, we need to tell a story from those parts.

N.b. This is not storyboarding. Storyboarding is setting out the story you want to tell and then later going out and creating the elements (aka, figures) that you need to tell this particular story. That way lies fraud.

The creative process is looking at the elements of truth that you have available to you, from your labors to create good data, and then trying to see how they fit together into a story.

The transition that one has to make as a scientist is the ability to work with the elements, put in serious labor trying to fit them together, and then being willing to scrap the effort and start over. I think that if you don't get in there and do the work writing, writing, writing and analyzing and considering what the data are telling you, you make less progress.

Because the alternative is paralyzing. The alternative is that you keep putting off the creative process until something tells you how to write "efficiently". Maybe it is that you are waiting for just the right experimental result to clarify a murky situation. Maybe you are waiting for your PI or collaborator or fellow trainee to tell you what to do, what to write, how to structure the paper.

I suppose it may look like this to a relatively inexperienced writer of manuscripts? That it's a bit daunting, and that if only the PI would say the right words, it would somehow be magically easy to "efficiently" write up the paper in the right way that she expects?

When I hear generic muttering from trainees about frustration with insufficient feedback from a mentor I sometimes wonder if this is the problem. An over-expectation of specific direction on what to write, how to write it, and what the story is.

The PI, of course, wants the trainee to take their own shot at telling the story. Whereupon they will promptly red pen the hell out of all that "work" and tell the trainee to rewrite most of it and take a totally different tack. Oh, and run these two more experiments. And then the trainee wonders "why didn't my PI tell me what she wanted in the first place instead of wasting my time??? GAh, I have the worst possible mentor!"

I realized within the past year or so that I have the same problem that I have criticized on the blog for years now. I tell new professors that they need to get away from the bench as quickly as possible and that this is not their job anymore. I tell them they have to find a way to get productivity out of their staff and that doing experiments is not their job anymore. I never had this problem as a transitioning scientist...I was fine getting away from the bench**.

But my equivalent is data analysis. And I'm not talking high falutin' stuff that only I can do, either. I want to see the data! Study by study. As it rolls in, even. I want to examine it, roll around in it. Create graphs and run some stats. Think about what it means and how it fits into my developing understanding of a research direction in our laboratory. I can't wait to think about how this new figure might fit into one of our ongoing creative works... i.e., a manuscript.

I cannot give it up.

I create a lot of sketches, half plotted stories and cartoon panels. Elements. Themes. Drafts.

Many of these will never go into any published manuscript. If lucky, some of these building blocks will make their way into a slide presentation or into a grant as preliminary data. I never feel as though the effort is wasted, however. Making these bits and pieces is, to me, what allows me to get from here to there. From blank page to published manuscript.

Ideally, as I am supposedly training people to become independent scientists, I would like to train them to do this in the way that I do. And to get there, I have to get them across the hurdle of the creative artist. I have to get them to see that just rolling up your sleeves and doing the work is a necessary part of the process. You cannot be told a route, or receive a Revelation, that makes the process of creating a scientific manuscript efficient. You have to work on the elements. Make the sketches. Flesh out the plotlines.

And then be willing to scrap a bunch of "work" because it is not helping you create the final piece.

__
*I have a friend who is behind the camera on teevee shows. Big name teevee shows that you've heard of and watch. I see his work and I'm not really Seeing. His. Work. But this guy casually takes a few vacation pictures and I'm amazed at his eye, composition, etc. He doesn't seem to even consider himself a still camera artist; acts like he considers himself barely a hobbyist at that! So clearly I'm missing something about moving picture photography.

**I'm not actually a bench scientist, the ~equivalent.

8 responses so far

Theological waccaloons win because they are powered by religious fervor and exhaust normal people

Feb 14 2018 Published under Open Access, Peer Review, Scientific Publication

Some self-congratulatory meeting of the OpenAccess Illuminati* took place recently and a summary of takeaway points has been posted by Stephen Curry (the other one).

These people are exhausting. They just keep bleating away with their talking points and refuse entirely to ever address the clear problems with their plans.

Anonymous peer review exists for a reason.

To hear them tell it, the only reason is so hateful incompetent reviewers can prevent their sterling works of genius from being published right away.

This is not the reason for having anonymous peer review in science.

Their critics regularly bring up the reason we have anonymous peer review and the virtues of such an approach. The OA Illuminati refuse to address this. At best they will vaguely acknowledge their understanding of the issue and then hand wave about how it isn't a problem just ...um...because they say so.

It's also weird that 80%+ of their supposed problems with peer review as we know it are attributable to their own participation in the Glamour Science game. Some of them also see problems with GlamHumping but they never connect the dots to see that Glamming is the driver of most of their supposed problems with peer review as currently practiced.

Which tells you a lot about how their real goals align with the ones that they talk about in public.

Edited to add:
Professor Curry weighed in on twitter to insist that the goal is not to force everyone to sign reviews. See, his plan allows people to opt out if they choose. This is probably even worse for the goal of getting an even-handed and honest review of scientific papers. And, even more tellingly, it designs the experiment so that it cannot do anything other than provide evidence in support of their hypothesis. Neat trick.

Here's how it will go down. People will sign their reviews when they have "nice, constructive" things to say about the paper. BSDs, who are already unassailable and are the ones self-righteously saying they sign all their reviews now, will continue to feel free to be dicks. And the people** who feel that attaching their name to their true opinion carries risk will still feel pressure. To not review, to soft-pedal and sign, or to supply an unsigned but critical review. All of this is distorting.

Most importantly for the open-review fans, it will generate a record of signed reviews that seem wonderfully constructive or deserved (the Emperor's, sorry BSDs, critical pants are very fine indeed) and a record of seemingly unconstructive critical unsigned reviews (which we can surely dismiss because they are anonymous cowards). So you see? It proves the theory! Open reviews are "better" and anonymous reviews are mean and unjustified. It's a can't-miss bet for these people.

The choice to not-review is significant. I know we all like to think that "obvious flaws" would occur to anyone reading a paper. That's nonsense. Having been involved in manuscript and grant review for quite some time now, I am here to tell you that the assigned reviewers (typically 3) all provide unique insight. Sometimes during grant review other panel members see things the three assigned people missed, and in manuscript review the AE or EIC sees something. I'm sure you could run parallel sets of three reviewers and it would take quite a large sample before every single concern had been identified. Compare this experience to the number of comments that are made in all of the various open-commenting systems (the PubMed Commons commenting system was just shuttered for lack of general interest, by the way) and we simply cannot believe claims that any reviewer can be omitted*** with no loss of function. Not to mention the fact that open commenting systems are just as subject to the above discussed opt-in problems as are signed official review systems of peer review.
__
*hosted at HHMI headquarters which I’m sure tells us nothing about the purpose

**this is never an all-or-none associated with reviewer traits. It will be a manuscript-by-manuscript choice process which makes it nearly impossible to assess the quelling and distorting effect this will have on high quality review of papers.

***yes, we never have an overwhelmingly large sample of reviewers. The point here is the systematic distortion.

33 responses so far

NIH encourages pre-prints

In March of 2017 the NIH issued a notice on Reporting Preprints and Other Interim Research Products (NOT-OD-17-050): "The NIH encourages investigators to use interim research products, such as preprints, to speed the dissemination and enhance the rigor of their work."

The key bits:

Interim Research Products are complete, public research products that are not final.

A common form is the preprint, which is a complete and public draft of a scientific document. Preprints are typically unreviewed manuscripts written in the style of a peer-reviewed journal article. Scientists issue preprints to speed dissemination, establish priority, obtain feedback, and offset publication bias.

Another common type of interim product is a preregistered protocol, where a scientist publicly declares key elements of their research protocol in advance. Preregistration can help scientists enhance the rigor of their work.

I am still not happy about the reason this happened (i.e., Glam hounds trying to assert scientific priority in the face of the Glam Chase disaster they themselves created) but this is now totally beside the point.

The NIH policy (see OpenMike blog entry for more) has several implications for grant seekers and grant holders which are what form the critical information for your consideration, Dear Reader.

I will limit myself here to materials that are related to standard paper publishing. There are also implications for materials that would never be published (computer code?) but that is beyond the scope for today's discussion.

At this point I will direct you to bioRxiv and PsyRxiv if you are unfamiliar with some of the more popular approaches for pre-print publication of research manuscripts.

The advantages to depositing your manuscripts in pre-print form are all about priority and productivity, in my totally not humble opinion. The former is why the Glamour folks are all a-lather, but priority and scooping affect all of us a little differently. As most of you know, scooping and priority are not a huge part of my professional life but, all things equal, it's better to get your priority on record. In some areas of science it is career making/breaking and grant getting/rejecting to establish scientific priority. So if this is a thing for your life, this new policy allows and encourages you to take advantage.

I'm more focused on productivity. First, this is an advantage for trainees. We've discussed the tendency of new scientists to list manuscripts "in preparation" on their CV or Biosketch (for fellowship applications, say, despite it being technically illegal). This designation is hard to evaluate. A nearing-defense grad student who has three "in prep" manuscripts listed on the CV can appear to be bullshitting you. I always caution people that if they list such things they had better be prepared to send a prospective post-doc supervisor a mostly-complete draft. Well, now the pre-print allows anyone to post "in preparation" drafts so that anyone can verify the status. Very helpful for graduate students who have a short timeline versus the all too typical cycle of submission/rejection/resubmission/revision, etc. More importantly, the NIH previously frowned on listing "in preparation" or "in review" items on the Biosketch. This was never going to result in an application being returned unreviewed but it could sour the reviewers. And of course any rule followers out there would simply not list any such items, even if there was a minor revision being considered. With pre-print deposition and the ability to list on a NIH biosketch and cite in the Research Plan there is no longer any vaporware type of situation. The reviewer can look at the pre-print and judge the science for herself.

This applies to junior PIs as well. Most likely, junior PIs will have fewer publications, particularly from their brand new startup labs. The ability of the PI to generate data from her new independent lab can be a key issue in grant review. As with the trainee, the cycle of manuscript review and acceptance is lengthy compared with the typical tenure clock. And of course many junior PIs are trying to balance JIF/Glam against this evidence of independent productivity. So pre-print deposition helps here.

A very similar situation can apply to us not-so-junior PIs who are proposing research in a new direction. Sure, there is room for preliminary data in a grant application but the ability to submit data in manuscript format to the bioRxiv or some such is unlimited! Awesome, right?

15 responses so far

Does it matter how the data are collected?

Commenter jmz4 made a fascinating comment on a prior post:


It is not the journals responsibility to mete out retractions as a form of punishment(&). Only someone that buys into papers as career accolades would accept that. The journal is there to disseminate accurate scientific information. If the journal has evidence that, despite the complaint, this information is accurate,(%) then it *absolutely* should take that into account when deciding to keep a paper out there.

(&) Otherwise we would retract papers from leches and embezzlers. We don't.

That prior post was focused on data fraud, but this set of comments suggest something a little broader.

I.e., that facts are facts and it doesn't matter how we have obtained them.

This, of course, brings up the little nagging matter of the treatment of research subjects. As you are mostly aware, Dear Readers, the conduct of biomedical experimentation that involves human or nonhuman animal subjects requires an approval process. Boards of people external to the immediate interests of the laboratory in question must review research protocols in advance and approve the use of human (Institutional Review Board; IRB) or nonhuman animal (Institutional Animal Care and Use Committee; IACUC) subjects.

The vast majority (ok, all) journals of my acquaintance require authors to assert that they have indeed conducted their research under approvals provided by IRB or IACUC as appropriate.

So what happens when and if it is determined that experiments have been conducted outside of IRB or IACUC approval?

The position expressed by jmz4 is that it shouldn't matter. The facts are as they are, the data have been collected so too bad, nothing to be done here. We may tut-tut quietly but the papers should not be retracted.

I say this is outrageous and nonsense. Of course we should apply punitive sanctions, including retracting the paper in question, if anyone is caught trying to publish research that was not collected under proper ethical approvals and procedures.

In making this decision, the evidence for whether the conclusions are likely to be correct or incorrect plays no role. The journal should retract the paper to remove the rewards and motivations for operating outside of the rules. Absolutely. Publishers are an integral part of the integrity of science.

The idea that journals are just there to report the facts as they become known is dangerous and wrong.

__
Additional Reading: The whole board of Sweden's top-ranked university was just sacked because of the Macchiarini scandal

13 responses so far

No, Cell, the replication does not have bearing on the original fraud

Sep 12 2016 Published under Scientific Misconduct, Scientific Publication

Via the usual relentless trolling of YHN from Comrade PhysioProffe, a note on a fraud investigation from the editors of Cell.

We, the editors of Cell, published an Editorial Expression of Concern (http://dx.doi.org/10.1016/j.cell.2016.03.038) earlier this year regarding issues raised about Figures 2F, 2H, and 3G of the above article.
...
two labs have now completed their experiments, and their data largely confirm the central conclusions drawn from the original figures. Although this does not resolve the conflicting claims, based on the information available to us at this time, we will take no further action. We would like to thank the independent labs who invested significant time and effort in ensuring the accuracy of the scientific record.

Bad Cell. BAD!

We see this all the time, although usually it is the original authors, aided and abetted by the journal Editors, rather than the journal itself, making this claim. No matter if it is a claim to replace an "erroneous placeholder figure", or a full-on retraction by the "good" authors for fraud perpetrated by some [nonWestern] postdoc who cannot be located anymore, we see an attempt to maintain the priority claim. "Several labs have replicated and extended our work", is how it goes if the paper is an old one. "We've replicated the bad [nonWestern, can't be located] postdoc's work" if the paper is newer.

I say "aided and abetted" because the Editors have to approve the language of the authors' erratum, corrigendum or retraction notice. They permit this. Why? Well obviously because just as the authors need to protect their reputation, so does the journal.

So everyone plays this game that somehow proving the original claims were correct, reliable or true means that the original offense is lesser. And that the remaining "good" authors and the journal should get credited for publishing it.

I say this is wrong. If the data were faked, the finding was not supported, or at least not supported to the degree that it would have been accepted for publication in that particular journal. And therefore there should be no credit for the work.

We all know that there is a priority and Impact Factor chase in certain types of science. Anything published in Cell quite obviously qualifies for the most cutthroat aspects of this particular game. Authors and editors alike are complicit.

If something is perceived to be hott stuff, both parties are motivated to get the finding published. First. Before those other guys. So...corners are occasionally cut. Authors and Editors both do this.

Rewarding the high risk behavior that leads to such retractions and frauds is not a good thing. While I think punishing proven fraudsters is important, it does not by any means go far enough.

We need to remove the positive reward environment. Look at it this way: if you intentionally fake data, or more likely subsets of the data, to get past that final review hurdle into a Cell acceptance, you are probably not very likely to get caught. If you are detected, it will often take years for this to come to light, particularly when it comes to a proven-beyond-doubt standard. In the meantime, you have enjoyed all the career benefits of that Glamour paper. Job offers for the postdocs. Grant awards for the PIs. Promotions. High-$$ recruitment or retention packages. And generated even more Glam studies. So in the somewhat unlikely case of being busted for the original fake, many of the beneficiaries, save the poor sucker nonWestern postdoc (who cannot be located), are able to defend and evade based on stature.

This gentleman's agreement to view faked results that happen to replicate as no-harm, no-foul is part of this process. It encourages faking and fraud. It should be stopped.

One more interesting part of this case: it was actually raised by the self-confessed cheater!

Yao-Yun Liang of the above article informed us, the Cell editors, that he manipulated the experiments to achieve predetermined results in Figures 2F, 2H, and 3G. The corresponding author of the paper, Xin-Hua Feng, has refuted the validity of Liang’s claims, citing concerns about Liang’s motives and credibility. In a continuing process, we have consulted with the authors, the corresponding author’s institution, and the Committee on Publication Ethics (COPE), and we have evaluated the available original data. The Committee on Scientific Integrity at the corresponding author’s institution, Baylor College of Medicine, conducted a preliminary inquiry that was inconclusive and recommended no further action. As the institution’s inquiry was inconclusive and it has been difficult to adjudicate the conflicting claims, we have provided the corresponding author an opportunity to arrange repetition of the experiments in question by independent labs.

Kind of reminiscent of the recent case where the trainee and lab head had counter claims against each other for a bit of fraudulent data, eh? I wonder if Liang was making a similar assertion to that of Dr. Cohn in the Mt. Sinai case, i.e., that the lab head created a culture of fraud or directly requested the fake? In the latter case, it looked like they probably only came down on the PI because of a smoking-gun email and the perceived credibility of the witnesses. Remember that ORI refused to take up the case so there probably was very little hard evidence on which to proceed. I'd bet that an inability to get beyond "he-said/he-said" is probably at the root of Baylor's "inconclusive" preliminary inquiry result for this Liang/Feng dispute.

33 responses so far
