Archive for the 'Conduct of Science' category

Completely uncontroversial PI comment on paper writing

Sep 29 2016 Published under Careerism, Conduct of Science

Go!

18 responses so far

The NIH has shifted from being an investor in research to a consumer of research

WOW. This comment from dsks absolutely nails it to the wall.

The NIH is supposed to be taking on a major component of the risk in scientific research by playing the role of investor; instead, it seems to operate more as a consumer, treating projects like products to be purchased only when complete and deemed sufficiently impactful. In addition to implicitly encouraging investigators to flout rules like that above, this shifts most of the risk onto the shoulders of the investigator, who must use her existing funds to spin the roulette wheel and hope that the projects her lab is engaged in will be both successful and yield interesting answers. If she strikes it lucky, there's a chance of recouping the cost from the NIH. However, if the project is unsuccessful, or successful but produces one of the many not-so-pizzazz-wow answers, the PI's investment is lost, and at a potentially considerable cost to her career if she's a new investigator.

Of course one might lessen the charge slightly by observing that it is really the University that is somehow investing in the exploratory work that may eventually become of interest to the buyer. Whether the University then shifts the risk onto the lowly PI is a huge concern, but not inevitable. They could continue to provide seed money, salary, etc. to a professor who does not manage to write a funded grant application.

Nevertheless, this is absolutely the right way to look at the ever-growing obligation for highly specific Preliminary Data to support any successful grant application. It is also the way to look at a study section culture that is motivated in large part by perceived "riskiness" (which underlies a large part of the failure to reward untried investigators from unknown Universities compared with established PIs from coastal elite institutions).

NIH isn't investing in risky science. It is purchasing science once it looks like most of the real risk has been avoided.

I have never seen this so clearly, so thanks to dsks for expressing it.

38 responses so far

Professor fired for misconduct shoots Dean

From the NYT account of the shooting of Dennis Charney:

A former faculty member at the Mount Sinai School of Medicine... , Hengjun Chao, 49, of Tuckahoe, N.Y., was charged with attempted second-degree murder after he allegedly fired a shotgun and hit two men

Why? Presumably revenge for:

In October 2002, Mr. Chao joined Mount Sinai as a research assistant professor. He stayed at Mount Sinai until May 2009, when he received a letter of termination from Dr. Charney for “research misconduct,” according to a lawsuit that Mr. Chao filed against the hospital and Dr. Charney, among other parties, in 2010. He went through an appeals process, and was officially terminated in March 2010.

As you might expect, the Retraction Watch blog has some more fascinating information on this case. One notable bit is the fact that ORI declined to pursue charges against Dr. Chao.

The Office of Research Integrity (ORI) decided not to pursue findings of research misconduct, according to material filed in the case and mentioned in a judge’s opinion on whether Chao could claim defamation by Mount Sinai. Part of Chao’s defamation claim was based on a letter from former ORI investigator Alan Price calling Mount Sinai’s investigation report “inadequate, seriously flawed and grossly unfair in dealing with Dr. Chao.”

Interesting! The institution goes to the effort of firing the guy and manages to fight off a countersuit, and ORI still doesn't have enough to go on? Retraction Watch posted the report on the Mount Sinai misconduct investigation [PDF]. It makes the case a little clearer.

To briefly summarize: Dr. Chao first alleged that a postdoc, Dr. Cohn, fabricated research data. An investigation failed to support the charge and Dr. Chao withdrew his complaint. Perhaps (?) as part of that review, Dr. Cohn submitted an allegation that Dr. Chao had directed her to falsify data; this was supported by an email and third-party testimony from a colleague. Mount Sinai mounted an investigation and interviewed a bunch of people with Dr. titles, some of whom are co-authors with Dr. Chao according to PubMed.

The case is said to hinge on the credibility of the interviewees. "There was no 'smoking gun' direct evidence... the allegations... represent the classic 'he-said, she-said' dispute". The report notes that only the above-mentioned email trail supports any of the allegations with hard evidence.

Ok, so that might be why ORI declined to pursue the case against Dr. Chao.

The panel found him to be "defensive, remarkably ignorant about the details of his protocol and the specifics of his raw data, and cavalier with his selective memory... he made several overbroad and speculative allegations of misconduct against Dr. Cohn without any substantiation"

One witness testified that Dr. Chao had said "[Dr. Cohn] is a young scientist [and] doesn't know how the experiments should come out, and I in my heart know how it should be."

This is kind of a classic sign of a PI who creates a lab culture that encourages data faking and fraud, if you ask me. Skip down to the end for more on this.

There are a number of other allegations of a specific nature. Dropping later timepoints of a study because they were counter to the hypothesis. Publishing data that dropped some of the mice for no apparent reason. Defending low-n (2!) data by saying he was never trained in statistics, but his postdoc mentor contradicted this claim. And finally, the committee decided that Dr. Chao's original complaint filed against Dr. Cohn was a retaliatory action stemming from an ongoing dispute over science, authorship, etc.

The final conclusion in the recommendations section deserves special attention:

"[Dr. Chao] promoted a laboratory culture of misconduct and authoritarianism by rewarding results consistent with his theories and berating his staff if the results were inconsistent with his expectations."

This, my friends, is the final frontier. Every time I see an underling in a lab busted for serial faking, I wonder about this. Sure, any lab can be penetrated by a data-faking sleaze. And it is very hard to both run a trusting, collaborative scientific environment and still be 100 percent sure of preventing the committed scofflaws. But... but... I am here to tell you: a lot of data fraud flows from PIs of just exactly this description.

If the PI does it right, their hands are entirely clean. Heck, in some cases they may have no idea whatsoever that they are encouraging their lab to fake data.

But the PI is still the one at fault.

I'd hope that every misconduct investigation against anyone below the PI level looks very hard into the culture that is encouraged and/or perpetrated by the PI of the lab in question.

28 responses so far

Personal jihads and distinguishing better/worse science from wrong science

Jul 22 2016 Published under Conduct of Science

This is relevant to posts by a Russ Poldrack flagellating himself for apparent methodological lapses in fMRI analysis.

The fun issue is as summarized in the recent post:

Student [commenter on original post] is exactly right that I have been a coauthor on papers using methods or reporting standards that I now publicly claim to be inappropriate. S/he is also right that my career has benefited substantially from papers published in high profile journals using these methods that I now claim to be inappropriate. ... I am in agreement that some of my papers in the past used methods or standards that we would now find problematic... I also appreciate Student's frustration with the fact that someone like myself can become prominent doing studies that are seemingly lacking according to today's standards, but then criticize the field for doing the same thing.

I made a few comments on the Twitts to the effect that this is starting to smell of odious ladder-pulling behavior.

One key point from the original post:

I would note that points 2-4 were basically standard practice in fMRI analysis 10 years ago (and still crop up fairly often today).

And now let us review the original critiques to which he is referring:

  • There was no dyslexic control group; thus, we don't know whether any improvements over time were specific to the treatment, or would have occurred with a control treatment or even without any treatment.
  • The brain imaging data were thresholded using an uncorrected threshold.
  • One of the main conclusions (the "normalization" of activation following training) is not supported by the necessary interaction statistic, but rather by a visual comparison of maps.
  • The correlation between changes in language scores and activation was reported for only one of the many measures, and it appeared to have been driven by outliers.
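
Two of these critiques (the uncorrected threshold, the missing interaction test) are garden-variety statistical traps rather than fMRI arcana. Here is a minimal, hypothetical Python sketch of the first one; the subject and voxel counts are made up for illustration and none of this comes from Poldrack's post or the original paper.

    # Pure-noise "brain" data: no true activation anywhere.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects, n_voxels = 20, 100_000   # illustrative numbers only

    data = rng.standard_normal((n_subjects, n_voxels))

    # One-sample t-test at every voxel.
    t = data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(n_subjects))
    p = 2 * stats.t.sf(np.abs(t), df=n_subjects - 1)

    # An uncorrected p < .001 threshold "finds" ~100 active voxels in
    # pure noise; a Bonferroni-corrected threshold finds essentially none.
    print((p < 0.001).sum())            # expect roughly 100 false positives
    print((p < 0.05 / n_voxels).sum())  # expect roughly 0

The interaction critique is the same species of error: showing that the treated group improved significantly while a comparison condition did not is not itself a test of the difference; the group-by-time interaction has to be tested directly.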

As I have mentioned on more than one occasion, I am one who finds value in the humblest papers and in the single reported experiment. Oftentimes it is such tiny, tiny threads of evidence that help our science, and the absence of any information whatsoever on something that hinders us.

I find myself mostly able to determine whether the proper controls were used. More importantly, I find myself more swayed by the strength of the data and the experiment presented than I am by the claims made in the Abstract or Discussion about the meaning of the reported work. I'd rather be in a state of "huh, maybe this thing might be true (or false), pending these additional controls that need to be done" than a state of "dammit, why is there no information whatsoever on this thing I want to know about right now".

Yes, absolutely, I think that there are scientific standards that should be generally adhered to. I think the PSY105: Experimental Design (or similar) principles regarding the perfect experiment should be taken seriously... as aspirations.

But I think the notion that you "can't publish that" because of some failure to attain the Gold Plated Aspiration of experimental design is stupid and harmful to science as a hard and fast rule. Everything, but everything, should be reviewed intelligently and thoughtfully by the peers considering a manuscript for publication. In essence, taken on its merits. This is much as I take any published data on their own merits when deciding what I think they mean.

This is particularly the case when we start to think about the implications for career arcs and the limited resources that affect our business.

It is axiomatic that not everyone has the same interests, approaches and contingencies that affect their publication practices. This is a good thing, btw. In diversity there is strength. We've talked most recently around these parts about LPU incrementalism versus complete stories. We've talked about rapid vertical ascent versus riff-raff. Open Science Eleventy versus normal people. The GlamHounds versus small town grocers. ...and we almost invariably start in on how subfields differ in any of these discussions. etc.

Threaded through many of these conversations is the notion of gatekeeping. Of defining who gets to play in the sandbox on the basis of certain standards for how they conduct their science. What tools they use. What problems they address. What journals are likely to take their work for publication.

The gates control the entry to paper publication, job appointment and grant funding, among other things. You know, really frickin important stuff.

Which means, in my not at all humble opinion, that we should think pretty hard about our behavior when it touches on this gatekeeping.

We need to be very clear on when our jihadist "rules" for how science needs to be done separate right from wrong versus mere personal preference.

I do agree that we want to keep the flagrantly wrong out of the scientific record. Perhaps this is the issue with the triggering post on fMRI, but the admission that these practices still continue casts some doubt in my mind. It seems more like a personal preference. Or a jihad.

I do not agree that we need to put in strong controls so that all of science adheres to our personal preferences. Particularly when our personal preferences are for laziness and reflect our unwillingness to synthesize multiple papers or to think hard about the nature of the evidence behind the Abstract's claim. Even more so when our personal preferences really are coming from a desire to winnow a competitive field and make our own lives easier by keeping out the riff-raff.

9 responses so far

The many benefits of the LPU

A Daniel Sarewitz wrote an opinion piece in Nature a while back to argue that the pressure to publish regularly has driven down the quality of science. Moreover, he claims to have identified

...a destructive feedback between the production of poor-quality science, the responsibility to cite previous work and the compulsion to publish.

Sarewitz ends with an exhortation of sorts. To publish much less.

Current trajectories threaten science with drowning in the noise of its own rising productivity, a future that Price described as “senility”. Avoiding this destiny will, in part, require much more selective publication. Rising quality can thus emerge from declining scientific efficiency and productivity. We can start by publishing less, and less often, whatever the promotional e-mails promise us.

Interestingly, this "Price" he seemingly follows in thought wrote in 1963, long before modern search engines were remotely conceivable, and was, as Sarewitz himself observes, "an elitist".

Within a couple of generations, [Derek de Solla Price] said, it would lead to a world in which “we should have two scientists for every man, woman, child, and dog in the population”. Price was also an elitist who believed that quality could not be maintained amid such growth. He showed that scientific eminence was concentrated in a very small percentage of researchers, and that the number of leading scientists would therefore grow much more slowly than the number of merely good ones, and that would yield “an even greater preponderance of manpower able to write scientific papers, but not able to write distinguished ones”.

Price was worried about "distinguished", but Sarewitz has adopted this elitism to claim that pressure to publish is actually causing the promotion of mistaken or bad science. And so we should all, I surmise, slow down and publish less. It is unclear what Sarewitz thinks about the "merely good" scientists identified by Price and whether they should be driven away or not. While not explicitly stated, this piece does have a whiff of ol' Steve McKnight's complaints about the riff-raff to it.

Gary S. McDowell and Jessica K. Polka wrote in to observe that slowing the pace of publication is likely to hurt younger scientists who are trying to establish themselves.

In today's competitive arena, asking this of scientists — particularly junior ones — is to ask them to fall on their swords.

Investing more effort in fewer but 'more complete' publications could hold back early-career researchers, who already face fierce competition. To generate a first-author publication, graduate students on average take more than a year longer than they did in the 1980s (R. D. Vale Proc. Natl Acad. Sci. USA 112, 13439–13446; 2015). Introducing further delays for junior scientists is not an option as long as performance is rated by publication metrics.

One Richard Ebright commented thusly:

Wrong. All publications, by all researchers, at all career stages, should be complete stories. No-one benefits from publication of "minimum publishable units."

This is as wrong as wrong can be.

LPU: A Case Study

Let's take the case of W-18. It hit the mainstream media following identification in a few human drug overdose cases as "This new street drug is 10,000 times more potent than morphine" [WaPo version].

Obviously, this is a case for the pharmacological and substance abuse sciences to leap into action and provide some straight dope, er, information, on the situation.

In the delusional world of the "complete story" tellers, this should be accomplished by single labs or groups, beavering away in isolation, not to report their findings on W-18 until they have it all. That might incorporate wide-ranging in vitro pharmacology to describe activity or inactivity at major suspected sites of action. Pharmacokinetic data in one small and at least one large experimental species, maybe some human data if possible. Behavioral pharmacology on a host of the usual assays for dose-effects, toxicity, behavioral domains, dependence, withdrawal, reward or drug liking, liability for compulsive use patterns, cognitive impact with chronic use. The list goes on. And for each in vivo measure, we may need to parse the contribution of several signalling systems that might be identified by the in vitro work.

That is a whole lot of time, effort and money.

In the world of the complete-story tellers, these might be going on in parallel in multiple lab groups who are duplicating each others' work in whole or in part.

Choices or assumptions made that lead to blind alleys will waste everyone's time equally.

Did I mention the funding yet?

Ah yes, the funding. Of course a full bore effort on this requires a modern research lab to have the cash to conduct the work. Sometimes it can be squeezed alongside existing projects or initial efforts excused in the pursuit of Preliminary Data. But at some point, people are going to have to propose grants. Which are going to take fire for a lack of evidence that:

1) There is any such thing as a W-18 problem. Media? pfah, everyone knows they overblow everything. [This is where even the tiniest least publishable unit from epidemiologists, drug toxicologists or even Case Reports from Emergency Department physicians goes a loooooong way. And not just for grant skeptics. Any PI should consider whether a putative new human health risk is worth pouring effort and lab resources into. LPU can help that PI to judge and, if warranted, to defend a research proposal.]

2) There isn't anything new here. This is just a potent synthetic opiate, right? That's what the media headlines claim. [Except it is based on the patent description of a mouse writhing task. We have NO idea if it is even active at endogenous opiate receptors from the media and the patent. And hey! Guess what? The UNC drug evaluation core run by Bryan Roth found no freaking mu opioid receptor or delta or kappa opioid receptor activity for W-18!!! Twitter is a pretty LPU venue. And yet think of how much work this saves. It will potentially head off a lot of mistaken assaying looking for opioid activity across all kinds of lab types. After all, the above listed logical progression is not what happens. People don't necessarily wait for comprehensive in vitro pharmacology to be available before trying out their favored behavioral assay.]

3) Whoa, totally unexpected turn for W-18 already. So what next? [Well, it would be nice if there were Case Reports of toxic effects, eh? To point us in the right direction: are there hints of the systems that are affected in medical emergency cases? And if some investigators had launched some pilot experiments in their own favored domains before finding out the results from Roth, wouldn't it be useful to know what they have found? Why IS it that W-18 is active in writhing... or can't this patent claim be replicated? Is there an active metabolite formed? This obviously wouldn't have come up in the UNC assays, as they just focus on the parent compound in vitro.]

Etcetera.

Science is iterative and collaborative.

It generates knowledge best and with the most efficiency when people are aware of what their peers are finding out as quickly as possible.

Waiting while several groups pursue a supposed "complete story" in parallel only for one to "win" and be able to publish while the other ones shake their scooped heads in shame and fail to publish such mediocrity is BAD SCIENCE.

51 responses so far

A simple primer for those new to behavioral science assays

1) Behavior is plural

2) No behavioral assay is a simple readout of the function of your favorite nucleus, neuronal subpopulation, receptor subtype, intracellular protein or gene. 

9 responses so far

Santa Cruz Biotech fined, banned from animal use

May 20 2016 Published under Animals in Research, Conduct of Science

The big news of the day is that Santa Cruz Biotech has been punished for their malfeasance.

Buzzfeed News reports:

After years of allegations of mistreated research goats and rabbits, a settlement agreement (pdf) announced late on Friday will put Santa Cruz Biotechnology out of the scientific antibody business. The company will also pay a $3.5 million fine, the largest ever issued for this type of violation.

The settlement is only three pages, so go ahead and read it. It is pretty much to the point.

Santa Cruz Biotech neither admits nor denies the allegations, blah, blah, but it is settling. They are to be penalized $3.5 million, payable by the end of May 2016. Their Animal Welfare Act registration is revoked effective Dec 31, 2016. They will not use any inventory of the blood or serum they have on hand, collected prior to Aug 21, 2015, to make, sell, transport, etc. anything from May 20, 2016 to Dec 31, 2016 (after which they still cannot, I assume, since the license will be revoked). They agree to cease all activity as a research facility and will request cancellation of their registration with APHIS as such as of May 31, 2016.

I don't know how easy it will be for the overall company to get around this by starting up some other entity, possibly offshore, but it sure as heck looks like Santa Cruz Biotech is out of business.

Hoo-ray!!

There are several specific allegations of animal use violations under the Animal Welfare Act at play. But for me there was one really big-deal issue, and I assume this is why the hammer came down so hard and why Santa Cruz Biotech decided they had no choice but to settle in this manner.

As Nature reported in early 2013, Santa Cruz Biotech hid an animal facility from Federal inspectors.

A herd of 841 goats has kicked up a stir for one of the world’s largest antibody suppliers after US agricultural officials found the animals — including 12 in poor health — in an unreported antibody production facility owned by California-based Santa Cruz Biotechnology.

“The existence of the site was denied even when directly asked” of employees during previous inspections, according to a US Department of Agriculture (USDA) report finalised on 7 December, 2012. But evidence gathered on a 31 October inspection suggested that an additional barn roughly 14 kilometres south of the company's main animal facility had been in use for at least two and a half years, officials said.

This is mind-bogglingly bad, in my view. Obviously criminal behavior. The Nature bit described this as "another setback". To me this should have been game over right here. They were obviously trying to cover up misuse of animals, so my thought is that even if the cover-up worked, and you can't actually observe the misuse, well, the "get Capone on taxes even if you can't prove the crime" theory applies.

But then there was more. In the midst of all the inspecting and reporting and what not....

In July 2015, the major antibody provider Santa Cruz Biotechnology owned 2,471 rabbits and 3,202 goats. Now the animals have vanished, according to the US Department of Agriculture (USDA).
...
the company seems to have done away with its entire animal inventory. When the USDA inspected the firm's California facility on 12 January, it found no animal-welfare violations, and listed “no animals present or none inspected”. USDA spokesman Ed Curlett says that no animals were present during the inspection.

The fate of the goats and rabbits is unclear. The company did not respond to questions about the matter, and David Schaefer, director of public relations for the law firm Covington & Burling in Washington DC, which is representing Santa Cruz Biotechnology, declined to comment on the animals’ fate.

This sounds like an outrage, I know. But the bottom line is that a company in good standing with animal use regulatory authorities could in fact decide to euthanize all of its animals. It could decide to transfer or sell them to someone else under the appropriate regulations and procedures. It is really suspicious that the company won't say what it did with the animals, but still.

It's the concealment of the animal facility mentioned in the Dec 7, 2012 report that is the major violation in my view. They deserve to be put out of business for that.

16 responses so far

Thought of the Day

May 13 2016 Published under Conduct of Science, Ponder, Tribe of Science

I think I have made incremental progress in understanding you all "complete story" muppets and in understanding the source of our disagreement.

There are broader arcs of stories in scientific investigation. On this I think we all agree.

We would like to read the entire arc. On this, I think, we all agree.

The critical difference is this.

Is your main motivation that you want to read that story and find out where it goes?

Or is your main motivation that you want to be the one to discover, create and/or tell that story, all by your lonesome, so you get as much credit for it as possible?

While certainly subject to scientific ego, I conclude that I lean much more toward wanting to know the story than you "complete story" people do.

Conversely, I conclude that you "shows mechanism", "complete story" people lean more towards your own ego burnishing for participation in telling the story than you do towards wanting to know how it all turns out as quickly as possible.

35 responses so far

Complete stories, demonstrations of mechanism and other embarrassing fictions

There's a new post up at The Ideal Observer.

Many times you find people talking about how many papers a scientist has published, but does anyone seriously think that that is a useful number? One major factor is that individual researchers and communities have dramatically different ideas about what constitutes a publication unit.

Go read and comment.

No responses yet

John Oliver Explains Science to Everyone

May 10 2016 Published under Conduct of Science

5 responses so far
