Archive for the 'Conduct of Science' category

Thought of the Day

Jan 04 2017 Published by under Conduct of Science

21 responses so far

Tenured profs should pick up the check?

Jan 03 2017 Published by under Academics, Conduct of Science

While I think generosity on the part of more senior scientists is a good thing, and should be encouraged, making this an obligation is flawed. How do you know what that person's obligations are?

I post this in case any PI types out there don't know this is a thing. If you can pick up a check or pay more than your share, hey great. Good for you.

But nobody should expect it of you.

26 responses so far

Finishing projects

If you are paid by the taxpayers, or generous private philanthropists, of your country to do science, you owe them a product. An attempt to generate knowledge. This is one of the things that orients much of my professional behavior, as I think I make clear on this blog.

If you haven't published your scientific work, it doesn't exist. This is perhaps an excessive way to put it but I do think you should try to publish the work you accomplish with other people's money.

Much of my irritation with the publication game, prestige chasing, delusions of complete stories, priority / scooping fears and competition for scarce funding resources can be traced back to these two orienting principles of mine.

My irritation with such things does not, however, keep them from influencing my career. It does not save me from being pressured not to give the funders their due.

It is not unusual for my lab, and I suspect many labs, to have thrown a fair amount of effort and resources into a set of investigations and to realize a lot more will be required to publish. "Required", I should say, because the threshold for publication is highly variable.

Do I throw the additional resources into an effort to save what is half or three-quarters of a paper? To make the project to date publishable? I mean, we already know the answer and it is less than earth shaking. It was a good thing to look into, of course. Years ago a study section of my peers told us so to the tune of a very low single digit percentile on a grant application. But now I know the answer and it probably doesn't support a lot of follow-up work.

Our interests in the lab have moved along in several different directions. We have new funding and, always, always, future funding to pursue. Returning to the past is just a drag on the future, right?

I sometimes feel that nobody other than me is so stupid as to remember that I owe something. I was funded by other people's money to follow a set of scientific inquiries into possible health implications of several things. I feel as though I should figure out how to publish the main thing(s) we learned. Even if that requires running some additional studies to turn something I feel is already answered into something "publishable".

21 responses so far

First rule of Science Mentor Club

The very first rule of PI/mentorship is to get your trainees first-author publications.

This is the thing with the biggest lasting career impact over which you have almost absolute control.

Yes, things happen, but if you are not getting the vast majority of your trainees first-author pubs, you are screwing up as a mentor.

So. 2017 is about to start. Do you have a publication plan for all of your postdocs and later-stage graduate students?

Obviously I am in favor of active management of trainees' publishing plans. I assume some favor a more hands-off approach?

"Let the postdoc figure it out" has an appeal. Makes them earn those pubs and sets them up for later hard times.

The problem is, if they fail to get a publication, or enough of them, their career takes a bad hit. So that ability to grunt it out never even gets used.

42 responses so far

Completely uncontroversial PI comment on paper writing

Sep 29 2016 Published by under Careerism, Conduct of Science

Go!

46 responses so far

The NIH has shifted from being an investor in research to a consumer of research

WOW. This comment from dsks absolutely nails it to the wall.

The NIH is supposed to be taking on a major component of the risk in scientific research by playing the role of investor; instead, it seems to operate more as a consumer, treating projects like products to be purchased only when complete and deemed sufficiently impactful. In addition to implicitly encouraging investigators to flout rules like that above, this shifts most of the risk onto the shoulders of the investigator, who must use her existing funds to spin the roulette wheel and hope that the projects her lab is engaged in will be both successful and yield interesting answers. If she strikes it lucky, there’s a chance of recouping the cost from the NIH. However, if the project is unsuccessful, or successful but produces one of the many not-so-pizzazz-wow answers, the PI’s investment is lost, and at a potentially considerable cost to her career if she’s a new investigator.

Of course one might lessen the charge slightly by observing that it is really the University that is somehow investing in the exploratory work that may eventually become of interest to the buyer. Whether the University then shifts the risk onto the lowly PI is a huge concern, but not inevitable. They could continue to provide seed money, salary, etc. to a professor who does not manage to write a funded grant application.

Nevertheless, this is absolutely the right way to look at the ever-growing obligation for highly specific Preliminary Data to support any successful grant application. It is also the way to look at a study section culture that is motivated in large part by perceived "riskiness" (which underlies a large part of the failure to reward untried investigators from unknown Universities compared with established PIs from coastal elite institutions).

NIH isn't investing in risky science. It is purchasing science once it looks like most of the real risk has been avoided.

I have never seen this so clearly, so thanks to dsks for expressing it.

38 responses so far

Professor fired for misconduct shoots Dean

From the NYT account of the shooting of Dennis Charney:

A former faculty member at the Mount Sinai School of Medicine..., Hengjun Chao, 49, of Tuckahoe, N.Y., was charged with attempted second-degree murder after he allegedly fired a shotgun and hit two men

Why? Presumably revenge for:

In October 2002, Mr. Chao joined Mount Sinai as a research assistant professor. He stayed at Mount Sinai until May 2009, when he received a letter of termination from Dr. Charney for “research misconduct,” according to a lawsuit that Mr. Chao filed against the hospital and Dr. Charney, among other parties, in 2010. He went through an appeals process, and was officially terminated in March 2010.

As you might expect, the Retraction Watch blog has some more fascinating information on this case. One notable bit is the fact that ORI declined to pursue charges against Dr. Chao.

The Office of Research Integrity (ORI) decided not to pursue findings of research misconduct, according to material filed in the case and mentioned in a judge’s opinion on whether Chao could claim defamation by Mount Sinai. Part of Chao’s defamation claim was based on a letter from former ORI investigator Alan Price calling Mount Sinai’s investigation report “inadequate, seriously flawed and grossly unfair in dealing with Dr. Chao.”

Interesting! The institution goes to the effort of firing the guy and manages to fight off a countersuit, and ORI still doesn't have enough to go on? Retraction Watch posted the report on the Mount Sinai misconduct investigation [PDF]. It makes the case a little clearer.

To briefly summarize: Dr. Chao first alleged that a postdoc, Dr. Cohn, fabricated research data. An investigation failed to support the charge and Dr. Chao withdrew his complaint. Perhaps (?) as part of that review, Dr. Cohn submitted an allegation that Dr. Chao had directed her to falsify data; this was supported by an email and by third-party testimony from a colleague. Mount Sinai mounted an investigation and interviewed a bunch of people with Dr. titles, some of whom are co-authors with Dr. Chao according to PubMed.

The case is said to hinge on the credibility of the interviewees. "There was no 'smoking gun' direct evidence... the allegations... represent the classic 'he-said, she-said' dispute". The report notes that only the above-mentioned email trail supports any of the allegations with hard evidence.

Ok, so that might be why ORI declined to pursue the case against Dr. Chao.

The panel found him to be "defensive, remarkably ignorant about the details of his protocol and the specifics of his raw data, and cavalier with his selective memory... he made several overbroad and speculative allegations of misconduct against Dr. Cohn without any substantiation"

One witness testified that Dr. Chao had said "[Dr. Cohn] is a young scientist [and] doesn't know how the experiments should come out, and I in my heart know how it should be."

This is kind of a classic sign of a PI who creates a lab culture that encourages data faking and fraud, if you ask me. Skip down to the end for more on this.

There are a number of other allegations of a specific nature. Dropping later timepoints of a study because they were counter to the hypothesis. Publishing data that dropped some of the mice for no apparent reason. Defending low-n (2!) data by saying he was never trained in statistics, a claim his postdoc mentor contradicted. And finally, the committee decided that Dr. Chao's original complaint filed against Dr. Cohn was a retaliatory action stemming from an ongoing dispute over science, authorship, etc.

The final conclusion in the recommendations section deserves special attention:

"[Dr. Chao] promoted a laboratory culture of misconduct and authoritarianism by rewarding results consistent with his theories and berating his staff if the results were inconsistent with his expectations."

This, my friends, is the final frontier. Every time I see an underling in a lab busted for serial faking, I wonder about this. Sure, any lab can be penetrated by a data-faking sleaze. And it is very hard to both run a trusting collaborative scientific environment and still be 100 percent sure of preventing the committed scofflaws. But... but... I am here to tell you. A lot of data fraud flows from PIs of just exactly this description.

If the PI does it right, their hands are entirely clean. Heck, in some cases they may have no idea whatsoever that they are encouraging their lab to fake data.

But the PI is still the one at fault.

I'd hope that every misconduct investigation against anyone below the PI level looks very hard into the culture that is encouraged and/or perpetrated by the PI of the lab in question.

29 responses so far

Personal jihads and distinguishing better/worse science from wrong science

Jul 22 2016 Published by under Conduct of Science

This is relevant to posts by a Russ Poldrack flagellating himself for apparent methodological lapses in fMRI analysis.

The fun issue is as summarized in the recent post:

Student [commenter on original post] is exactly right that I have been a coauthor on papers using methods or reporting standards that I now publicly claim to be inappropriate. S/he is also right that my career has benefited substantially from papers published in high profile journals using these methods that I now claim to be inappropriate. ... I am in agreement that some of my papers in the past used methods or standards that we would now find problematic... I also appreciate Student's frustration with the fact that someone like myself can become prominent doing studies that are seemingly lacking according to today's standards, but then criticize the field for doing the same thing.

I made a few comments on the Twitts to the effect that this is starting to smell of odious ladder pulling behavior.

One key point from the original post:

I would note that points 2-4 were basically standard practice in fMRI analysis 10 years ago (and still crop up fairly often today).

And now let us review the original critiques to which he is referring:

  • There was no dyslexic control group; thus, we don't know whether any improvements over time were specific to the treatment, or would have occurred with a control treatment or even without any treatment.
  • The brain imaging data were thresholded using an uncorrected threshold.
  • One of the main conclusions (the "normalization" of activation following training) is not supported by the necessary interaction statistic, but rather by a visual comparison of maps.
  • The correlation between changes in language scores and activation was reported for only one of the many measures, and it appeared to have been driven by outliers.
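
As a purely illustrative aside from me (not from Poldrack's post, and with entirely made-up numbers), here is a minimal Python sketch of why two of those bullets, the missing interaction statistic and the outlier-driven correlation, are more than stylistic quibbles:

    # Hypothetical pre/post scores for a treated group and a control group.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 20
    treated_pre = rng.normal(50, 10, n)
    treated_post = treated_pre + rng.normal(5, 10, n)   # some improvement
    control_pre = rng.normal(50, 10, n)
    control_post = control_pre + rng.normal(3, 10, n)   # improvement without any treatment

    # The flawed "visual comparison" logic: run each group's within-group test and
    # declare victory if the treated group looks significant and the control group does not.
    print(stats.ttest_rel(treated_post, treated_pre))
    print(stats.ttest_rel(control_post, control_pre))

    # The claim that treatment changed things actually requires the interaction:
    # do the change scores differ between groups?
    print(stats.ttest_ind(treated_post - treated_pre, control_post - control_pre))

    # And the outlier point: one extreme pair of values can manufacture an impressive r
    # out of otherwise unrelated measures.
    x = rng.normal(0, 1, 15)
    y = rng.normal(0, 1, 15)
    x[0], y[0] = 8.0, 8.0
    print(stats.pearsonr(x, y))

This is not a reanalysis of anything; it just illustrates that "significant in one group but not the other" is a different claim from "the groups differ", and that a single extreme data point can do a lot of the work in a correlation.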

As I have mentioned on more than one occasion, I am one who finds value in the humblest papers and in the single reported experiment. Oftentimes it is such tiny, tiny threads of evidence that help our science, and the absence of any information whatsoever on something that hinders us.

I find myself mostly able to determine whether the proper controls were used. More importantly, I find myself more swayed by the strength of the data and the experiment presented than I am by the claims made in the Abstract or Discussion about the meaning of the reported work. I'd rather be in a state of "huh, maybe this thing might be true (or false), pending these additional controls that need to be done" than a state of "dammit, why is there no information whatsoever on this thing I want to know about right now".

Yes, absolutely, I think that there are scientific standards that should be generally adhered to. I think the PSY105: Experimental Design (or similar) principles regarding the perfect experiment should be taken seriously... as aspirations.

But I think the notion that you "can't publish that" because of some failure to attain the Gold Plated Aspiration of experimental design is stupid and harmful to science as a hard and fast rule. Everything, but everything, should be reviewed intelligently and thoughtfully by the peers considering a manuscript for publication. In essence, taken on its merits. This is much as I take any published data on their own merits when deciding what I think they mean.

This is particularly the case when we start to think about the implications for career arcs and the limited resources that affect our business.

It is axiomatic that not everyone has the same interests, approaches and contingencies that affect their publication practices. This is a good thing, btw. In diversity there is strength. We've talked most recently around these parts about LPU incrementalism versus complete stories. We've talked about rapid vertical ascent versus riff-raff. Open Science Eleventy versus normal people. The GlamHounds versus small town grocers. ...and we almost invariably start in on how subfields differ in any of these discussions. etc.

Threaded through many of these conversations is the notion of gatekeeping. Of defining who gets to play in the sandbox on the basis of certain standards for how they conduct their science. What tools they use. What problems they address. What journals are likely to take their work for publication.

The gates control the entry to paper publication, job appointment and grant funding, among other things. You know, really frickin important stuff.

Which means, in my not at all humble opinion, that we should think pretty hard about our behavior when it touches on this gatekeeping.

We need to be very clear on when our jihadist "rules" for how science needs to be done separate right from wrong versus mere personal preference.

I do agree that we want to keep the flagrantly wrong out of the scientific record. Perhaps this is the issue with the triggering post on fMRI, but the admission that these practices still continue casts some doubt in my mind. It seems more like a personal preference. Or a jihad.

I do not agree that we need to put in strong controls so that all of science adheres to our personal preferences. Particularly when our personal preferences are for laziness and reflect our unwillingness to synthesize multiple papers or to think hard about the nature of the evidence behind the Abstract's claim. Even more so when our personal preferences really are coming from a desire to winnow a competitive field and make our own lives easier by keeping out the riff raff.

9 responses so far

The many benefits of the LPU

A Daniel Sarewitz wrote an opinion piece in Nature a while back to argue that the pressure to publish regularly has driven down the quality of science. Moreover, he claims to have identified

...a destructive feedback between the production of poor-quality science, the responsibility to cite previous work and the compulsion to publish.

Sarewitz ends with an exhortation of sorts. To publish much less.

Current trajectories threaten science with drowning in the noise of its own rising productivity, a future that Price described as “senility”. Avoiding this destiny will, in part, require much more selective publication. Rising quality can thus emerge from declining scientific efficiency and productivity. We can start by publishing less, and less often, whatever the promotional e-mails promise us.

Interestingly, this "Price" he seemingly follows in thought wrote in 1963, long before modern search engines were remotely conceivable, and was, as Sarewitz himself observes, "an elitist".

Within a couple of generations, [Derek de Solla Price] said, it would lead to a world in which “we should have two scientists for every man, woman, child, and dog in the population”. Price was also an elitist who believed that quality could not be maintained amid such growth. He showed that scientific eminence was concentrated in a very small percentage of researchers, and that the number of leading scientists would therefore grow much more slowly than the number of merely good ones, and that would yield “an even greater preponderance of manpower able to write scientific papers, but not able to write distinguished ones”.

Price was worried about "distinguished", but Sarewitz has adopted this elitism to claim that pressure to publish is actually causing the promotion of mistaken or bad science. And so we should all, I surmise, slow down and publish less. It is unclear what Sarewitz thinks about the "merely good" scientists identified by Price and whether they should be driven away or not. Though not explicitly stated, this piece does have a whiff of ol' Steve McKnight's complaints about the riff-raff to it.

Gary S. McDowell and Jessica K. Polka wrote in to observe that slowing the pace of publication is likely to hurt younger scientists who are trying to establish themselves.

In today's competitive arena, asking this of scientists — particularly junior ones — is to ask them to fall on their swords.

Investing more effort in fewer but 'more complete' publications could hold back early-career researchers, who already face fierce competition. To generate a first-author publication, graduate students on average take more than a year longer than they did in the 1980s (R. D. Vale Proc. Natl Acad. Sci. USA 112, 13439–13446; 2015). Introducing further delays for junior scientists is not an option as long as performance is rated by publication metrics.

One Richard Ebright commented thusly:

Wrong. All publications, by all researchers, at all career stages, should be complete stories. No-one benefits from publication of "minimum publishable units."

This is as wrong as wrong can be.

LPU: A Case Study

Let's take the case of W-18. It hit the mainstream media following identification in a few human drug overdose cases as "This new street drug is 10,000 times more potent than morphine" [WaPo version].

Obviously, this is a case for the pharmacological and substance abuse sciences to leap into action and provide some straight dope, er, information, on the situation.

In the delusional world of the "complete story" tellers, this should be accomplished by single labs or groups, beavering away in isolation, not reporting their findings on W-18 until they have it all. That might incorporate wide-ranging in vitro pharmacology to describe activity or inactivity at major suspected sites of action. Pharmacokinetic data in one small and at least one large experimental species, maybe some human if possible. Behavioral pharmacology on a host of the usual assays for dose-effects, toxicity, behavioral domains, dependence, withdrawal, reward or drug liking, liability for compulsive use patterns, cognitive impact with chronic use. This list goes on. And for each in vivo measure, we may need to parse the contribution of several signalling systems that might be identified by the in vitro work.

That is a whole lot of time, effort and money.

In the world of the complete-story tellers, these might be going on in parallel in multiple lab groups who are duplicating each other's work in whole or in part.

Choices or assumptions made that lead to blind alleys will waste everyone's time equally.

Did I mention the funding yet?

Ah yes, the funding. Of course a full-bore effort on this requires a modern research lab to have the cash to conduct the work. Sometimes it can be squeezed alongside existing projects, or initial efforts excused in the pursuit of Preliminary Data. But at some point, people are going to have to propose grants. Which are going to take fire for a lack of evidence that:

1) There is any such thing as a W-18 problem. Media? pfah, everyone knows they overblow everything. [This is where even the tiniest least publishable unit from epidemiologists, drug toxicologists or even Case Reports from Emergency Department physicians goes a loooooong way. And not just for grant skeptics. Any PI should consider whether a putative new human health risk is worth pouring effort and lab resources into. LPU can help that PI to judge and, if warranted, to defend a research proposal.]

2) There isn't anything new here. This is just a potent synthetic opiate, right? That's what the media headlines claim. [Except it is based on the patent description of a mouse writhing task. We have NO idea from the media and the patent if it is even active at endogenous opiate receptors. And hey! Guess what? The UNC drug evaluation core run by Bryan Roth found no freaking mu opioid receptor or delta or kappa opioid receptor activity for W-18!!! Twitter is a pretty LPU venue. And yet think of how much work this saves. It will potentially head off a lot of mistaken assaying looking for opioid activity across all kinds of lab types. After all, the above-listed logical progression is not what happens. People don't necessarily wait for comprehensive in vitro pharmacology to be available before trying out their favored behavioral assay.]

3) Whoa, totally unexpected turn for W-18 already. So what next? [Well, it would be nice if there were Case Reports of toxic effects, eh? To point us in the right direction: are there hints of the systems that are affected in medical emergency cases? And if some investigators had launched some pilot experiments in their own favored domains before finding out the results from Roth, wouldn't it be useful to know what they have found? Why IS it that W-18 is active in writhing... or can't this patent claim be replicated? Is there an active metabolite formed? This obviously wouldn't have come up in the UNC assays, as they just focus on the parent compound in the in vitro assays.]

Etcetera.

Science is iterative and collaborative.

It generates knowledge best and with the most efficiency when people are aware of what their peers are finding out as quickly as possible.

Waiting while several groups pursue a supposed "complete story" in parallel only for one to "win" and be able to publish while the other ones shake their scooped heads in shame and fail to publish such mediocrity is BAD SCIENCE.

51 responses so far

A simple primer for those new to behavioral science assays

1) Behavior is plural

2) No behavioral assay is a simple readout of the function of your favorite nucleus, neuronal subpopulation, receptor subtype, intracellular protein or gene. 

9 responses so far
