The NIH has shifted from being an investor in research to a consumer of research

(by drugmonkey) Sep 21 2016

WOW. This comment from dsks absolutely nails it to the wall.

The NIH is supposed to be taking on a major component of the risk in scientific research by playing the role of investor; instead, it seems to operate more as a consumer, treating projects like products to be purchased only when complete and deemed sufficiently impactful. In addition to implicitly encouraging investigators to flout rules like that above, this shifts most of the risk onto the shoulders of the investigator, who must use her existing funds to spin the roulette wheel and hope that the projects her lab is engaged in will be both successful and yield interesting answers. If she strikes it lucky, there’s a chance of recouping the cost from the NIH. However, if the project is unsuccessful, or successful but produces one of the many not-so-pizzazz-wow answers, the PI’s investment is lost, and at a potentially considerable cost to her career if she’s a new investigator.

Of course one might lessen the charge slightly by observing that it is really the University that is somehow investing in the exploratory work that may eventually become of interest to the buyer. Whether the University then shifts the risk onto the lowly PI is a huge concern, but not inevitable. They could continue to provide seed money, salary, etc. to a professor who does not manage to get a grant application funded.

Nevertheless, this is absolutely the right way to look at the ever-growing demand for highly specific Preliminary Data to support any successful grant application. It is also the way to look at a study section culture that is motivated in large part by perceived "riskiness" (which underlies a large part of the failure to reward untried investigators from unknown Universities compared with established PIs from coastal elite institutions).

NIH isn't investing in risky science. It is purchasing science once it looks like most of the real risk has been avoided.

I have never seen this so clearly, so thanks to dsks for expressing it.

34 responses so far

Repost: Keep the ball in play

(by drugmonkey) Sep 21 2016

This was originally posted 16 September, 2014.


We're at the point of the fiscal year where things can get really exciting. The NIH budget year ends Sept 30 and the various Institutes and Centers need to balance up their books. They have been funding grants throughout the year on the basis of the shifting sands of peer review with an attempt to use up all of their annual allocation on the best possible science.

Throughout the prior two Council rounds of the year, they necessarily have to be a bit conservative. After all, they don't know in the first Round if maybe they will have a whole bunch of stellar scores come in during the third Round. Some one-off funding opportunities are perhaps scheduled for consideration only during the final Round. Etc.

Also, the amount of funding requested for each grant varies. So maybe they have a bunch of high-scoring proposals that are all very inexpensive? Or maybe they have many in the early rounds of the year that are unusually large?

This means that come September, the ICs are sometimes sitting on unexpended funds and need to start picking up proposals that weren't originally slated to fund. Maybe it is a supplement, maybe it is a small mechanism like a R03 or R21. Maybe they will offer you 2 years of funding of an R01 proposed for 5. Maybe they will offer you half the budget you requested. Maybe they have all of a sudden discovered a brand new funding priority and the quickest way to hit the ground running is to pick something up with end-of-year funds.
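Just to make that bookkeeping concrete, here is a toy sketch in Python of how an IC can end up with a September surplus. Every number in it is invented for illustration (real paylines, scores, budgets and costs look nothing like this); the point is only the structure: conservative paylines early, then a sweep of the best-scoring leftovers at the end of the fiscal year.

```python
# Toy model of an IC's fiscal year: NOT real NIH policy or numbers.
import random

random.seed(1)

BUDGET = 60.0  # annual allocation, arbitrary units


def new_round(n):
    """Invented applications: (percentile score, requested cost) pairs."""
    return [(random.uniform(1, 50), random.uniform(1.0, 3.0)) for _ in range(n)]


spent = 0.0
still_in_play = []
for council_round in range(3):
    # Be conservative in rounds 1-2 in case round 3 brings stellar scores.
    payline = 10 if council_round < 2 else 15
    for score, cost in sorted(new_round(30)):
        if score <= payline and spent + cost <= BUDGET:
            spent += cost
        else:
            still_in_play.append((score, cost))

# September: sweep remaining funds into the best-scoring apps on the books.
pickups = []
for score, cost in sorted(still_in_play):
    if spent + cost <= BUDGET:
        spent += cost
        pickups.append(round(score, 1))

print(f"spent {spent:.1f} of {BUDGET:.0f}; end-of-year pickup scores: {pickups}")
```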

Now obviously, you cannot game this out for yourself. There is no way to rush in a proposal at the end of the year (save for certain administrative supplements). There is no way for you to predict what your favorite IC is going to be doing in September. Maybe they have exquisite prediction and always play it straight up by priority score right to the end, sticking within the lines of the Council rounds. And of course, you cannot assume lobbying some lowly PO for a pickup is going to work out for you.

There is one thing you can do, Dear Reader.

It is pretty simple. You cannot receive one of these end-of-year unexpected grant awards unless you have a proposal on the books and in play. That means, mostly, a score and not a triage outcome. It means, in a practical sense, that you had better have your JIT information all squared away because this can affect things. It means, so I hear, that this is FINALLY the time when your IC will quite explicitly look at overhead rates to see about total costs and screw over those evil bastiges at high-overhead Universities that you keep ranting about on the internet. You can make sure you have not just an R01 hanging around but also a smaller mech like an R03 or R21.

It happens*. I know lots and lots of people who have received end-of-the-FY largesse that they were not expecting. Received this type of benefit myself. It happens because you have *tried* earlier in the year to get funding and have managed to get something sitting on the books, just waiting for the spotlight of attention to fall upon you.

So keep that ball in play, my friends. Keep submitting credible apps. Keep your Commons list topped off with scored apps.
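The arithmetic behind "keep the ball in play" is worth a quick sketch. Assume, purely hypothetically, that each scored app sitting on the books has some small independent chance of an end-of-year pickup; the chance that at least one hits grows quickly with the number in play:

```python
# Hypothetical: p is a made-up per-app pickup probability, not an NIH statistic.
def chance_of_pickup(p: float, n_apps: int) -> float:
    """P(at least one of n_apps gets picked up) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n_apps

for n in (1, 2, 3, 5):
    print(f"{n} scored app(s) in play: {chance_of_pickup(0.05, n):.1%}")
# 1 -> 5.0%, 2 -> 9.8%, 3 -> 14.3%, 5 -> 22.6%
```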

__
*As we move into October, you can peruse SILK and RePORTER to see which proposals have a start date of Sep 30. Those are the end-of-year pickups.
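If you'd rather do that perusal programmatically than by clicking around RePORTER, a minimal sketch against the NIH ExPORTER bulk project files (exporter.nih.gov) might look like the following. The filename is hypothetical and the column names (PROJECT_START, FULL_PROJECT_NUM, ORG_NAME) are my assumption about the project-file layout, so verify them against the file you actually download.

```python
# Sketch: find awards whose project period starts Sep 30 (end-of-year pickups).
# Filename and column names are assumptions; check your ExPORTER download.
import pandas as pd

df = pd.read_csv("RePORTER_PRJ_C_FY2016.csv", encoding="latin-1")
df["PROJECT_START"] = pd.to_datetime(df["PROJECT_START"], errors="coerce")

pickups = df[(df["PROJECT_START"].dt.month == 9) & (df["PROJECT_START"].dt.day == 30)]
print(pickups[["FULL_PROJECT_NUM", "ORG_NAME", "PROJECT_START"]].head())
```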

h/t: some Reader who may or may not choose to self-identify 🙂

3 responses so far

Bring back the 2-3 year Developmental R01

(by drugmonkey) Sep 19 2016

The R21 Mechanism is called the Exploratory/Developmental mechanism. Says so right in the title.

NIH Exploratory/Developmental Research Grant Program (Parent R21)

In the real world of NIH grant review, however, the "Developmental" part is entirely ignored in most cases. If you want a more accurate title, it should be:

NIH High Risk / High Reward Research Grant Program (Parent R21)

This is what reviewers favor, in my experience sitting on panels and occasionally submitting an R21 app. Mine are usually more along the lines of developing a new line of research that I think is important rather than being truly "high risk/high reward".

And, as we all know, the R01 application (5 years, full modular at $250K per annum direct costs if you please) absolutely requires a ton of highly specific Preliminary Data.

So how are you supposed to Develop an idea into this highly specific Preliminary Data? Well, there's the R21, right? Says right in the title that it is Developmental.

But....it doesn't work in practice.

So the R01 is an alternative. After all, it is the most flexible mechanism. You could submit an R01 for $25K direct costs for one year. You'd be nuts, but you could. Actually, you could submit an R03 or R21 for one $25K module too, but with the R01 you would then have the option to put in a competitive renewal to continue the project along.

The only thing stopping this from being a thing is the study section culture that won't accept it. Me, I see a lot of advantages to using shorter (and likely smaller) R01 proposals to develop a new line of work. It is less risky than a 5-year R01, for those who focus on risk/$. It has an obvious path of continuation as a genuinely Developmental attempt. It is more flexible in scope and timing; perhaps what you really need is $100K per year for 3 years (like the old R21) for your particular type of research or job type. It doesn't come laden with quite the same "high risk, high reward" approach to R21 review that biases for flash over solid workmanlike substance.

The only way I see this working is to try it. Repeatedly. Settle in for the long haul. Craft your Specific Aims opening to explain why you are taking this approach. Take the Future Directions blurb and make it really sparkle. Think about using milestones and decision points to convince the reviewers you will cut this off at the end if it isn't turning out to be that productive. Show why your particular science, job category, institute or resources match up to this idea.

Or you could always just shout aimlessly into the ether of social media.

36 responses so far

Can you train resilience into grad students or postdocs?

(by drugmonkey) Sep 13 2016

As I've noted on these pages before, my sole detectable talent for this career is the ability to take a punch.

There are a lot of punches in academic science. A lot of rejection and the congratulations for a job well done are few and far between. Nobody ever tells you that you are doing enough.

"Looking good, Assistant Professor! Just keep this up, maybe even chill a little now and then, and tenure will be no problem!" - said no Chair ever.

My concern is that demanding resilience in the face of constant rejection, belittling and unkind comparisons of your science to the true rock stars (in a Lake Wobegon approach) can have a selection effect. Only certain personality types can stand this.

I happen to have one of these personality types but it is not something of any particular credit. I was born and/or made this way by my upbringing. I cannot say anyone helped to train me in this way as an academic scientist*.

So I am at a complete loss as to how to help my trainees with this.

Have you any insights, Dear Reader? From your own development as a scientist or as a supervisor of other scientists?

Related Reading: Tales of postdocs past: what did I learn?
__
*well maybe indirectly. And not in a way I care to extend to any trainee of mine thankyewveerymuch.

71 responses so far

Does it matter how the data are collected?

(by drugmonkey) Sep 12 2016

Commenter jmz4 made a fascinating comment on a prior post:


It is not the journal's responsibility to mete out retractions as a form of punishment(&). Only someone who buys into papers as career accolades would accept that. The journal is there to disseminate accurate scientific information. If the journal has evidence that, despite the complaint, this information is accurate,(%) then it *absolutely* should take that into account when deciding to keep a paper out there.

(&) Otherwise we would retract papers from lechers and embezzlers. We don't.

That prior post was focused on data fraud, but this set of comments suggests something a little broader.

I.e., that facts are facts and it doesn't matter how we have obtained them.

This, of course, brings up the little nagging matter of the treatment of research subjects. As you are mostly aware, Dear Readers, the conduct of biomedical experimentation that involves human or nonhuman animal subjects requires an approval process. Boards of people external to the immediate interests of the laboratory in question must review research protocols in advance and approve the use of human (Institutional Review Board; IRB) or nonhuman animal (Institutional Animal Care and Use Committee; IACUC) subjects.

The vast majority (ok, all) journals of my acquaintance require authors to assert that they have indeed conducted their research under approvals provided by IRB or IACUC as appropriate.

So what happens when and if it is determined that experiments have been conducted outside of IRB or IACUC approval?

The position expressed by jmz4 is that it shouldn't matter. The facts are as they are, the data have been collected so too bad, nothing to be done here. We may tut-tut quietly but the papers should not be retracted.

I say this is outrageous and nonsense. Of course we should apply punitive sanctions, including retracting the paper in question, if anyone is caught trying to publish research that was not collected under proper ethical approvals and procedures.

In making this decision, the evidence for whether the conclusions are likely to be correct or incorrect plays no role. The journal should retract the paper to remove the rewards and motivations for operating outside of the rules. Absolutely. Publishers are an integral part of the integrity of science.

The idea that journals are just there to report the facts as they become known is dangerous and wrong.

__
Additional Reading: The whole board of Sweden's top-ranked university was just sacked because of the Macchiarini scandal

13 responses so far

No, Cell, the replication does not have bearing on the original fraud

(by drugmonkey) Sep 12 2016

Via the usual relentless trolling of YHN from Comrade PhysioProffe, a note on a fraud investigation from the editors of Cell.

We, the editors of Cell, published an Editorial Expression of Concern (http://dx.doi.org/10.1016/j.cell.2016.03.038) earlier this year regarding issues raised about Figures 2F, 2H, and 3G of the above article.
...
two labs have now completed their experiments, and their data largely confirm the central conclusions drawn from the original figures. Although this does not resolve the conflicting claims, based on the information available to us at this time, we will take no further action. We would like to thank the independent labs who invested significant time and effort in ensuring the accuracy of the scientific record.

Bad Cell. BAD!

We see this all the time, although usually it is the original authors aided and abetted by the journal Editors, rather than the journal itself, making this claim. No matter if it is a claim to replace an "erroneous placeholder figure", or a full-on retraction by the "good" authors for fraud perpetrated by some [nonWestern] postdoc who cannot be located anymore, we see an attempt to maintain the priority claim. "Several labs have replicated and extended our work", is how it goes if the paper is an old one. "We've replicated the bad [nonWestern, can't be located] postdoc's work" if the paper is newer.

I say "aided and abetted" because the Editors have to approve the language of the authors' erratum, corrigendum or retraction notice. They permit this. Why? Well obviously because just as the authors need to protect their reputation, so does the journal.

So everyone plays this game that somehow proving the original claims were correct, reliable or true means that the original offense is lesser. And that the remaining "good" authors and the journal should get credited for publishing it.

I say this is wrong. If the data were faked, the finding was not supported. Or not supported to the degree that it would have been accepted for publication in that particular journal. And therefore there should be no credit for the work.

We all know that there is a priority and Impact Factor chase in certain types of science. Anything published in Cell quite obviously qualifies for the most cutthroat aspects of this particular game. Authors and editors alike are complicit.

If something is perceived to be hott stuff, both parties are motivated to get the finding published. First. Before those other guys. So...corners are occasionally cut. Authors and Editors both do this.

Rewarding the high risk behavior that leads to such retractions and frauds is not a good thing. While I think punishing proven fraudsters is important, it does not by any means go far enough.

We need to remove the positive reward environment. Look at it this way. If you intentionally fake data, or more likely subsets of the data, to get past that final review hurdle into a Cell acceptance, you are probably not very likely to get caught. If you are detected, it will often take years for this to come to light, particularly when it comes to a proven-beyond-doubt standard. In the meantime, you have enjoyed all the career benefits of that Glamour paper. Job offers for the postdocs. Grant awards for the PIs. Promotions. High $$ recruitment or retention packages. And generated even more Glam studies. So in the somewhat unlikely case of being busted for the original fake, many of the beneficiaries, save the poor sucker nonWestern postdoc (who cannot be located), are able to defend and evade based on stature.

This gentleman's agreement to view faked results that happen to replicate as no-harm, no-foul is part of this process. It encourages faking and fraud. It should be stopped.

One more interesting part of this case. It was actually raised by the self-confessed cheater!

Yao-Yun Liang of the above article informed us, the Cell editors, that he manipulated the experiments to achieve predetermined results in Figures 2F, 2H, and 3G. The corresponding author of the paper, Xin-Hua Feng, has refuted the validity of Liang’s claims, citing concerns about Liang’s motives and credibility. In a continuing process, we have consulted with the authors, the corresponding author’s institution, and the Committee on Publication Ethics (COPE), and we have evaluated the available original data. The Committee on Scientific Integrity at the corresponding author’s institution, Baylor College of Medicine, conducted a preliminary inquiry that was inconclusive and recommended no further action. As the institution’s inquiry was inconclusive and it has been difficult to adjudicate the conflicting claims, we have provided the corresponding author an opportunity to arrange repetition of the experiments in question by independent labs.

Kind of reminiscent of the recent case where the trainee and lab head had counter claims against each other for a bit of fraudulent data, eh? I wonder if Liang was making a similar assertion to that of Dr. Cohn in the Mt. Sinai case, i.e., that the lab head created a culture of fraud or directly requested the fake? In the latter case, it looked like they probably only came down on the PI because of a smoking-gun email and the perceived credibility of the witnesses. Remember that ORI refused to take up the case so there probably was very little hard evidence on which to proceed. I'd bet that an inability to get beyond "he-said/he-said" is probably at the root of Baylor's "inconclusive" preliminary inquiry result for this Liang/Feng dispute.

33 responses so far

NPR on trying to create DUI regulation for marijuana

(by drugmonkey) Sep 06 2016

NPR had a good segment on this today: The Difficulty Of Enforcing Laws Against Driving While High. Definitely well worth a listen.

I had a few reactions in a comment that ended up being post-length, so here you go.

The major discussion of the segment was two-fold and I think illustrates where policy based on the science can be helpful, even if only to point to what we need to know but do not at present.

The first point was that THC hangs around in the body for a very long time post-consumption, particularly in comparison with alcohol. Someone who is a long-term chronic user can have blood THC levels that are...appreciable (no matter the particular threshold for presumed impairment, this is relevant). Some of the best data on this are from the laboratory of Marilyn Huestis when she was, gasp, an intramural investigator at NIDA! There are some attempts in the Huestis work to compare THC and metabolite ratios to determine recency of consumption; that's a good direction, IMO.

The second argument was about behavioral tolerance. One of the scientists interviewed was quoted along the lines of saying the relationship between blood levels, repetitive use and actual impairment was more linear for alcohol than for THC. Pretty much. There is some evidence for substantial behavioral tolerance, meaning even when acutely intoxicated, the chronic user may have relatively preserved performance versus the noob. There's a laboratory study here that makes the point fairly succinctly, even if the behavior itself isn't that complex. As a counterpoint, this recent human study fails to confirm behavioral tolerance in an acute dosing study (see Fig 4A for baseline THC by frequency of use, btw). As that NPR piece noted, it would be very valuable to get some rapid field screen for THC/driving-relevant impairment on a tablet.

11 responses so far

Pot Ponder

(by drugmonkey) Sep 06 2016

Five states have recreational marijuana legalization on the ballot this fall, if I heard correctly.

I feel as though we should probably talk about this over the next couple of months. 

ETA:
Arizona

California

Maine

Massachusetts

Nevada

As most of you are aware, these follow successful recreational legalization initiatives in Washington (2012), Colorado (2012), Oregon (2014), Alaska (2014) and the District of Columbia (2014).

36 responses so far

More evidence of the generational screw job in NIH grant award

(by drugmonkey) Sep 02 2016

ScienceHound has posted a new analysis related to the NIH budget and award policy. He's been beavering away with mathematical models lately that are generally going to be beyond my ability to understand. In a tweet, however, he made it pretty clear.

As expanded in his blog post:

The largest difference between the curves occurs at the beginning of the doubling period (1998-2003) where the model predicts a large increase in the number of grants that was not observed. This is due to the fact that NIH initiated a number of larger non–RPG-based programs when substantial new funding was available rather than simply funding more RPGs (although they did this to some extent). For example, in 1998, NIH invested $17 million through the Specialized Center–Cooperative Agreements (U54) mechanism. This grew to $146 million in 1999, $188 million in 2000, $298 million in 2001, $336 million in 2002, and $396 million in 2003. Note that the change each year matters for the number of new and competing grants that can be made because, for a given year, it does not matter whether funds have been previously committed to RPGs or to other mechanisms.

This interval of time, in my view, is right around when the first of the GenXers were getting (or should have been getting) appointed Assistant Professor. Certainly, YHN was appointed in this interval.

Let us recall a couple of graphs. First, this one:

The red trace depicts success rates from 1962 to 2008 for R01 equivalents (R01, R23, R29, R37). Note that they are not broken down by experienced/new investigators status, nor are new applications distinguished from competing continuation applications. The blue line shows total number of applications reviewed...which may or may not be of interest to you. [update 7/12/12: I forgot to mention that the data in the 60s are listed as "estimated" success rates.]

Ok, Ok, Not much to see here, right? The 30% success rate was about the same in the doubling period as it was in the 80s. Now view this broken down by noobs and experienced investigators.
[Figure: RPGsuccessbyYear.png, RPG success rates by fiscal year broken down for experienced versus first-time investigators; source link in original]

As we know from prior posts, career-stage differences matter a LOT. In the 80s when the overall success rate was 30%, you can see that newcomers were at about 20% and established investigators were enjoying at least a 17-percentage-point advantage (I think these data also conflate competing continuation with new applications so there's another important factor buried in the "Experienced" trace). Nevertheless, since the Experienced/New gap was similar from 1980 to 2006, we can probably assume it held true prior to that interval as well.

Again, first-time applicants had about the same lack of success in the 80s as they did in the early stages of the doubling (ok, actually a few points higher in the 80s). About 20%. Things didn't go severely into the tank for the noobs until the end of the doubling around 2004. But think of the career arc. A person who started in the 80s with their first grant jumped up to enjoy 30% success rates and a climbing trend. Someone who managed to land a five-year R01 in 2000, conversely, faced steeply declining success rates just when they were ready to get their next grant 4-5 years later.

This is for Research Project Grants (R01, R03, R15, R21, R22, R23, R29, R33, R34, R35, R36, R37, R55, R56, RC1, P01, P42, PN1, U01, U19, UC1) and does not refer to the Centers or U54 that ScienceHound discussed. Putting his analysis and insider explanation (if you don't know, ScienceHound was NIGMS Director from 2003-2010) to work, we can assume that these RPG or R01-equiv success rates would have been much higher during the doubling, save for the choice of NIH not to devote the full largesse to RPGs.

So. Instead of restoring experienced investigator success to where it had been during the early 80s, and instead of finally (finally) doing something about noob-investigator success rates that had resulted in handwringing since literally the start of the NIH (ok, the 60s anyway), the NIH decided to spend money on boondoggles.

The NIH decided to assign a disproportionate share of the doubling to the very best funded institutions and scientists using mechanisms that were mostly peer reviewed by....the best funded scientists from the best-funded institutions. One of the CSR rules, after all, is that apps for a given mechanism should be reviewed mostly by those who have obtained such a mechanism. You have to have an R01 to be in a regular R01-reviewing panel and P50/P60/P01 are reviewed mostly by those who have been funded by such mechanisms.

One way to look at this is that a lot of the doubling was sequestered from the riff-raff by design.

This is part of the reason that Gen X will never live up to its scientific potential. The full benefit of the doubling was never made available to us in a competitive manner. Large-mech projects under the elite, older generation kept us shadowed. Maybe a couple of us* shared in the Big-Mechanism wealth in minor form but we were by no means ready to make a play to lead them and get the full benefit. Meantime, our measly R01 applications were being beat up mercilessly by the established and compared unfavorably to Senior PI apps supported by their multi-R01 and BigMech labs.

The story is not over.

Given that I grew up as a scientist in this era, and given that like most of us I was pretty ignorant of longitudinal funding trends, etc, my perception was that a Big Mech was...expected. As in eventually, we were supposed to get to the point where not just the very tippy-top best of us, but basically anyone with maybe top-25% verve and energy could land a BigMech. Maybe a P01 Program Project, maybe a Center. The Late-Boomers felt it too. I saw several of the late Boomers get into this mode right as the badness struck. They were semi-outraged, let me tell you, when the nearly universal Program Officer response was "We're not funding P01s anymore. We suggest you don't submit one."

AYFK? For people who were used to hearing POs say "We advise you to revise and resubmit" at the drop of a hat and who had never been told by a PO not to try (with a half-decent idea), this was quite surprising. Especially when they looked at the lucky ducks who had put their Big Mechs together just a few years before....well, there was a lot of screaming about bias and unfairness at first.

P01s are relatively easy for Program to shut down. As always, YMMV when it comes to NIH matters. But in general, I'd say that P01s tended to be a lot more fluid** than Centers (P50/P60). Once a Big Hitter group got a-hold of a Center award, they tended to stay funded. For decades. IME, anyway. Or in my perception, more accurately.

Take a look at the history of Program Projects versus Centers in your field / favorite ICs, DearReader and report back, eh?

Don't get me wrong. There is much to like about Program Projects and Centers. Done right, they can be very good at shepherding the careers of transitioning / new scientists. But they are profoundly undemocratic and tend to consolidate NIH funding in the hands of the few elite of the IC in question. Oftentimes they appear to be less productive than those of us not directly in them would calculate "should" happen for the same expenditure on R01s. Such complaints are often both right and wrong simultaneously when it comes to the same Center award. It is something that depends on your perspective and what you value and/or predict as outcome.

I can think of precisely one GenX Center Director in the stable of my favorite ICs at the moment. No doubt there are more because I don't do exhaustive review and I don't recognize every name to put to a face right off if I were to go RePORTERing. But still. I can rattle off tons of Boomer and pre-Boomer Center Directors.

It goes back to a point I made in a prior post. Gen X scientists were not just severely filtered. Even the ones that managed to transition to faculty appointments were delayed at every step. Funding came harder and at a delay. Real purchasing power was reduced. Publication expectations went up. We were not ready and able to take up the reins of larger efforts to anywhere near the same extent when we approached mid career. We could not rely upon clockwork schedules of grant renewal. We could not expect that a high percentage of our new proposals would be funded. We did not have as extensive a run of successful individual productivity on which to base a stretch for BigMech science.

And this comes back to a phenomenon ScienceHound identifies. The NIH decided*** to put a disproportionate share of the doubling monies into Centers rather than R01s for the struggling new PIs. This had a very long tail of lasting effects.

__
*I certainly did.

**Note: The P01 is considered an RPG with the R01s, etc, but Centers are not. There is some floofraw about these being "different pots of money" from an appropriation standpoint. They are not directly substitutable in immediate priority, the way I hear it.

***Any NIH insiders that start in on how Congress tied their hands can stop before starting. Appropriations language involved back and forth with NIH, believe me.

18 responses so far

Professor fired for misconduct shoots Dean

(by drugmonkey) Aug 31 2016

From the NYT account of the shooting of Dennis Charney:

A former faculty member at the Mount Sinai School of Medicine... , Hengjun Chao, 49, of Tuckahoe, N.Y., was charged with attempted second-degree murder after he allegedly fired a shotgun and hit two men

Why? Presumably revenge for:

In October 2002, Mr. Chao joined Mount Sinai as a research assistant professor. He stayed at Mount Sinai until May 2009, when he received a letter of termination from Dr. Charney for “research misconduct,” according to a lawsuit that Mr. Chao filed against the hospital and Dr. Charney, among other parties, in 2010. He went through an appeals process, and was officially terminated in March 2010.

As you might expect, the Retraction Watch blog has some more fascinating information on this case. One notable bit is the fact that ORI declined to pursue charges against Dr. Chao.

The Office of Research Integrity (ORI) decided not to pursue findings of research misconduct, according to material filed in the case and mentioned in a judge’s opinion on whether Chao could claim defamation by Mount Sinai. Part of Chao’s defamation claim was based on a letter from former ORI investigator Alan Price calling Mount Sinai’s investigation report “inadequate, seriously flawed and grossly unfair in dealing with Dr. Chao.”

Interesting! The institution goes to the effort of firing the guy and manages to fight off a countersuit and ORI still doesn't have enough to go on? Retraction Watch posted the report on the Mount Sinai misconduct investigation [PDF]. It makes the case a little clearer.

To briefly summarize: Dr. Chao first alleged that a postdoc, Dr. Cohn, fabricated research data. An investigation failed to support the charge and Dr. Chao withdrew his complaint. Perhaps (?) as part of that review, Dr. Cohn submitted an allegation that Dr. Chao had directed her to falsify data; this was supported by an email and third-party testimony from a colleague. Mount Sinai mounted an investigation and interviewed a bunch of people with Dr. titles, some of whom are co-authors with Dr. Chao according to PubMed.

The case is said to hinge on the credibility of the interviewees. "There was no 'smoking gun' direct evidence....the allegations..represent the classic 'he-said, she-said' dispute". The report notes that only the above-mentioned email trail supports any of the allegations with hard evidence.

Ok, so that might be why ORI declined to pursue the case against Dr. Chao.

The panel found him to be "defensive, remarkably ignorant about the details of his protocol and the specifics of his raw data, and cavalier with his selective memory. ..he made several overbroad and speculative allegations of misconduct against Dr. Cohn without any substantiation"

One witness testified that Dr. Chao had said "[Dr. Cohn] is a young scientist [and] doesn't know how the experiments should come out, and I in my heart know how it should be."

This is kind of a classic sign of a PI who creates a lab culture that encourages data faking and fraud, if you ask me. Skip down to the end for more on this.

There are a number of other allegations of a specific nature. Dropping later timepoints of a study because they were counter to the hypothesis. Publishing data that dropped some of the mice for no apparent reason. Defending low-n (2!) data by saying he was never trained in statistics, but his postdoc mentor contradicted this claim. And finally, the committee decided that Dr. Chao's original complaint filed against Dr. Cohn was a retaliatory action stemming from an ongoing dispute over science, authorship, etc.

The final conclusion in the recommendations section deserves special attention:

"[Dr. Chao] promoted a laboratory culture of misconduct and authoritarianism by rewarding results consistent with his theories and berating his staff if the results were inconsistent with his expectations."

This, my friends, is the final frontier. Every time I see a lower-ling in a lab busted for serial faking, I wonder about this. Sure, any lab can be penetrated by a data-faking sleaze. And it is very hard to both run a trusting collaborative scientific environment and still be 100 percent sure of preventing the committed scofflaws. But...but..... I am here to tell you. A lot of data fraud flows from PIs of just exactly this description.

If the PI does it right, their hands are entirely clean. Heck, in some cases they may have no idea whatsoever that they are encouraging their lab to fake data.

But the PI is still the one at fault.

I'd hope that every misconduct investigation against anyone below the PI level looks very hard into the culture that is encouraged and/or perpetrated by the PI of the lab in question.

28 responses so far
