Have you ever been reading a scientific paper and thought "Gee, they really should have cited us here"?
From the NYT account of the shooting of Dennis Charney:
A former faculty member at the Mount Sinai School of Medicine... , Hengjun Chao, 49, of Tuckahoe, N.Y., was charged with attempted second-degree murder after he allegedly fired a shotgun and hit two men
Why? Presumably revenge for:
In October 2002, Mr. Chao joined Mount Sinai as a research assistant professor. He stayed at Mount Sinai until May 2009, when he received a letter of termination from Dr. Charney for “research misconduct,” according to a lawsuit that Mr. Chao filed against the hospital and Dr. Charney, among other parties, in 2010. He went through an appeals process, and was officially terminated in March 2010.
As you might expect, the Retraction Watch blog has some more fascinating information on this case. One notable bit is the fact that ORI declined to pursue charges against Dr. Chao.
The Office of Research Integrity (ORI) decided not to pursue findings of research misconduct, according to material filed in the case and mentioned in a judge’s opinion on whether Chao could claim defamation by Mount Sinai. Part of Chao’s defamation claim was based on a letter from former ORI investigator Alan Price calling Mount Sinai’s investigation report “inadequate, seriously flawed and grossly unfair in dealing with Dr. Chao.”
Interesting! The institution goes to the effort of firing the guy and manages to fight off a countersuit, and ORI still doesn't have enough to go on? Retraction Watch posted the report on the Mount Sinai misconduct investigation [PDF]. It makes the case a little more clear.
To briefly summarize: Dr. Chao first alleged that a postdoc, Dr. Cohn, fabricated research data. An investigation failed to support the charge and Dr. Chao withdrew his complaint. Perhaps (?) as part of that review, Dr. Cohn submitted an allegation that Dr. Chao had directed her to falsify data; this was supported by an email and third-party testimony from a colleague. Mount Sinai mounted an investigation and interviewed a bunch of people with Dr. titles, some of whom are co-authors with Dr. Chao according to PubMed.
The case is said to hinge on the credibility of the interviewees. "There was no 'smoking gun' direct evidence... the allegations... represent the classic 'he-said, she-said' dispute". The report notes that only the above-mentioned email trail supports any of the allegations with hard evidence.
Ok, so that might be why ORI declined to pursue the case against Dr. Chao.
The panel found him to be "defensive, remarkably ignorant about the details of his protocol and the specifics of his raw data, and cavalier with his selective memory... he made several overbroad and speculative allegations of misconduct against Dr. Cohn without any substantiation"
One witness testified that Dr. Chao had said "[Dr. Cohn] is a young scientist [and] doesn't know how the experiments should come out, and I in my heart know how it should be."
This is kind of a classic sign of a PI who creates a lab culture that encourages data faking and fraud, if you ask me. Skip down to the end for more on this.
There are a number of other allegations of a specific nature. Dropping later timepoints of a study because they were counter to the hypothesis. Publishing data that dropped some of the mice for no apparent reason. Defending low-n (2!) data by saying he was never trained in statistics, but his postdoc mentor contradicted this claim. And finally, the committee decided that Dr. Chao's original complaint filed against Dr. Cohn was a retaliatory action stemming from an ongoing dispute over science, authorship, etc.
The final conclusion in the recommendations section deserves special attention:
"[Dr. Chao] promoted a laboratory culture of misconduct and authoritarianism by rewarding results consistent with his theories and berating his staff if the results were inconsistent with his expectations."
This, my friends, is the final frontier. Every time I see an underling in a lab busted for serial faking, I wonder about this. Sure, any lab can be penetrated by a data-faking sleaze. And it is very hard to run a trusting, collaborative scientific environment and still be 100 percent sure of preventing the committed scofflaws. But... but... I am here to tell you: a lot of data fraud flows from PIs of just exactly this description.
If the PI does it right, their hands are entirely clean. Heck, in some cases they may have no idea whatsoever that they are encouraging their lab to fake data.
But the PI is still the one at fault.
I'd hope that every misconduct investigation against anyone below the PI level looks very hard into the culture that is encouraged and/or perpetrated by the PI of the lab in question.
I think I have made incremental progress in understanding you all "complete story" muppets and in understanding the source of our disagreement.
There are broader arcs of stories in scientific investigation. On this I think we all agree.
We would like to read the entire arc. On this, I think, we all agree.
The critical difference is this.
Is your main motivation that you want to read that story and find out where it goes?
Or is your main motivation that you want to be the one to discover, create and/or tell that story, all by your lonesome, so you get as much credit for it as possible?
While certainly subject to scientific ego, I conclude that I lean much more toward wanting to know the story than you "complete story" people do.
Conversely, I conclude that you "shows mechanism," "complete story" people lean more towards burnishing your own egos through participation in telling the story than towards wanting to know how it all turns out as quickly as possible.
Had an interesting category of thing happen on peer review of our work recently.
It was the species of reviewer objection where they know they can't lay a glove on you but they just can't stop themselves from asserting their disagreement.
It was in several different contexts and the details differed. But the essence was the same.
I'm just laughing.
I mean, why do we use language that identifies the weaknesses, limits or necessary caveats in our papers if it doesn't mean anything?
Saying "...and then there is this other possible interpretation" apparently enrages some reviewers when that possibility is not treated as a reason to prevent us from publishing the data.
Pointing out that these papers over here support one view of accepted interpretations/practices/understanding can trigger outrage that you don't ignore those in favor of these other papers over there and their way of doing things.
Identifying clearly and carefully why you made certain choices generates the most hilariously twisted "objective critiques" that really boil down to "Well I use these other models which are better for some reason I can't articulate."
Do you even scholarship, bro?
I mostly chuckle and move on, but these experiences do tie into Mike Eisen's current fevers about "publishing" manuscripts prior to peer review. So I do have sympathy for his position. It is annoying when such reviewer intransigence over non-universal interpretations is used to prevent publication of data. And it would sometimes be funny to have the "Your caveats aren't caveatty enough" discussion in public.
We have been talking about the scientific journal ecosphere in the context of Michael Eisen's push to get more biomedical scientists to use pre-print servers to publicize their work prior to publication in a traditional journal. This push, recently aided and abetted by Leslie Vosshall, exposes a deep divide in the understanding of the broad scope of science. It is my view that part of the reason the elite (both are HHMI-funded investigators; eliteness gets no better in the US) have trouble understanding the points made by us riffraff is that they don't understand the following. The elite are working at the first-tier level. Second tier is, for them, a function not of their science but of the competition for limited resources. Any farther down the chain and it is all the same to them; they really have no understanding of how life works for those who operate in the tiers below.
This post originally appeared on the blog 11 Feb 2013.
More hilarious to me is the use of the word "tier". As in "The work from the prior interval of support was mostly published in second tier journals...".
It is almost always second tier that is used.
But this is never correct in my experience.
If we're talking Impact Factor (and these people are, believe it) then there is a "first" tier of journals populated by Cell, Nature and Science.
In the Neurosciences, the next tier is a place (IF in the teens) in which Nature Neuroscience and Neuron dominate. No question. THIS is the "second tier".
A jump down to the IF 12 or so of PNAS most definitely represents a different "tier" if you are going to talk about meaningful differences/similarities in IF.
Then we step down to the circa IF 7-8 range populated by J Neuroscience, Neuropsychopharmacology and Biological Psychiatry. Demonstrably fourth tier.
So for the most part when people are talking about "second tier journals" they are probably down at the FIFTH tier: 4-6 IF in my estimation.
I also argue that the run-of-the-mill society-level journals extend below this fifth tier to a "rest of the pack" zone in which there is a meaningful perception difference from the fifth tier. So... six tiers.
Then we have the paper-bagger dump journals. Demonstrably a seventh tier. (And seven is such a nice number isn't it?)
So there you have it. If you* are going to use "tier" to sneer at the journals in which someone publishes, for goodness sake do it right, will ya?
*Of course it is people** who publish frequently in the third and fourth tiers and only rarely in the second tier who use "second tier journal" to refer to what is in the fifth or sixth tier of IFs. Always.
**For those rare few that publish extensively in the first tier, hey, you feel free to describe all the rest as "second tier". Go nuts.
Once again, my friends, a professor of science has been found to be harassing women underlings.
Actually it was a bit more:
Lieb allegedly made unwelcome sexual advances to several female graduate students on an off-campus retreat in Galena, Ill., and engaged in sexual activity with a student who was “incapacitated due to alcohol and therefore could not consent,” according to documents acquired by the New York Times.
Yeah, that last part there pretty much makes Jason Lieb a rapist.
Then it turns out that the guy had left Princeton rather abruptly:
Yoav Gilad, a molecular biologist at Chicago who was on the committee that advocated hiring Dr. Lieb, said he and his fellow faculty members knew that in February 2014 Dr. Lieb had abruptly resigned from Princeton University, just seven months after having been recruited from the University of North Carolina to run a high-profile genomics institute.
Then it gets very foggy:
molecular biologists on the University of Chicago faculty and at other academic institutions received emails from an anonymous address stating that Dr. Lieb had faced allegations of sexual harassment or misconduct at previous jobs at Princeton and the University of North Carolina.
But Dr. Gilad said that when it was contacted, Princeton said there had been no sexual harassment investigation of Dr. Lieb while he was there. He said efforts to find out more about what prompted Dr. Lieb’s departure proved fruitless. A Princeton spokeswoman said the university does not comment on personnel matters.
Hmmm. Smells a little bit, doesn't it? But no PROOF. Because, no doubt, of the usual. Murky circumstances. Accusations that can't be proved easily. Wagon-circling from the institution and lingering doubt that the accuser is telling the truth. We've seen it a million times.
Separately, Dr. Gilad acknowledged, during the interviews of Dr. Lieb, he admitted that he had had a monthslong affair with a graduate student in his laboratory at the University of North Carolina.
Ok, wait, whoa full stop. The dude had an affair with a graduate student IN HIS LAB!
Done. Right there. The vast majority of Universities that have policies at least say you can't have an affair with a student you have a direct supervisory role over.
Hiring committees are not courts of law and applicants do not have a right to be hired. This committee at U of C should have taken a pass as soon as they learned Lieb was screwing his graduate student.
There are a number of problems that we academics need to confront about this.
First, the guy raped an incapacitated grad student at a dept retreat. This has to put some courage into departments to lay down some rules during their retreats. Like, maybe, no faculty partying with grad students after official hours, when the other faculty aren't around. Open the retreat with a discussion of harassment, respect and professional behavior, like they do at GRCs now. That sort of thing.
Second, what in the hell do we do about these unproven cases in which the guy (it's almost always a man) who keeps jumping institutions leaves a smell of harassment and bad behavior behind him that hasn't been proven or documented?
It's weird, right?
If you get a rec letter from a trusted colleague about a prospective student or postdoc that has the slightest hint of a problem, professionally speaking, you take an automatic pass. You move on to the next candidate. Nobody talks about lawyers and proof and how you "have" to hire this particular postdoc or they will sue you for defamation. Yet when it comes to a faculty hire, the stench of misconduct is treated differently. "Well, it hasn't been proven! There's no paper trail! Sure, he left in a hurry and the old institution ain't talking, but it's a coincidence! And we can't listen to these rumors from eight of his previous trainees who all tell the same tale: hearsay! We'll be sued for defamation if we choose not to hire him!"
Something is very wrong here.
We're perfectly okay not hiring a candidate because we suspect they won't like our town and will soon be looking to leave. Okay with violating HR rules to sniff around about a two-body problem and refusing to offer a faculty job to such a problem candidate. Underrepresented minorities? Don't even get me started. Women of childbearing age or with a young child? Yeah. Our hiring committees do all kinds of inferring and gossiping and not-offering on the basis of suspected factors. Thinly evidenced. Not proven. Actually illegal reasons in some cases.
But when someone is rumoured to be a harasser of women? Geez, we have to bend over backwards to extend him his alleged right to the job.
Something is very wrong here.
People of science? Please. Just. Look. Somewhere. Else. Would. You? Please? Find your romantic entanglements outside of the workplace. Really. It cannot possibly be this difficult.
Should people without skin in the game be allowed to review major research grants?
I mean those who are insulated from the results of the process. HHMI stalwarts, NIH intramural, national labs, company scientists...
On one hand, I see the argument that they provide needed outside opinions, to keep an insular, self-congratulating process honest.
On the other, one might observe that those who cannot be punished for bad behavior have license to be biased, jerky and driven by personal agenda.
Would you prefer review by those who are subject to the funding system? Or doesn't it matter?
Found this on the Facebooks. It seems appropriate for a science-careers audience:
There was a farmer who grew excellent quality corn. Every year he won the award for the best grown corn. One year a newspaper reporter interviewed him and learned something interesting about how he grew it. The reporter discovered that the farmer shared his seed corn with his neighbors. “How can you afford to share your best seed corn with your neighbors when they are entering corn in competition with yours each year?” the reporter asked.
“Why sir,” said the farmer, “Didn’t you know? The wind picks up pollen from the ripening corn and swirls it from field to field. If my neighbors grow inferior corn, cross-pollination will steadily degrade the quality of my corn. If I am to grow good corn, I must help my neighbors grow good corn.”
— Drug Monkey (@drugmonkeyblog) December 31, 2015
#LabGoals2016 Solve nagging "that shouldn't do that" science question we've been struggling with for two years.
— Drug Monkey (@drugmonkeyblog) December 31, 2015
Sometweep or other mentioned career goals for the year. I don't really set lab goals at all... too busy just keeping on with whatever is in front of me? Maybe this is a bad idea?
I came up with the above as an off the cuff response.
Anyway, now I am curious if you set goals for yourself in the academic career and science profession space?
The SFN Annual Meeting is famous for the overwhelming barrage of science being fire-hosed at you. It is intimidating and can be impersonal.
Almost equally famous, particularly for the experienced hands, are the evening thematic socials. These are gatherings that may be focused on a scientific topic (Dopamine), University, lab (for the big ones), academic society (yes, the competition comes to SFN to troll for members) and/or organized by vendors (such as a journal/publisher).
Here is a list of the things I accomplished at one social this year:
-Talked with a colleague from whom I requested an emergency grant support letter just prior to the meeting. I explained the wheres/whys and thanked her profusely.
-Chatted with a colleague who is in semi-competition with one of our research domains. We worked some stuff out, talked a little about plans and I hope pre-empted what could have been bad feelings on one side or another.
-I met a junior scientist (that I didn't know except second hand) who had asked me for a letter of support for a grant application on the recommendation of a PO. This person told me more about the project and I was able to comment on a few things.
-Met a philanthropist who donated to a lab in which I have an interest. I kid you not.
-Chatted with a more-senior member of my field who is of pretty high stature in a subfield. I would not necessarily have gotten to know this investigator absent this particular SFN social over the past couple of years. This PI commented about my research directions in a thoughtful way that shows she actually knows me beyond social recognition.
-Met a postdoc who is nearing the job market in a subfield in which I have slightly better than average ear-tuning about job openings. I will be able to forward things that I hear about to this person now.
That's off the top of my head. I am sure there were several less-obviously work-related conversations that in fact have a work-related component to them.
So there are two points.
First, when you hear people talking about this or that fantastic party they attended at SFN, remember that these socials are there for work and career related purposes.
Second, the party that I am referring to was BANTER, organized by Scientopia's very own Dr Becca over the past five or six years. The organizing theme is not any of the usual ones that you might think of as being specific to your career interests. It is based on the online science community, most especially the Twitter-based neuroscience community. It is not screened for any particular subdomain of neuroscience, including mine, and yet I had the above-mentioned interactions.
The implication* of this latter observation is that you can engage in useful work-related conversations at almost any SFN social, which means that it can be less forced. Go to the ones where you have the most interest, or an "in", or whatever. The key is to be... well... social.
*I think it also points to how firmly BANTER has become implanted on the SFN social map. Well done Dr Becca, well done.