When do I get to that stage where my lab is operated entirely by oppressed trainees who are totally doing the PI job in all ways but name and I get to sit back, eat bonbons and watch my h-index rise?
First, I shouldn't have to remind you all that much about a simple fact of nature in the academic crediting system. Citations matter. Our quality and status as academic scientists will be judged, in small or in large ways, by the citations that our own publications garner.
This is not to say that all citations are interpreted the same way, because they most assuredly are not. Citation counting leads to all sorts of distilled measures across your career arc: Highly Cited status and the h-index are two examples. Citation counting can also be used to judge the quality of an individual paper, from the total number of cites, to sustained citing across the years, to the impressiveness of the journals in which your paper has been cited.
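For the record, the h-index mentioned above is simple to compute: it is the largest h such that you have h papers with at least h citations each. A minimal sketch, assuming nothing more than a list of per-paper citation counts (the function name and example numbers are mine, purely illustrative):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i  # the i-th best paper still has at least i citations
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times: four papers have >= 4 cites,
# but not five papers with >= 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))
```

The sort-and-scan approach makes the definition obvious; the indexing services compute the same number, they just disagree (as noted below) about which citations count.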
Various stakeholders may disagree over which measure of citation of your work is most critical.
On one thing everyone agrees.
One problem (out of many) with "Supplementary Materials", which are now very close to required at some journals and heavily encouraged at others, is that the citations they contain are ignored by ISI's Web of Science indexing and, so far as I can tell, by Google Scholar.
So, by engaging in this perverted system by which journals are themselves competing with each other, you* are robbing your colleagues of their proper due.
Nat observed that you might actually do this intentionally, if you are a jerk.
So now, not only can supplementary info be used as a dumping ground for your inconclusive or crappy data, but you can also stick references to your competitors in there and shaft them out of their citations.
Try not to be a jerk. Resist this Supplementary Materials nonsense. Science will be the better for it.
*yes, this includes me. I just checked some Supplementary citations that we've published to see if either ISI or Google Scholar indexes them- they do not.
For those who wonder where this idea came from, please see the commentary by Deputy Director Tabak and Director Collins (Nature 505, 612–613, January 2014) on the issue of the reproducibility of results. One part of the commentary suggests that scientists may be tempted to overstate conclusions in order to get papers published in high profile journals. The commentary adds “NIH is contemplating modifying the format of its ‘biographical sketch’ form, which grant applicants are required to complete, to emphasize the significance of advances resulting from work in which the applicant participated, and to delineate the part played by the applicant. Other organizations such as the Howard Hughes Medical Institute have used this format and found it more revealing of actual contributions to science than the traditional list of unannotated publications.”
Here's Collins and Tabak, 2014 in freely available PMC format. The lead-in to the above-referenced passage is:
Perhaps the most vexed issue is the academic incentive system. It currently overemphasizes publishing in high-profile journals. No doubt worsened by current budgetary woes, this encourages rapid submission of research findings to the detriment of careful replication. To address this, the NIH is contemplating...
Hmmm. So, by changing this form, the ability on grant applications to say something like:
"Yeah, we got totally scooped out of a Nature paper because we didn't rush some data out before it was ready but look, our much better paper that came out in our society journal 18 mo later was really the seminal discovery, we swear. So even though the entire world gives primary credit to our scoopers, you should give us this grant now."
is supposed to totally alter the dynamics of the "vexed issue" of the academic incentive system.
Right guys. Right.
Everyone who is more approving or lenient than you are is an incompetent moron.
Everyone who is harsher or less enthusiastic is a total jackhole.
I had a revelation that clarified some of my points of poor understanding of the science crowdfunding enthusiast position.
In skirmishing on Twitter with some guy associated with "Experiment.com" I ran across a project on brain inflammatory responses in a fetal alcohol model from the Lindquist lab. Something I can readily assess, being that it is a substance abuse, drug-toxicity investigation in rats.
I ran across a curious finding in a very Glamourous publication. Being that it was in a CNS journal, the behavior sucked. The data failed to back up the main claim about that behavior*. Which was kind of central to the actual scientific advance of the entire work.
So I contemplated an initial, very limited check on the behavior. A replication of the converging sort.
It's going to cost me about $15K to do it.
If it turns out negative, then where am I? Where am I going to publish a one figure tut-tut negative that flies in the face of a result published in CNS?
If it turns out positive, this is almost worse. It's a "yeah we already knew that from this CNS paper, dumbass" rejection waiting to happen.
Either way, if I expect to be able to publish in even a dump journal I'm going to need to throw some more money at the topic. I'd say at least $50K.
Spent from grants that are not really related to this topic in any direct way.
If the NIH is serious about the alleged replication problem then it needs to be serious about the costs and risks involved.
*a typical problem with CNS pubs that involve behavioral studies.
It's a pretty interesting viewpoint on basic science, translation to humans and what we do when an emergency situation like an infectious disease outbreak happens. I have been struck in past days about the huge international discussion this ZMapp treatment has been sparking. As you might expect, we have dark thoughts being expressed along the lines of "Why does this apparently miraculous treatment emerge all of a sudden when Americans are infected but it hasn't been given to suffering Africans, hmmmm?". There are all kinds of ethical issues to think about.
The television version linked below is 5 minutes but be sure to click on the link to the "midday edition" which is a longer voice interview. It gives a much fuller discussion.
David Kroll on ZMapp
It is your job as a scientist to read the literature, keep abreast of findings of interest and integrate this knowledge with your own work.
We have amazing tools for doing so that were not available in times past, everything gets fantastically better all the time.
If you are a PI you even have minions to help you! And colleagues! And manuscripts and grants to review which catch you up.
So I ask you, people who spout off about the "filter" problem.....
What IS the nature of this problem? How does it affect your working day?
Since most of you deploy this in the context of wanting fewer papers to be published in fewer journals...how is that better? What is supposed to disappear from your view?
The stuff that you happen not to be interested in?
Dear Editor Whitehare,
Do you really expect us to complete the additional experiments that Reviewer #3 insisted were necessary? You DO realize that if we did those experiments the paper would be upgraded enough that we sure as hell would be submitting it upstream of your raggedy ass publication, right?
Ok, ok, I have no actual data on this. But if I had to pick one thing in substance abuse science that has been most replicated it is this.
If you surgically implant a group of rats with intravenous catheters, hook them up to a pump which can deliver small infusions of saline adulterated with cocaine HCl and make these infusions contingent upon the rat pressing a lever...
Rats will intravenously self-administer (IVSA) cocaine.
This has been replicated ad nauseam.
If you want to pass a fairly low bar to demonstrate you can do a behavioral study with accepted relevance to drug abuse, you conduct a cocaine IVSA study [Wikipedia] in rats. Period.
And yet. There are sooooo many ways to screw it up and fail to replicate the expected finding.
Note that I say "expected finding" because we must include significant quantitative changes along with the qualitative ones.
Off the top of my head, here are the types of factors that can reduce your "effect" to a null effect, or change the outcome to the extent that even a statistically significant result isn't really the effect you are looking for:
- Catheter diameter or length
- Cocaine dose available in each infusion
- Rate of infusion/concentration of drug
- Sex of the rats
- Age of rats
- Strain of the rats
- Vendor source (of the same nominal strain)
- Time of day in which rats are run (not just light/dark* either)
- Food restriction status
- Time of last food availability
- Pair vs single housing
- "Enrichment" that is called-for in default guidelines for laboratory animal care and needs special exception under protocol to prevent.
- Experimenter choice of smelly personal care products
- Dirty/clean labcoat (I kid you not)
- Handling of the rats on arrival from vendor
- Cage-change day
- Minor rat illness
- Location of operant box in the room (floor vs ceiling, near door or away)
- Ambient temperature of vivarium or test room
- Schedule- weekends off? seven days a week?
- Schedule- 1 hr? 2hr? 6 hr? access sessions
- Schedule- are reinforcer deliveries contingent upon one lever press? five? does the requirement progressively increase with each successive infusion?
- Animal loss from the study for various reasons
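On that third schedule bullet above: the "progressively increasing" requirement usually refers to a progressive-ratio schedule, and the exponential formula most often used in the IVSA literature (commonly attributed to Richardson & Roberts, 1996) sets the presses required for the nth infusion at round(5 × e^(0.2n)) − 5. A quick sketch of how steeply that escalates (the function name is mine):

```python
import math

def pr_requirement(infusion_number):
    """Lever presses required to earn the nth infusion under the
    exponential progressive-ratio schedule: round(5 * e^(0.2n)) - 5."""
    return round(5 * math.exp(0.2 * infusion_number)) - 5

# Requirements for the first ten infusions: 1, 2, 4, 6, 9, 12, 15, 20, 25, 32
print([pr_requirement(n) for n in range(1, 11)])
```

The point for replication is that the same rat, same drug, same dose can yield very different "intake" numbers depending on whether the lab ran a fixed-ratio or a progressive-ratio schedule, which is exactly why the schedule details matter when comparing studies.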
As you might expect, these factors interact with each other in the real world of conducting science. Some factors you can eliminate, some you have to work around and some you just have to accept as contributions to variability. Your choices depend, in many ways, on your scientific goals beyond merely establishing the IVSA of cocaine.
Up to this point I'm in seeming agreement with that anti-replication yahoo, am I not? Jason Mitchell definitely agrees with me that there are a multitude of ways to come up with a null result.
I am not agreeing with his larger point. In fact, quite the contrary.
The point I am making is that we only know this stuff because of attempts to replicate! Many of these attempts were null and/or might be viewed as a failure to replicate some study that existed prior to the discovery that Factor X was actually pretty important.
Replication attempts taught the field more about the model, which allowed investigators of diverse interests to learn more about cocaine abuse and, indeed, drug abuse generally.
The heavy lifting in discovering the variables and outcomes related to rat IVSA of cocaine took place long before I entered graduate school. Consequently, I really can't speak to whether investigators felt that their integrity was impugned when another study seemed to question their own work. I can't speak to how many "failure to replicate" studies were discussed at conferences and less formal interactions. But given what I do know about science, I am confident that there was a little bit of everything. Probably some accusations of faking data popped up now and again. Some investigators no doubt were considered generally incompetent and others were revered (sometimes unjustifiably). No doubt. Some failures to replicate were based on ignorance or incompetence...and some were valid findings which altered the way the field looked upon prior results.
Ultimately the result was a good one. The rat IVSA model of cocaine use has proved useful to understand the neurobiology of addiction.
The incremental, halting, back and forth methodological steps along the path of scientific exploration were necessary for lasting advance. Such processes continue to be necessary in many, many other aspects of science.
Replication is not an insult. It is not worthless or a-scientific.
Replication is the very lifeblood of science.
*Rats are nocturnal. Check out how many studies**, including behavioral ones, are run in the light cycle of the animal.
**yes to this very day, although they are certainly less common now