Archive for the 'Scientific Publication' category

The many benefits of the LPU

A Daniel Sarewitz wrote an opinion piece in Nature a while back to argue that the pressure to publish regularly has driven down the quality of science. Moreover, he claims to have identified

...a destructive feedback between the production of poor-quality science, the responsibility to cite previous work and the compulsion to publish.

Sarewitz ends with an exhortation of sorts. To publish much less.

Current trajectories threaten science with drowning in the noise of its own rising productivity, a future that Price described as “senility”. Avoiding this destiny will, in part, require much more selective publication. Rising quality can thus emerge from declining scientific efficiency and productivity. We can start by publishing less, and less often, whatever the promotional e-mails promise us.

Interestingly, this "Price" whose thinking he follows was writing in 1963, long before modern search engines were remotely conceivable, and was, as Sarewitz himself observes, "an elitist".

Within a couple of generations, [Derek de Solla Price] said, it would lead to a world in which “we should have two scientists for every man, woman, child, and dog in the population”. Price was also an elitist who believed that quality could not be maintained amid such growth. He showed that scientific eminence was concentrated in a very small percentage of researchers, and that the number of leading scientists would therefore grow much more slowly than the number of merely good ones, and that would yield “an even greater preponderance of manpower able to write scientific papers, but not able to write distinguished ones”.

Price was worried about "distinguished", but Sarewitz has adopted this elitism to claim that pressure to publish is actually causing the promotion of mistaken or bad science. And so we should all, I surmise, slow down and publish less. It is unclear what Sarewitz thinks about the "merely good" scientists identified by Price and whether they should be driven away or not. While never explicitly stated, this piece does have a whiff of ol' Steve McKnight's complaints about the riff-raff to it.

Gary S. McDowell and Jessica K. Polka wrote in to observe that slowing the pace of publication is likely to hurt younger scientists who are trying to establish themselves.

In today's competitive arena, asking this of scientists — particularly junior ones — is to ask them to fall on their swords.

Investing more effort in fewer but 'more complete' publications could hold back early-career researchers, who already face fierce competition. To generate a first-author publication, graduate students on average take more than a year longer than they did in the 1980s (R. D. Vale Proc. Natl Acad. Sci. USA 112, 13439–13446; 2015). Introducing further delays for junior scientists is not an option as long as performance is rated by publication metrics.

One Richard Ebright commented thusly:

Wrong. All publications, by all researchers, at all career stages, should be complete stories. No-one benefits from publication of "minimum publishable units."

This is as wrong as wrong can be.

LPU: A Case Study

Let's take the case of W-18. It hit the mainstream media, following its identification in a few human drug overdose cases, as "This new street drug is 10,000 times more potent than morphine" [WaPo version].

Obviously, this is a case for the pharmacological and substance abuse sciences to leap into action and provide some straight dope, er, information on the situation.

In the delusional world of the "complete story" tellers, this should be accomplished by single labs or groups beavering away in isolation, not reporting their findings on W-18 until they have it all. That might incorporate wide-ranging in vitro pharmacology to describe activity or inactivity at the major suspected sites of action. Pharmacokinetic data in one small and at least one large experimental species, maybe some human data if possible. Behavioral pharmacology on a host of the usual assays for dose-effects, toxicity, behavioral domains, dependence, withdrawal, reward or drug liking, liability for compulsive use patterns, and cognitive impact with chronic use. The list goes on. And for each in vivo measure, we may need to parse the contribution of the several signalling systems that might be identified by the in vitro work.

That is a whole lot of time, effort and money.

In the world of the complete-story tellers, these efforts might be going on in parallel in multiple lab groups, each duplicating the others' work in whole or in part.

Choices or assumptions made that lead to blind alleys will waste everyone's time equally.

Did I mention the funding yet?

Ah yes, the funding. Of course a full-bore effort on this requires a modern research lab to have the cash to conduct the work. Sometimes it can be squeezed in alongside existing projects, or initial efforts excused as the pursuit of Preliminary Data. But at some point, people are going to have to propose grants. Which are going to take fire for a lack of evidence that:

1) There is any such thing as a W-18 problem. Media? pfah, everyone knows they overblow everything. [This is where even the tiniest least publishable unit from epidemiologists, drug toxicologists or even Case Reports from Emergency Department physicians goes a loooooong way. And not just for grant skeptics. Any PI should consider whether a putative new human health risk is worth pouring effort and lab resources into. LPU can help that PI to judge and, if warranted, to defend a research proposal.]

2) There isn't anything new here. This is just a potent synthetic opiate, right? That's what the media headlines claim. [Except that claim is based on the patent description of a mouse writhing task. From the media and the patent, we have NO idea whether it is even active at endogenous opiate receptors. And hey! Guess what? The UNC drug evaluation core run by Bryan Roth found no freaking mu, delta or kappa opioid receptor activity for W-18!!! Twitter is a pretty LPU venue. And yet think of how much work this saves. It will potentially head off a lot of mistaken assays looking for opioid activity across all kinds of lab types. After all, the logical progression listed above is not what happens. People don't necessarily wait for comprehensive in vitro pharmacology to be available before trying out their favored behavioral assay.]

3) Whoa, totally unexpected turn for W-18 already. So what next? [Well, it would be nice if there were Case Reports of toxic effects, eh? To point us in the right direction: are there hints of the systems that are affected in medical emergency cases? And if some investigators had launched pilot experiments in their own favored domains before finding out the results from Roth, wouldn't it be useful to know what they have found? Why IS it that W-18 is active in writhing... or can't this patent claim be replicated? Is there an active metabolite formed? This obviously wouldn't have come up in the UNC assays, as they focus on the parent compound in vitro.]

Etcetera.

Science is iterative and collaborative.

It generates knowledge best and with the most efficiency when people are aware of what their peers are finding out as quickly as possible.

Waiting while several groups pursue a supposed "complete story" in parallel, only for one to "win" and publish while the others shake their scooped heads in shame and fail to publish such mediocrity, is BAD SCIENCE.

51 responses so far

JIF notes 2016

If it's late June, it must be time for the latest Journal Impact Factors to be announced. (Last year's notes are here.)

Nature Neuroscience is confirming its dominance over Neuron with upward and downward trends, respectively, widening the gap.

Biological Psychiatry continues to skyrocket, up to 11.2. All pretensions from Neuropsychopharmacology of keeping pace are over; a third straight year of declines lands the ACNP journal at 6.4. Looks like the 2011-2012 inflation was simply unsustainable for NPP. BP is getting it done, though. No sign of a letup for the past 4 years. Nicely done, BP, and any of y'all who happen to have published there in the past half-decade.

I've been taking whacks at the Journal of Neuroscience all year, so I almost feel like this is piling on. But the long steady trend has dropped it below a 6, listed at 5.9 this year. Oy vey.

Looks like Addiction Biology has finally overreached with its JIF strategy. It jumped up to the 5.9 level in 2012-2013 but couldn't sustain it; two consecutive years of declines lower it to 4.5. Even worse, it has surrendered the top slot in the Substance Abuse category. As we know, this particular journal maintains an insanely long pre-print queue, with some papers being assigned to print two whole calendar years after appearing online. Will anyone put up with this anymore, now that the JIF is declining and it isn't even best-in-category? I think this is not good for AB.

A number of journals in the JIF 4-6 category that I follow are holding steady over the past several years; that's good to see.

Probably the most striking observation is a relatively consistent downward trend for the JIF 2-4 journals that I watch. These are journals whose JIFs had generally trended upward (slowly, slowly) from 2006 or so until the past couple of years. I assumed that was a reflection of more scientific articles being published and therefore more citations being available. Perhaps this deflationary period is temporary. Or perhaps it reflects the journals that I follow not keeping up with the times in terms of content?
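For context on why total publication volume matters here, recall how the headline number is put together. Below is a minimal sketch of the standard two-year JIF calculation in Python; the function name and the counts are invented for illustration, not any journal's actual figures.

```python
# Minimal sketch of the standard two-year Journal Impact Factor.
# Example: the 2015 JIF is the number of 2015 citations to items the
# journal published in 2013-2014, divided by the number of citable
# items (articles and reviews) it published in 2013-2014.

def two_year_jif(citations_to_window: int, citable_items_in_window: int) -> float:
    """Citations per citable item over the trailing two-year window."""
    return citations_to_window / citable_items_in_window

# Hypothetical counts: 3000 citations to 510 citable items is a JIF of ~5.9.
print(round(two_year_jif(3000, 510), 1))  # 5.9

# The numerator counts citations from the entire indexed literature, so
# growth in overall publication volume can push a JIF up (and a
# slowdown can pull one down) even if the journal itself is unchanged.
```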

As always, interested to hear what is going on with the journals in the fields you follow, folks. Have at it in the comments.

48 responses so far

Science magazine selects Jeremy Berg as EIC

May 25 2016, under Science Publication, Scientific Publication

Well this is certainly exciting news!

Jeremy Berg, a biochemist and administrator at the University of Pittsburgh (Pitt) in Pennsylvania, will become the next editor-in-chief of Science magazine on 1 July. A former director of the National Institute of General Medical Sciences (NIGMS) who has a longstanding interest in science policy, Berg will succeed Marcia McNutt, who is stepping down to become president of the National Academy of Sciences.

I am a big fan of Jeremy Berg: of his efforts to use data to drive policy when he headed one of the NIH ICs, and of his efforts as a civilian to extract NIH grant data via FOIA requests for posting on his DataHound blog.

I look forward to a new era at Science magazine with an EIC who prefers that institutions make their decisions based on data and that they be as transparent as possible about their processes.

15 responses so far

More Neuroscience Smack

[Figure: J Neuro JIF trend through 2014, compared with the journals discussed below]

I select these journals for comparison for a reason, of course. First, I'm in the addiction fields and Addiction Biology tops the JIF list of ISI Journal Citation Reports for the subcategory of Substance Abuse. Second, Biological Psychiatry and Neuropsychopharmacology publish a lot of behavioral pharmacology, another superset under which my work falls.

The timeline is one of convenience; do note that I was in graduate school long before this.

When I entered graduate school, it was clear that publishing in the Journal of Neuroscience was considered something special. All the people presenting work from the platform at the Annual Meeting of the SfN were publishing relentlessly in JNeuro. People with posters drawing a crowd five people deep and spilling over the adjacent posters in an arc? Ditto.

I was in graduate school to study behavior, first, and something about the way the body accomplished these cool tasks second. This is still pretty much true, btw. For various reasons, I oriented toward the chemical communication and information transmission processes of the brain as my favored level of analysis. In short, I became a behavioral pharmacology person in orientation.

In behavioral pharmacology, the specificity of the analysis depends on three overarching factors. First, the components of the nervous system which respond to given drug molecules. Second, the specificity with which any given exogenous drug manipulation may act. Third, the regional constraints under which the drug manipulation is applied. By the time I entered graduate school, the scope of manipulations was relatively well developed. Sure, not all tools ended up having exactly the specificity that they were assumed to have. New receptor and transporter and intracellular chemical recognition sites were discovered frequently. Still are. But on the whole, we knew a lot about the interpretive space within which new experiments were being conducted.

I contrast this with lesion work, because at the time I was in graduate school another level of analysis was also popular: the brain lesion. This refers to a set of techniques in which regions of the brain were surgically deactivated or removed as the primary manipulation. The interpretive space tended to include fierce debate over the specificity with which the lesion had been produced. The physical area removed was rarely consistent in extent, even within one study. Different approaches to the target might entail various collateral damages that were essentially ignored within a paper. The regions that were ablated contained, of course, a multitude of neuronal and glial subtypes, and occasionally axonal tracts that were just passing through the neighborhood. Specificity was, in a word, poor.

I noticed very early in my days of grinding reading of my areas of interest that the Journal of Neuroscience just LOOOOOOOVED them a lesion study. And absolutely hated behavioral pharmacology.

I was, for a time, dismayed.

I couldn't believe it. The level of confidence in the claims relative to the experimental evidence was ridiculously poor for lesions versus pharmacology. The designs were less comprehensive and less well controlled. Inconvenient bits of evidence presented early on were entirely forgotten in the later rush to claim lesion/behavior impairment specificity. The rapid-fire exchange of data in publications from the competing labs was exciting, but it really pointed out the flaws in the whole premise.

At the very least, you could trade one level of uncertainty in the behavioral pharmacology for an equally troublesome uncertainty in the lesion world.

It boggled my mind that one of these technique domains and levels of analysis was considered The Awesome for the flagship journal of the very prestigious and large Society for Neuroscience and the other was considered unworthy*.

Particularly when I would see the broad stretch of interpretive domains that enjoyed space and an audience at the Annual Meeting of the Society for Neuroscience. It did not escape my attention that the SfN was delighted to take dues and Annual Meeting fees from people conducting a whole host of neuroscience investigations (far, far beyond the subject of this post, btw; I have another whole rant on the topic of behavioral specificity and the lack thereof) that would never be considered for publication in J Neuro on a categorical basis.

It has been a long time since my dawning realization of these issues and I have survived just fine, so far, doing the things that interest me in science. I may have published work once or twice in J Neuro but I generally do not, and cannot. They are still no fans of what I think is most interesting in science.

It turns out that the journals that are fans of behavioral pharmacology (see Figure above) do publish some of the stuff that I think is most interesting. They are accepting of the levels of analysis that interest me most, in addition to considerable overlap with the J Neuro-acceptable analyses of the present day. And as time has gone by, the JIF of these journals has risen while that of J Neuro has fallen. Debate the reasons for this as you like; we all know there are games to be played to change the JIF calculation. But ultimately, papers are cited or not, and this has a role in driving the JIF.

I also watch the JIF numbers for a whole host of journals that publish far more pedestrian work than these journals do. The vast majority are on slight upward trends. More science is being published and more citations are available for distribution, so this makes a lot of sense.

J Neuro tends to stand out as the only one on a long and steady downward trend.

If J Neuro doesn't halt this slide, it will end up down in the weeds of the 3-5 JIF range pretty soon. It will have a LOT more company down there. And its pretensions to being the venue for the very best neuroscience work will be utterly over.

I confess I am a little bit sad about this. It is very hard to escape the imprinting of my undergraduate and graduate school education years. Not too sad, mind you; I definitely enjoy the schadenfreude of their demise.

But I am a little sad. This Journal is supposed to be awesome in my mind. It still publishes a lot of good stuff. And it deserves a lot of credit for breaking the Supplemental Materials cycle a few years ago. I still like the breadth and excitement of the SfN Annual Meeting which gives me a related warm fuzzy for the Journal.

But still. If they go down, they have nothing but themselves to blame. And I'm okay being the natterer who gets to sneer that he told 'em so.

__
*There is an argument to be made, one that is made by many, that the real problem at J Neuro is not the topic domains, per se, but rather a broader issue of the insider club that runs SfN and therefore the Journal**. I am not sure I really care about this too much because the result is the same.

**One might observe that publications which appear to be exceptions to the technique-domain rules usually come with insider-club authors.

29 responses so far

Complete stories, demonstrations of mechanism and other embarrassing fictions

There's a new post up at The Ideal Observer.

Many times you find people talking about how many papers a scientist has published, but does anyone seriously think that that is a useful number? One major factor is that individual researchers and communities have dramatically different ideas about what constitutes a publication unit.

Go read and comment.

No responses yet

Review unto others

I think I've touched on this before but I'm still seeking clarity.

How do you review?

For a given journal, let's imagine this time, that you sometimes get manuscripts rejected and sometimes get acceptances.

Do you review manuscripts for that journal as you would like to be reviewed?

Or as you have perceived yourself to have been reviewed?

Do you review according to your own evolved wisdom or with an eye to what you perceive the Editorial staff of the journal desire?

31 responses so far

Revise After Rejection

This mantra, provided by all good science supervisor types including my mentors, cannot be repeated too often.

There are some caveats, of course. Sometimes, for example, the reviewer wants you to temper justifiable interpretive claims or cut Discussion points that interest you.

That sort of change you only need to make as a response to review, when it has a chance of securing acceptance.

Outrageous claims that are going to be bait for any reviewer? Sure, back those down.

17 responses so far

Bias at work

A piece in Vox summarizes a study from Nextions showing that lawyers are more critical of a brief written by an African-American. 

I immediately thought of scientific manuscript review and the not-unusual request to have a revision "thoroughly edited by a native English speaker". My confirmation bias suggests that this is way more common when the first author has an apparently Asian surname.

It would be interesting to see a similar balanced test for scientific writing and review, wouldn't it?

My second thought was.... Ginther. Is this not another one of the thousand cuts contributing to African-American PIs' lower success rates and need to revise the proposal extra times? Seems as though it might be. 

22 responses so far

Strategic advice

Reminder for when you are submitting your manuscript to a dump journal.

Many of the people involved with what you consider to be a dump journal* for your work may not see it as quite so lowly a venue as you do.

This includes the AEs and reviewers, possibly the Editor in Chief as well. 

Don't patronize them. 
___

*again, this is descriptive and not pejorative in my use. A semi-respectable place where you can get a less-than-perfect manuscript published without too much hassle**.

**you hope.

43 responses so far

A new way to publish your dataset #OA waccaloons!

Elsevier has a new ....journal? I guess that is what it is.

Data in Brief

From the author guidelines:

Data in Brief provides a way for researchers to easily share and reuse each other's datasets by publishing data articles that:

- Thoroughly describe your data, facilitating reproducibility.
- Make your data, which is often buried in supplementary material, easier to find.
- Increase traffic towards associated research articles and data, leading to more citations.
- Open up doors for new collaborations.

Because you never know what data will be useful to someone else, Data in Brief welcomes submissions that describe data from all research areas.

At the moment they only list Section Editors in Proteomics, Materials Science, Molecular Phylogenetics and Evolution, Engineering and Genomics. So yes, there will apparently be peer review of these datasets:

Because Data in Brief articles are pure descriptions of data they are reviewed differently than a typical research article. The Data in Brief peer review process focuses on data transparency.

Reviewers review manuscripts based on the following criteria:

- Do the description and data make sense?
- Do the authors adequately explain its utility to the community?
- Are the protocol/references for generating data adequate?
- Data format (is it standard? potentially re-usable?)
- Does the article follow the Data in Brief template?
- Is the data well documented?

Data in Brief that are converted supplementary files submitted alongside a research article via another Elsevier journal are editorially reviewed....

Wait. What's this part now?

Here's what the guidelines at a regular journal, also published by Elsevier, have to say about the purpose of Data in Brief:

Authors have the option of converting any or all parts of their supplementary or additional raw data into one or multiple Data in Brief articles, a new kind of article that houses and describes their data. Data in Brief articles ensure that your data, which is normally buried in supplementary material, is actively reviewed, curated, formatted, indexed, given a DOI and publicly available to all upon publication. Authors are encouraged to submit their Data in Brief article as an additional item directly alongside the revised version of their manuscript. If your research article is accepted, your Data in Brief article will automatically be transferred over to Data in Brief where it will be editorially reviewed and published in the new, open access journal, Data in Brief. Please note an open access fee is payable for publication in Data in Brief.

emphasis added.

So, for those of you who want to publish the data underlying your regular research article: instead of having it go unheeded in a Supplementary Materials pdf, you now have the opportunity to pay an Open Access fee to get yourself a DOI for it.

20 responses so far
