Archive for the 'Scientific Publication' category

Creative artists and the writing of scientific manuscripts

I am a consumer of the creative arts and, really, have always been in awe of creative artists. Looking back chronologically over my lifetime, my greatest consumption and appreciation has been fiction writing, music and cartooning (particularly the political variety). I'm not a big fan of flat art (sculpture speaks to me much more) but I am definitely amazed by what some people can paint, draw and the like. I do like moving picture arts but I don't think I have any particular sense of awe for them as a craft and certainly not for the participants as creative artists*. I get that others can see this, however.

Anyway, the creative artists are amazing to me.

A couple of days ago it occurred to me that understanding the process of creative arts might help cross what I find to be a somewhat frustrating bridge in training other people to write scientific manuscripts.

Sidebar: I am pretty sure we've discussed related topics before on the blog, but I can't remember when so I'm probably going to repeat myself.

When I first started to write scientific manuscripts I quite reasonably suffered under the misunderstanding that you sort of did the experiments you planned and then wrote them (all of them) up in chronological order and badda boom, published it somewhere. That is because, I assume, many scientific manuscripts read as if that is how they were created. And there are probably some aspects of "Research Design 101" instruction that convince young scientists that this is the way things work.

Then, when it is your own work, there are two additional factors that press down and shape your writing process. First, a sense of both pride and entitlement for your effort which tells your brain that surely every damn thing you worked on needs to fuel a publication. Second, a sense that writing is hard and you want to know in advance exactly what to write so that no effort is wasted.

"Wasted".

And this is where the creative arts come in.

Now, I've never lived cheek by jowl with a creative artist and I am only superficially familiar with what they do. But I am pretty convinced it is an iterative, inefficient process. Flat art folks seem to sketch. A lot. They work on an eye. An expression. A composition. A leg. An apple. A pair of shoes. Novelists and short story authors work on themes. Characters. Plot elements. They write and tear their hair out. Some of this is developing skill, sure, but much of this for a reasonably mature creative person is just working the job. They create copious amounts of material that is only leading up to the final product.

And the final product, I surmise, is built from the practice elements. A plot or character for a story. A curve of a mouth for a portrait. Melody. Chord progressions. A painted sunbeam. The artist starts stitching together a complete work out of elements.

I think you need to get into this mode as a scientist who is writing up manuscripts.

We stitch together a work out of elements as well. Now in our case, the elements are not made up. They are data. That we've collected. And we spend a heck of a lot of time on the quality of those elements. But eventually, we need to tell a story from those parts.

N.b. This is not storyboarding. Storyboarding is setting out the story you want to tell and then later going out and creating the elements (aka, figures) that you need to tell this particular story. That way lies fraud.

The creative process is looking at the elements of truth that you have available to you, from your labors to create good data, and then trying to see how they fit together into a story.

The transition that one has to make as a scientist is the ability to work with the elements, to put in serious labor trying to fit them together, and then to be willing to scrap the effort and start over. I think that if you don't get in there and do the work, writing, writing, writing and analyzing and considering what the data are telling you, you make less progress.

Because the alternative is paralyzing. The alternative is that you keep putting off the creative process until something tells you how to write "efficiently". Maybe it is that you are waiting for just the right experimental result to clarify a murky situation. Maybe you are waiting for your PI or collaborator or fellow trainee to tell you what to do, what to write, how to structure the paper.

I suppose it may look like this to a relatively inexperienced writer of manuscripts? That it's a bit daunting and that if only the PI would say the right words then somehow it would be magically easy to "efficiently" write up the paper in the right way that she expects?

When I hear generic muttering from trainees about frustration with insufficient feedback from a mentor I sometimes wonder if this is the problem. An over-expectation of specific direction on what to write, how to write it, and what the story is.

The PI, of course, wants the trainee to take their own shot at telling the story. Whereupon they will promptly red pen the hell out of all that "work" and tell the trainee to rewrite most of it and take a totally different tack. Oh, and run these two more experiments. And then the trainee wonders "why didn't my PI tell me what she wanted in the first place instead of wasting my time??? GAh, I have the worst possible mentor!"

I realized within the past year or so that I have the same problem that I have criticized on the blog for years now. I tell new professors that they need to get away from the bench as quickly as possible and that this is not their job anymore. I tell them they have to find a way to get productivity out of their staff and that doing experiments is not their job anymore. I never had this problem as a transitioning scientist...I was fine getting away from the bench**.

But my equivalent is data analysis. And I'm not talking high falutin' stuff that only I can do, either. I want to see the data! Study by study. As it rolls in, even. I want to examine it, roll around in it. Create graphs and run some stats. Think about what it means and how it fits into my developing understanding of a research direction in our laboratory. I can't wait to think about how this new figure might fit into one of our ongoing creative works...i.e., a manuscript.

I cannot give it up.

I create a lot of sketches, half-plotted stories and cartoon panels. Elements. Themes. Drafts.

Many of these will never go into any published manuscript. If lucky, some of these building blocks will make their way into a slide presentation or into a grant as preliminary data. I never feel as though the effort is wasted, however. Making these bits and pieces is, to me, what allows me to get from here to there. From blank page to published manuscript.

Ideally, as I am supposedly training people to become independent scientists, I would like to train them to do this in the way that I do. And to get there, I have to get them across the hurdle of the creative artist. I have to get them to see that just rolling up your sleeves and doing the work is a necessary part of the process. You cannot be told a route, or receive a Revelation, that makes the process of creating a scientific manuscript efficient. You have to work on the elements. Make the sketches. Flesh out the plotlines.

And then be willing to scrap a bunch of "work" because it is not helping you create the final piece.

__
*I have a friend that is behind the camera on teevee shows. Big name teevee shows that you've heard of and watch. I see his work and I'm not really Seeing. His. Work. But this guy casually takes a few vacation pictures and I'm amazed at his eye, composition, etc. He doesn't seem to even consider himself a still camera artist, acts like he considers himself barely a hobbyist at that! So clearly I'm missing something about moving picture photography.

**I'm not actually a bench scientist; it was the ~equivalent of the bench in my case.

7 responses so far

Theological waccaloons win because they are powered by religious fervor and exhaust normal people

Feb 14 2018 Published under Open Access, Peer Review, Scientific Publication

Some self-congratulatory meeting of the OpenAccess Illuminati* took place recently and a summary of takeaway points has been posted by Stephen Curry (the other one).

These people are exhausting. They just keep bleating away with their talking points and refuse entirely to ever address the clear problems with their plans.

Anonymous peer review exists for a reason.

To hear them tell it, the only reason is so hateful incompetent reviewers can prevent their sterling works of genius from being published right away.

This is not the reason for having anonymous peer review in science.

Their critics regularly bring up the reason we have anonymous peer review and the virtues of such an approach. The OA Illuminati refuse to address this. At best they will vaguely acknowledge their understanding of the issue and then hand wave about how it isn't a problem just ...um...because they say so.

It's also weird that 80%+ of their supposed problems with peer review as we know it are attributable to their own participation in the Glamour Science game. Some of them also see problems with GlamHumping but they never connect the dots to see that Glamming is the driver of most of their supposed problems with peer review as currently practiced.

Which tells you a lot about how their real goals align with the ones that they talk about in public.

Edited to add:
Professor Curry weighed in on twitter to insist that the goal is not to force everyone to sign reviews. See, his plan allows people to opt out if they choose. This is probably even worse for the goal of getting an even-handed and honest review of scientific papers. And, even more tellingly, it designs the experiment so that it cannot do anything other than provide evidence in support of their hypothesis. Neat trick.

Here's how it will go down. People will sign their reviews when they have "nice, constructive" things to say about the paper. BSDs, who are already unassailable and are the ones self-righteously saying they sign all their reviews now, will continue to feel free to be dicks. And the people** who feel that attaching their name to their true opinion is risky will still feel pressure. To not review, to soft-pedal and sign, or to supply an unsigned but critical review. All of this is distorting.

Most importantly for the open-review fans, it will generate a record of signed reviews that seem wonderfully constructive or deserved (the Emperor's, sorry BSDs, critical pants are very fine indeed) and a record of seemingly unconstructive critical unsigned reviews (which we can surely dismiss because they are anonymous cowards). So you see? It proves the theory! Open reviews are "better" and anonymous reviews are mean and unjustified. It's a can't-miss bet for these people.

The choice to not-review is significant. I know we all like to think that "obvious flaws" would occur to anyone reading a paper. That's nonsense. Having been involved in manuscript and grant review for quite some time now I am here to tell you that the assigned reviewers (typically 3) all provide unique insight. Sometimes during grant review other panel members see things the three assigned people missed, and in manuscript review the AE or EIC sees something. I'm sure you could run parallel sets of three reviewers and it would take quite a large sample before every single concern had been identified. Compare this experience to the number of comments that are made in all of the various open-commenting systems (the PubMed Commons commenting system was just shuttered for lack of general interest, by the way) and we simply cannot believe claims that any reviewer can be omitted*** with no loss of function. Not to mention the fact that open commenting systems are just as subject to the above-discussed opt-in problems as are signed official systems of peer review.
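The sampling point is worth making concrete. Here is a toy simulation, purely my own illustration with invented flaw-detection probabilities rather than data from any real review process, of how many readers it takes before every distinct concern in a paper has been raised at least once:

```python
import random

# Toy model: a paper contains several distinct flaws and each reviewer
# independently notices each flaw with some probability. How many
# reviewers must read the paper before every flaw is caught at least
# once? All probabilities below are invented for illustration.

def reviewers_needed(detect_probs, rng):
    """Draw reviewers one at a time until every flaw has been caught."""
    undetected = set(range(len(detect_probs)))
    count = 0
    while undetected:
        count += 1
        undetected = {i for i in undetected
                      if rng.random() >= detect_probs[i]}
    return count

rng = random.Random(42)
probs = [0.8, 0.5, 0.3, 0.2, 0.1]  # from an obvious flaw to a subtle one
trials = [reviewers_needed(probs, rng) for _ in range(10_000)]
print(sum(trials) / len(trials))  # mean is well above 3 reviewers
```

Even with generous detection rates for the obvious problems, the subtle ones push the required number of readers well past the standard three, and that is before anyone opts out.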
__
*hosted at HHMI headquarters which I’m sure tells us nothing about the purpose

**this is never an all-or-none choice associated with reviewer traits. It will be a manuscript-by-manuscript choice process, which makes it nearly impossible to assess the quelling and distorting effect this will have on high quality review of papers.

***yes, we never have an overwhelmingly large sample of reviewers. The point here is the systematic distortion.

26 responses so far

NIH encourages pre-prints

In March of 2017 the NIH issued a notice on Reporting Preprints and Other Interim Research Products (NOT-OD-17-050): "The NIH encourages investigators to use interim research products, such as preprints, to speed the dissemination and enhance the rigor of their work."

The key bits:

Interim Research Products are complete, public research products that are not final.

A common form is the preprint, which is a complete and public draft of a scientific document. Preprints are typically unreviewed manuscripts written in the style of a peer-reviewed journal article. Scientists issue preprints to speed dissemination, establish priority, obtain feedback, and offset publication bias.

Another common type of interim product is a preregistered protocol, where a scientist publicly declares key elements of their research protocol in advance. Preregistration can help scientists enhance the rigor of their work.

I am still not happy about the reason this happened (i.e., Glam hounds trying to assert scientific priority in the face of the Glam Chase disaster they themselves created) but this is now totally beside the point.

The NIH policy (see OpenMike blog entry for more) has several implications for grant seekers and grant holders which are what form the critical information for your consideration, Dear Reader.

I will limit myself here to materials that are related to standard paper publishing. There are also implications for materials that would never be published (computer code?) but that is beyond the scope for today's discussion.

At this point I will direct you to bioRxiv and PsyArXiv if you are unfamiliar with some of the more popular approaches for pre-print publication of research manuscripts.

The advantages to depositing your manuscripts in pre-print form are all about priority and productivity, in my totally not humble opinion. The former is why the Glamour folks are all a-lather but priority and scooping affect all of us a little differently. As most of you know, scooping and priority are not a huge part of my professional life but, all things equal, it's better to get your priority on record. In some areas of science it is career making/breaking and grant getting/rejecting to establish scientific priority. So if this is a thing for your life, this new policy allows and encourages you to take advantage.

I'm more focused on productivity. First, this is an advantage for trainees. We've discussed the tendency of new scientists to list manuscripts "in preparation" on their CV or Biosketch (for fellowship applications, say, despite it being technically illegal). This designation is hard to evaluate. A nearing-defense grad student who has three "in prep" manuscripts listed on the CV can appear to be bullshitting you. I always caution people that if they list such things they had better be prepared to send a prospective post-doc supervisor a mostly-complete draft. Well, now the pre-print option allows anyone to post "in preparation" drafts so that anyone can verify their status. Very helpful for graduate students who have a short timeline versus the all too typical cycle of submission/rejection/resubmission/revision, etc. More importantly, the NIH previously frowned on listing "in preparation" or "in review" items on the Biosketch. This was never going to result in an application being returned unreviewed but it could sour the reviewers. And of course any rule followers out there would simply not list any such items, even if there was a minor revision being considered. With pre-print deposition, and the ability to list it on an NIH Biosketch and cite it in the Research Plan, there is no longer any vaporware-type situation. The reviewer can look at the pre-print and judge the science for herself.

This applies to junior PIs as well. Most likely, junior PIs will have fewer publications, particularly from their brand new startup labs. The ability of the PI to generate data from her new independent lab can be a key issue in grant review. As with the trainee, the cycle of manuscript review and acceptance is lengthy compared with the typical tenure clock. And of course many junior PIs are trying to balance JIF/Glam against this evidence of independent productivity. So pre-print deposition helps here.

A very similar situation can apply to us not-so-junior PIs who are proposing research in a new direction. Sure, there is room for preliminary data in a grant application but the ability to submit data in manuscript format to the bioRxiv or some such is unlimited! Awesome, right?

15 responses so far

Does it matter how the data are collected?

Commenter jmz4 made a fascinating comment on a prior post:


It is not the journals responsibility to mete out retractions as a form of punishment(&). Only someone that buys into papers as career accolades would accept that. The journal is there to disseminate accurate scientific information. If the journal has evidence that, despite the complaint, this information is accurate,(%) then it *absolutely* should take that into account when deciding to keep a paper out there.

(&) Otherwise we would retract papers from leches and embezzlers. We don't.

That prior post was focused on data fraud, but this set of comments suggests something a little broader.

I.e., that facts are facts and it doesn't matter how we have obtained them.

This, of course, brings up the little nagging matter of the treatment of research subjects. As you are mostly aware, Dear Readers, the conduct of biomedical experimentation that involves human or nonhuman animal subjects requires an approval process. Boards of people external to the immediate interests of the laboratory in question must review research protocols in advance and approve the use of human (Institutional Review Board; IRB) or nonhuman animal (Institutional Animal Care and Use Committee; IACUC) subjects.

The vast majority of (ok, all) journals of my acquaintance require authors to assert that they have indeed conducted their research under approvals provided by IRB or IACUC as appropriate.

So what happens when and if it is determined that experiments have been conducted outside of IRB or IACUC approval?

The position expressed by jmz4 is that it shouldn't matter. The facts are as they are, the data have been collected so too bad, nothing to be done here. We may tut-tut quietly but the papers should not be retracted.

I say this is outrageous and nonsense. Of course we should apply punitive sanctions, including retracting the paper in question, if anyone is caught trying to publish research that was not collected under proper ethical approvals and procedures.

In making this decision, the evidence for whether the conclusions are likely to be correct or incorrect plays no role. The journal should retract the paper to remove the rewards and motivations for operating outside of the rules. Absolutely. Publishers are an integral part of the integrity of science.

The idea that journals are just there to report the facts as they become known is dangerous and wrong.

__
Additional Reading: The whole board of Sweden's top-ranked university was just sacked because of the Macchiarini scandal

13 responses so far

No, Cell, the replication does not have bearing on the original fraud

Sep 12 2016 Published under Scientific Misconduct, Scientific Publication

Via the usual relentless trolling of YHN from Comrade PhysioProffe, a note on a fraud investigation from the editors of Cell.

We, the editors of Cell, published an Editorial Expression of Concern (http://dx.doi.org/10.1016/j.cell.2016.03.038) earlier this year regarding issues raised about Figures 2F, 2H, and 3G of the above article.
...
two labs have now completed their experiments, and their data largely confirm the central conclusions drawn from the original figures. Although this does not resolve the conflicting claims, based on the information available to us at this time, we will take no further action. We would like to thank the independent labs who invested significant time and effort in ensuring the accuracy of the scientific record.

Bad Cell. BAD!

We see this all the time, although usually it is the original authors, aided and abetted by the journal Editors, rather than the journal itself, making this claim. No matter if it is a claim to replace an "erroneous placeholder figure", or a full-on retraction by the "good" authors for fraud perpetrated by some [nonWestern] postdoc who cannot be located anymore, we see an attempt to maintain the priority claim. "Several labs have replicated and extended our work", is how it goes if the paper is an old one. "We've replicated the bad [nonWestern, can't be located] postdoc's work" if the paper is newer.

I say "aided and abetted" because the Editors have to approve the language of the authors' erratum, corrigendum or retraction notice. They permit this. Why? Well obviously because just as the authors need to protect their reputation, so does the journal.

So everyone plays this game that somehow proving the original claims were correct, reliable or true means that the original offense is lesser. And that the remaining "good" authors and the journal should get credited for publishing it.

I say this is wrong. If the data were faked, the finding was not supported. Or not supported to the degree that it would have been accepted for publication in that particular journal. And therefore there should be no credit for the work.

We all know that there is a priority and Impact Factor chase in certain types of science. Anything published in Cell quite obviously qualifies for the most cutthroat aspects of this particular game. Authors and editors alike are complicit.

If something is perceived to be hott stuff, both parties are motivated to get the finding published. First. Before those other guys. So...corners are occasionally cut. Authors and Editors both do this.

Rewarding the high risk behavior that leads to such retractions and frauds is not a good thing. While I think punishing proven fraudsters is important, it does not by any means go far enough.

We need to remove the positive reward environment. Look at it this way. If you intentionally fake data, or more likely subsets of the data, to get past that final review hurdle into a Cell acceptance, you are probably not very likely to get caught. If you are detected, it will often take years for this to come to light, particularly when it comes to a proven-beyond-doubt standard. In the meantime, you have enjoyed all the career benefits of that Glamour paper. Job offers for the postdocs. Grant awards for the PIs. Promotions. High $$ recruitment or retention packages. And generated even more Glam studies. So in the somewhat unlikely case of being busted for the original fake, many of the beneficiaries, save the poor sucker nonWestern postdoc (who cannot be located), are able to defend and evade based on stature.

This gentleman's agreement to view faked results that happen to replicate as no-harm, no-foul is part of this process. It encourages faking and fraud. It should be stopped.

One more interesting part of this case. It was actually raised by the self-confessed cheater!

Yao-Yun Liang of the above article informed us, the Cell editors, that he manipulated the experiments to achieve predetermined results in Figures 2F, 2H, and 3G. The corresponding author of the paper, Xin-Hua Feng, has refuted the validity of Liang’s claims, citing concerns about Liang’s motives and credibility. In a continuing process, we have consulted with the authors, the corresponding author’s institution, and the Committee on Publication Ethics (COPE), and we have evaluated the available original data. The Committee on Scientific Integrity at the corresponding author’s institution, Baylor College of Medicine, conducted a preliminary inquiry that was inconclusive and recommended no further action. As the institution’s inquiry was inconclusive and it has been difficult to adjudicate the conflicting claims, we have provided the corresponding author an opportunity to arrange repetition of the experiments in question by independent labs.

Kind of reminiscent of the recent case where the trainee and lab head had counterclaims against each other for a bit of fraudulent data, eh? I wonder if Liang was making a similar assertion to that of Dr. Cohn in the Mt. Sinai case, i.e., that the lab head created a culture of fraud or directly requested the fake? In the latter case, it looked like they probably only came down on the PI because of a smoking-gun email and the perceived credibility of the witnesses. Remember that ORI refused to take up the case so there probably was very little hard evidence on which to proceed. I'd bet that an inability to get beyond "he-said/he-said" is probably at the root of Baylor's "inconclusive" preliminary inquiry result for this Liang/Feng dispute.

33 responses so far

The many benefits of the LPU

A Daniel Sarewitz wrote an opinion piece in Nature a while back to argue that the pressure to publish regularly has driven down the quality of science. Moreover, he claims to have identified

...a destructive feedback between the production of poor-quality science, the responsibility to cite previous work and the compulsion to publish.

Sarewitz ends with an exhortation of sorts. To publish much less.

Current trajectories threaten science with drowning in the noise of its own rising productivity, a future that Price described as “senility”. Avoiding this destiny will, in part, require much more selective publication. Rising quality can thus emerge from declining scientific efficiency and productivity. We can start by publishing less, and less often, whatever the promotional e-mails promise us.

Interestingly, this "Price" whose thinking he seemingly follows wrote in 1963, long before modern search engines were remotely conceivable, and was, as Sarewitz himself observes, "an elitist".

Within a couple of generations, [Derek de Solla Price] said, it would lead to a world in which “we should have two scientists for every man, woman, child, and dog in the population”. Price was also an elitist who believed that quality could not be maintained amid such growth. He showed that scientific eminence was concentrated in a very small percentage of researchers, and that the number of leading scientists would therefore grow much more slowly than the number of merely good ones, and that would yield “an even greater preponderance of manpower able to write scientific papers, but not able to write distinguished ones”.

Price was worried about "distinguished", but Sarewitz has adopted this elitism to claim that the pressure to publish is actually causing the promotion of mistaken or bad science. And so we should all, I surmise, slow down and publish less. It is unclear what Sarewitz thinks about the "merely good" scientists identified by Price and whether they should be driven away or not. Though not explicitly stated, this piece does have a whiff of ol' Steve McKnight's complaints about the riff-raff to it.

Gary S. McDowell and Jessica K. Polka wrote in to observe that slowing the pace of publication is likely to hurt younger scientists who are trying to establish themselves.

In today's competitive arena, asking this of scientists — particularly junior ones — is to ask them to fall on their swords.

Investing more effort in fewer but 'more complete' publications could hold back early-career researchers, who already face fierce competition. To generate a first-author publication, graduate students on average take more than a year longer than they did in the 1980s (R. D. Vale Proc. Natl Acad. Sci. USA 112, 13439-13446; 2015). Introducing further delays for junior scientists is not an option as long as performance is rated by publication metrics.

One Richard Ebright commented thusly:

Wrong. All publications, by all researchers, at all career stages, should be complete stories. No-one benefits from publication of "minimum publishable units."

This is as wrong as wrong can be.

LPU: A Case Study

Let's take the case of W-18. It hit the mainstream media following identification in a few human drug overdose cases as "This new street drug is 10,000 times more potent than morphine" [WaPo version].

Obviously, this is a case for the pharmacological and substance abuse sciences to leap into action and provide some straight dope, er information, on the situation.

In the delusional world of the "complete story" tellers, this should be accomplished by single labs or groups, beavering away in isolation, declining to report their findings on W-18 until they have it all. That might incorporate wide-ranging in vitro pharmacology to describe activity or inactivity at major suspected sites of action. Pharmacokinetic data in one small and at least one large experimental species, maybe some human if possible. Behavioral pharmacology on a host of the usual assays for dose-effects, toxicity, behavioral domains, dependence, withdrawal, reward or drug liking, liability for compulsive use patterns, cognitive impact with chronic use. This list goes on. And for each in vivo measure, we may need to parse the contribution of several signalling systems that might be identified by the in vitro work.

That is a whole lot of time, effort and money.

In the world of the complete-story tellers, these might be going on in parallel in multiple lab groups who are duplicating each others' work in whole or in part.

Choices or assumptions made that lead to blind alleys will waste everyone's time equally.

Did I mention the funding yet?

Ah yes, the funding. Of course a full bore effort on this requires a modern research lab to have the cash to conduct the work. Sometimes it can be squeezed in alongside existing projects or initial efforts excused in the pursuit of Preliminary Data. But at some point, people are going to have to propose grants. Which are going to take fire for a lack of evidence that:

1) There is any such thing as a W-18 problem. Media? pfah, everyone knows they overblow everything. [This is where even the tiniest least publishable unit from epidemiologists, drug toxicologists or even Case Reports from Emergency Department physicians goes a loooooong way. And not just for grant skeptics. Any PI should consider whether a putative new human health risk is worth pouring effort and lab resources into. LPU can help that PI to judge and, if warranted, to defend a research proposal.]

2) There isn't anything new here. This is just a potent synthetic opiate, right? That's what the media headlines claim. [Except it is based on the patent description of a mouse writhing task. We have NO idea if it is even active at endogenous opiate receptors from the media and the patent. And hey! Guess what? The UNC drug evaluation core run by Bryan Roth found no freaking mu opioid receptor or delta or kappa opioid receptor activity for W-18!!! Twitter is a pretty LPU venue. And yet think of how much work this saves. It will potentially head off a lot of mistaken assaying looking for opioid activity across all kinds of lab types. After all, the above-listed logical progression is not what happens. People don't necessarily wait for comprehensive in vitro pharmacology to be available before trying out their favored behavioral assay.]

3) Whoa, totally unexpected turn for W-18 already. So what next? [Well, it would be nice if there were Case Reports of toxic effects, eh? To point us in the right direction. Are there hints of the systems that are affected in medical emergency cases? And if some investigators had launched some pilot experiments in their own favored domains before finding out the results from Roth, wouldn't it be useful to know what they have found? Why IS it that W-18 is active in writhing...or can't this patent claim be replicated? Is there an active metabolite formed? This obviously wouldn't have come up in the UNC assays as they just focus on the parent compound in the in vitro assays.]

Etcetera.

Science is iterative and collaborative.

It generates knowledge best and with the most efficiency when people are aware of what their peers are finding out as quickly as possible.

Waiting while several groups pursue a supposed "complete story" in parallel only for one to "win" and be able to publish while the other ones shake their scooped heads in shame and fail to publish such mediocrity is BAD SCIENCE.

51 responses so far

JIF notes 2016

If it's late June, it must be time for the latest Journal Impact Factors to be announced. (Last year's notes are here.)

Nature Neuroscience is confirming its dominance over Neuron with upward and downward trends, respectively, widening the gap.

Biological Psychiatry continues to skyrocket, up to 11.2. All pretensions from Neuropsychopharmacology of keeping pace are over; a third straight year of declines for the ACNP journal lands it at 6.4. Looks like the 2011-2012 inflation was simply unsustainable for NPP. BP is getting it done, though. No sign of a letup for the past 4 years. Nicely done, BP, and any of y'all who happen to have published there in the past half-decade.

I've been taking whacks at the Journal of Neuroscience all year so I almost feel like this is pile-on. But the long steady trend has dropped it below a 6, listed at 5.9 this year. Oy vey.

Looks like Addiction Biology has finally overreached with their JIF strategy. It jumped up to the 5.9 level in 2012-2013 but couldn't sustain it; two consecutive years of declines lower it to 4.5. Even worse, it has surrendered the top slot in the Substance Abuse category. As we know, this particular journal maintains an insanely long pre-print queue, with some papers being assigned to print two whole calendar years after appearing online. Will anyone put up with this anymore, now that the JIF is declining and it isn't even best-in-category anymore? I think this is not good for AB.

A number of journals in the JIF 4-6 category that I follow are holding steady over the past several years; that's good to see.

Probably the most striking observation is what appears to be a relatively consistent downward trend for JIF 2-4 journals that I watch. These were JIFs that have generally trended upward (slowly, slowly) from 2006 or so until the past couple of years. I assumed this was a reflection of more scientific articles being published and therefore more citations available. Perhaps this deflationary period is temporary. Or perhaps it reflects journals that I follow not keeping up with the times in terms of content?

As always, interested to hear what is going on with the journals in the fields you follow, folks. Have at it in the comments.

48 responses so far

Science magazine selects Jeremy Berg as EIC

May 25 2016 Published under Science Publication, Scientific Publication

Well this is certainly exciting news!

Jeremy Berg, a biochemist and administrator at the University of Pittsburgh (Pitt) in Pennsylvania, will become the next editor-in-chief of Science magazine on 1 July. A former director of the National Institute of General Medical Sciences (NIGMS) who has a longstanding interest in science policy, Berg will succeed Marcia McNutt, who is stepping down to become president of the National Academy of Sciences.

I am a big fan of Jeremy Berg: of his efforts to use data to drive policy when heading one of the NIH ICs, and of his efforts as a civilian to extract NIH grant data via FOIA requests for posting on his DataHound blog.

I look forward to a new era at Science magazine with an EIC who prefers that institutions make their decisions based on data and that they be as transparent as possible about their processes.

15 responses so far

More Neuroscience Smack

[Figure: 2014 Journal Impact Factor trends for the Journal of Neuroscience, Addiction Biology, Biological Psychiatry and Neuropsychopharmacology]

I select these journals for comparison for a reason, of course. First, I'm in the addiction fields and Addiction Biology tops the JIF list of ISI Journal Citation Reports for the subcategory of Substance Abuse. Second, Biological Psychiatry and Neuropsychopharmacology publish a lot of behavioral pharmacology, another superset under which my work falls.

The timeline is one of convenience; do note that I was in graduate school long before this.

When I entered graduate school, it was clear that publishing in the Journal of Neuroscience was considered something special. All the people presenting work from the platform at the Annual Meeting of the SfN were publishing relentlessly in JNeuro. People with posters drawing a crowd five people deep and spilling over the adjacent posters in an arc? Ditto.

I was in graduate school to study behavior, first, and something about the way the body accomplished these cool tasks second. This is still pretty much true, btw. For various reasons, I oriented toward the chemical communication and information transmission processes of the brain as my favored level of analysis. In short, I became a behavioral pharmacology person in orientation.

In behavioral pharmacology, the specificity of the analysis depends on three overarching factors. First, the components of the nervous system which respond to given drug molecules. Second, the specificity with which any given exogenous drug manipulation may act. Third, the regional constraints under which the drug manipulation is applied. By the time I entered graduate school, the scope of manipulations was relatively well developed. Sure, not all tools ended up having exactly the specificity that they were assumed to have. New receptor and transporter and intracellular chemical recognition sites were discovered frequently. Still are. But on the whole, we knew a lot about the interpretive space within which new experiments were being conducted.

I contrast this with lesion work, because at the time I was in graduate school there was another level of analysis that was also popular: the brain lesion. This refers to a set of techniques in which regions of the brain were surgically deactivated or removed as the primary manipulation. The interpretive space tended to include fierce debate over the specificity with which the lesion had been produced. The physical area removed was rarely consistent in extent, even within one study. Different approaches to the target might entail various collateral damages that were essentially ignored within a paper. The regions that were ablated contained, of course, a multitude of neuronal and glial subtypes and occasionally axonal tracts that were just passing through the neighborhood. Specificity was, in a word, poor.

I noticed very early in my days of grinding reading of my areas of interest that the Journal of Neuroscience just LOOOOOOOVED them a lesion study. And absolutely hated behavioral pharmacology.

I was, for a time, dismayed.

I couldn't believe it. The relative level of confidence in the claims versus the experimental evidence was ridiculously poor for lesions versus pharmacology. The designs were less comprehensive and less well controlled. The inconvenient bits of evidence provided early were entirely forgotten in a later rush to claim lesion/behavior impairment specificity. The rapid fire exchange of data in publications from the competing labs was exciting but really pointed out the flaws in the whole premise.

At the very least, you could trade one level of uncertainty of the behavioral pharmacology for an equally troublesome uncertainty in the lesion world.

It boggled my mind that one of these technique domains and levels of analysis was considered The Awesome for the flagship journal of the very prestigious and large Society for Neuroscience and the other was considered unworthy*.

Particularly when I would see the broad stretch of interpretive domains that enjoyed space and an audience at the Annual Meeting of the Society for Neuroscience. It did not escape my attention that the SfN was delighted to take dues and Annual Meeting fees from people conducting a whole host of neuroscience investigations (far, far beyond the subject of this post, btw. I have another whole rant on the topic of the behavioral specificity and lack thereof.) that would never be considered for publication in J Neuro on a categorical basis.

It has been a long time since my dawning realization of these issues and I have survived just fine, so far, doing the things that interest me in science. I may have published work once or twice in J Neuro but I generally do not, and cannot. They are still no fans of what I think is most interesting in science.

It turns out that journals that are fans of behavioral pharmacology (see Figure above) do publish some of the stuff that I think is most interesting. They are accepting of the levels of analysis that are most interesting to me, in addition to considerable overlap with the J Neuro-acceptable analyses of the present day. And as time has gone by, the JIF of these journals has risen while that of J Neuro has fallen. Debate the reasons for this as you like; we all know there are games to be played to change the JIF calculation. But ultimately, papers are cited or not, and this has a role in driving the JIF.
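For anyone who has not looked under the hood, the two-year JIF is just a ratio, which is exactly why the games work: pump citations into the numerator or keep items out of the "citable" denominator. A minimal sketch of the arithmetic, with invented counts rather than anything from the actual Journal Citation Reports:

```python
# Two-year JIF for year Y: citations received in year Y to items the
# journal published in Y-1 and Y-2, divided by the number of "citable"
# items (articles and reviews) published in Y-1 and Y-2.
# All counts below are invented for illustration.

def two_year_jif(citations_to_prior_two_years: int,
                 citable_items_prior_two_years: int) -> float:
    return citations_to_prior_two_years / citable_items_prior_two_years

print(two_year_jif(11_800, 2_000))  # 5.9
# Reclassify 200 items as non-citable front matter and, with the very
# same citations, the number climbs:
print(two_year_jif(11_800, 1_800))  # ~6.6
```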

I watch the JIF numbers for a whole host of journals that publish a lot more pedestrian work than these journals do as well. The vast majority are on slight upward trends. More science is being published and more citations are available for distribution, so this makes a lot of sense.

J Neuro tends to stand out as the only one on a long and steady downward trend.

If J Neuro doesn't halt this slide, it will end up down in the weeds of the 3-5 JIF range pretty soon. It will have a LOT more company down there. And its pretensions to being the venue for the very best neuroscience work will be utterly over.

I confess I am a little bit sad about this. It is very hard to escape the imprinting of my undergraduate and graduate school education years. Not too sad, mind you; I definitely enjoy the schadenfreude of their demise.

But I am a little sad. This Journal is supposed to be awesome in my mind. It still publishes a lot of good stuff. And it deserves a lot of credit for breaking the Supplemental Materials cycle a few years ago. I still like the breadth and excitement of the SfN Annual Meeting which gives me a related warm fuzzy for the Journal.

But still. If they go down they have nothing but themselves to blame. And I'm okay being the natterer who gets to sneer that he told 'em so.

__
*There is an argument to be made, one that is made by many, that the real problem at J Neuro is not the topic domains, per se, but rather a broader issue of the insider club that runs SfN and therefore the Journal**. I am not sure I really care about this too much because the result is the same.

**One might observe that publications which appear to be exceptions to the technique-domain rules usually come with insider-club authors.

29 responses so far

Complete stories, demonstrations of mechanism and other embarrassing fictions

There's a new post up at The Ideal Observer.

Many times you find people talking about how many papers a scientist has published, but does anyone seriously think that that is a useful number? One major factor is that individual researchers and communities have dramatically different ideas about what constitutes a publication unit.

Go read and comment.

No responses yet
