Archive for the 'Science Publication' category

Self plagiarism

A journal has recently retracted an article for self-plagiarism:

Just going by the titles, this appears to be a case where review or theory material is published over and over in multiple venues.

I may have complained on the blog once or twice about people in my fields of interest who publish review after thinly updated review, year after year.

I've seen one or two people use this strategy, in addition to a high rate of primary research articles, to blanket the world with their theoretical orientations.

I've seen a small cottage industry do the "more reviews than data articles" strategy for decades in an attempt to budge the needle on a therapeutic modality that shows promise but lacks full financial support from, e.g., the NIH.

I still don't believe "self-plagiarism" is a thing. To me plagiarism is stealing someone else's ideas or work and passing them off as one's own. When art critics see themes from prior work being perfected or included or echoed in the masterpiece, do they scream "plagiarism"? No. But if someone else does it, that is viewed as copying. And lesser. I see academic theoretical and even interpretive work in this vein*.

To my mind the publishing industry has a financial interest in this conflation because they are interested in novel contributions that will presumably garner attention and citations. Work that is duplicative may be seen as lesser because it divides up citation to the core ideas across multiple reviews. Given how the scientific publishing industry leeches off content providers, my sympathies are.....limited.

The complaint from within the house of science, I suspect, derives from a position of publishing fairness? That some dude shouldn't benefit from constantly recycling the same arguments over and over? I'm sort of sympathetic to this.

But I think it is a mistake to give in to the slippery slope of letting the publishing industry establish this concept of "self-plagiarism". The risks for normal science pubs that repeat methods are too high. The risks for "replication crisis" solutions are too high. After all, a substantial replication study would require duplicative Introductory and interpretive comment, would it not?

__

*although "copying" is perhaps unfair and inaccurate when it comes to the incremental building of scientific knowledge as a collaborative endeavor.

8 responses so far

Citing Preprints

In my career I have cited many non-peer-reviewed sources within my academic papers. Off the top of my head this has included:

  1. Government reports
  2. NGO reports
  3. Longitudinal studies
  4. Newspaper items
  5. Magazine articles
  6. Television programs
  7. Personal communications

I am aware of at least one journal that suggests that "personal communications" should be formatted in the reference list just like any other reference, instead of the usual parenthetical comment.

It is much, much less common now but it was not that long ago that I would run into a citation of a meeting abstract with some frequency.

The entire point of citation in a scientific paper is to guide the reader to an item from which they can draw their own conclusions and satisfy their own curiosity. One expects, without having to spell it out each and every time, that a citation of a show on ABC has a certain quality to it that is readily interpreted by the reader. Interpreted as different from a primary research report or a news item in the Washington Post.

Many fellow scientists also make a big deal out of their ability to suss out the quality of primary research reports merely by where they were published. Maybe even by the lab that published them.

And yet.

Despite all of this, I have seen more than one reviewer objection to citing a preprint that has been posted on bioRxiv.

As if it is somehow misleading the reader.

How can all these above mentioned things be true, be such an expectation of reader engagement that we barely even mention it but whooooOOOAAAA!

All of a sudden the citation of a preprint is somehow unbelievably confusing to the reader and shouldn't be allowed.

I really love* the illogical minds of scientists at times.

26 responses so far

Time to N-up!

May 02 2018 · Published under Science Publication

Chatter on the Twitts today brought my attention to a paper by Weber and colleagues that had a rather startlingly honest admission.

Weber F, Hoang Do JP, Chung S, Beier KT, Bikov M, Saffari Doost M, Dan Y. Regulation of REM and Non-REM Sleep by Periaqueductal GABAergic Neurons. Nat Commun. 2018 Jan 24;9(1):354. doi: 10.1038/s41467-017-02765-w.

If you page all the way down to the end of the Methods of this paper, you will find a statement on sample size determination. I took a brief stab at trying to find the author guidelines for Nature Communications because a standalone statement of how sample size was arrived upon is somewhat unusual to me. Not that I object, I just don't find this to be common in the journal articles that I read. I was unable to locate it quickly so... moving along to the main point of the day. The statement reads, in part:

Sample sizes

For optogenetic activation experiments, cell-type-specific ablation experiments, and in vivo recordings (optrode recordings and calcium imaging), we continuously increased the number of animals until statistical significance was reached to support our conclusions.

Wow. WOW!

This flies in the face of everything I have ever understood about proper research design. In the Research Design 101 approach, you determine* your ideal sample size in advance. You collect your data in essentially one go and then you conduct your analysis. You then draw your conclusions about whether the collected data support, or fail to support, rejection of a null hypothesis. This can then allow you to infer things about the hypothesis that is under investigation.
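For concreteness, the orthodox "in advance" determination typically means a power analysis. A minimal sketch of the standard calculation (my own illustrative helper, using the normal approximation to the two-sided two-sample t-test rather than the exact t-based version):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for comparing two group means.

    Normal approximation: n = 2 * ((z_alpha + z_beta) / d)^2,
    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = z(power)            # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.8))   # "large" effect -> roughly 25 per group
print(n_per_group(0.5))   # "medium" effect -> roughly 63 per group
```

The point is that the target N is fixed before any data are collected; the data then get one shot at rejecting the null at that N.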

In the real world, we modify this a bit. And what I am musing about today is why some of the ways that we stray from Research Design orthodoxy are okay and some are not.

We talk colloquially about finding support for (or against) the hypothesis under investigation. We then proceed to discuss the results in terms of whether they tend to support a given interpretation of the state of the world or a different interpretation. We draw our conclusions from the available evidence- from our study and from related prior work. We are not, I would argue, supposed to be setting out to find the data that "support our conclusions" as mentioned above. It's a small thing and may simply reflect poor expression of the idea. Or it could be an accurate reflection that these authors really set out to do experiments until the right support for a priori conclusions has been obtained. This, you will recognize, is my central problem with people who say that they "storyboard" their papers. It sounds like a recipe for seeking support, rather than drawing conclusions. This way lies data fakery and fraud.

We also, importantly, make the best of partially successful experiments. We may conclude that there was such a technical flaw in the conduct of the experiment that it is not a good test of the null hypothesis. And essentially treat it in the Discussion section as inconclusive rather than a good test of the null hypothesis.

One of those technical flaws may be the failure to collect the ideal sample size, again as determined in advance*. So what do we do?

So one approach is simply to repeat the experiment correctly. To scrap all the prior data, put fixes in place to address the reasons for the technical failure, and run the experiment again. Even if the technical failure hit only a part of the experiment. If it affected only some of the "in vivo recordings", for example. Orthodox design mavens may say it is only kosher to re-run the whole shebang.

In the real world, we often have scenarios where we attempt to replace the flawed data and combine it with the good data to achieve our target sample size. This appears to be more or less the space in which this paper is operating.

"N-up". Adding more replicates (cells, subjects, what have you) until you reach the desired target. Now, I would argue that re-running the experiment with the goal of reaching the target N that you determined in advance* is not that bad. It's the target. It's the goal of the experiment. Who cares if you messed up half of them every time you tried to run the experiment? Where "messed up" is some sort of defined technical failure rather than an outcome you don't like, I rush to emphasize!

On the other hand, if you are spamming out low-replicate "experiments" until one of the scenarios "looks promising", i.e. looks to support your desired conclusions, and selectively "n-up" that particular experiment, well this seems over the line to me. It is much more likely to result in false positives. Granted, running all of these trial experiments at full power would be just as likely to produce a false positive per experiment; it is just that you cannot run nearly as many trial experiments at full power. So I would argue the sheer number of potential experiments is greater for the low-replicate, n-up-if-promising approach.

These authors appear to have taken this strategy one step worse. Their target is not just an a priori determined sample size to be pursued only when the pilot "looks promising". In this case they take the additional step of running replicates only up to the point where they reach statistical significance. And this seems like an additional way to get an extra helping of false-positive results to me.

Anyway, you can google up information on false positive rates and p-hacking and all that to convince yourself of the math. I was more interested in trying to probe why I got such a visceral feeling that this was not okay. Even if I personally think it is okay to re-run an experiment and combine replicates (subjects in my case) to reach the a priori sample size if it blows up and you have technical failure on half of the data.
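If you'd rather not google it, the math is easy to demonstrate with a toy simulation (my own illustrative sketch, not anything from the paper): testing a true null effect repeatedly while adding subjects until p < 0.05 inflates the false positive rate well beyond the nominal 5%.

```python
import random
from math import sqrt
from statistics import NormalDist, mean

def z_test_p(sample):
    """Two-sided p-value for H0: mean == 0, with known sigma == 1 (z-test)."""
    z = mean(sample) * sqrt(len(sample))
    return 2 * (1 - NormalDist().cdf(abs(z)))

def run_fixed(n=100):
    """Orthodox design: one test at the pre-determined N."""
    data = [random.gauss(0, 1) for _ in range(n)]
    return z_test_p(data) < 0.05

def run_sequential(start=10, step=5, cap=100):
    """N-up-until-significant: peek after every batch of added subjects."""
    data = [random.gauss(0, 1) for _ in range(start)]
    while True:
        if z_test_p(data) < 0.05:
            return True               # "significant" -- stop and publish
        if len(data) >= cap:
            return False              # give up at the cap
        data.extend(random.gauss(0, 1) for _ in range(step))

random.seed(0)
sims = 2000
fixed_fpr = sum(run_fixed() for _ in range(sims)) / sims
seq_fpr = sum(run_sequential() for _ in range(sims)) / sims
print(f"fixed-N false positive rate:        {fixed_fpr:.3f}")  # near 0.05
print(f"test-until-significant rate:        {seq_fpr:.3f}")    # well above 0.05
```

Note there is no true effect anywhere in the simulation; the inflated rate comes entirely from the repeated peeking.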

__
*I believe the proper manner for determining sample size is entirely apart from the error the authors have admitted to here. This isn't about failing to complete a power analysis or the like.

27 responses so far

Ludicrous academics for $200, Alex

Just when I think I will not find any more ridiculous things hiding in academia.....

A recent thread on twitter addressed a population of academics (not sure if they were scientists) who are distressed when the peer review of their manuscripts is insufficiently vigorous/critical.

This is totally outside of my experience. I can't imagine ever complaining to an Editor of a journal that the review was too soft after getting an accept or invitation to revise.

People are weird though.

5 responses so far

Question of the Day

How do you assess whether you are too biased about a professional colleague and/or their work?

In the sense that you would self-elect out of reviewing either their manuscripts for publication or their grant applications.

Does your threshold differ for papers versus grants?

Do you distinguish between antipathy bias and sympathy bias?

8 responses so far

Creative artists and the writing of scientific manuscripts

I am a consumer of the creative arts and, really, have always been in awe of creative artists. Looking back chronologically over my lifetime, my greatest consumption and appreciation has been fiction writing, music and cartooning (particularly the political variety). I'm not a big fan of flat art (sculpture speaks to me much more) but I am definitely amazed by what some people can paint, draw and the like. I do like moving picture arts but I don't think I have any particular sense of awe for them as a craft and certainly not for the participants as creative artists*. I get that others can see this, however.

Anyway, the creative artists are amazing to me.

A couple of days ago it occurred to me that understanding the process of creative arts might help cross what I find to be a somewhat frustrating bridge in training other people to write scientific manuscripts.

Sidebar: I am pretty sure we've discussed related topics before on the blog, but I can't remember when so I'm probably going to repeat myself.

When I first started to write scientific manuscripts I quite reasonably suffered the misunderstanding that you sort of did the experiments you planned and then wrote them (all of them) up in chronological order and badda boom, published it somewhere. That is because, I assume, many scientific manuscripts read as if that is how they were created. And there are probably some aspects of "Research Design 101" instruction that convinces young scientists that this is the way things work.

Then, when it is your own work, there are two additional factors that press down and shape your writing process. First, a sense of both pride and entitlement for your effort which tells your brain that surely every damn thing you worked on needs to fuel a publication. Second, a sense that writing is hard and you want to know in advance exactly what to write so that no effort is wasted.

"Wasted".

And this is where the creative arts come in.

Now, I've never lived cheek by jowl with a creative artist and I am only superficially familiar with what they do. But I am pretty convinced it is an iterative, inefficient process. Flat art folks seem to sketch. A lot. They work on an eye. An expression. A composition. A leg. An apple. A pair of shoes. Novelists and short story authors work on themes. Characters. Plot elements. They write and tear their hair out. Some of this is developing skill, sure, but much of this for a reasonably mature creative person is just working the job. They create copious amounts of material that is only leading up to the final product.

And the final product, I surmise, is built from the practice elements. A plot or character for a story. A curve of a mouth for a portrait. Melody. Chord progressions. A painted sunbeam. The artist starts stitching together a complete work out of elements.

I think you need to get into this mode as a scientist who is writing up manuscripts.

We stitch together a work out of elements as well. Now in our case, the elements are not made up. They are data. That we've collected. And we spend a heck of a lot of time on the quality of those elements. But eventually, we need to tell a story from those parts.

N.b. This is not storyboarding. Storyboarding is setting out the story you want to tell and then later going out and creating the elements (aka, figures) that you need to tell this particular story. That way lies fraud.

The creative process is looking at the elements of truth that you have available to you, from your labors to create good data, and then trying to see how they fit together into a story.

The transition that one has to make as a scientist is the ability to work with the elements, put in serious labor trying to fit them together, and then being willing to scrap the effort and start over. I think that if you don't get in there and do the work writing, writing, writing and analyzing and considering what the data are telling you, you make less progress.

Because the alternative is paralyzing. The alternative is that you keep putting off the creative process until something tells you how to write "efficiently". Maybe it is that you are waiting for just the right experimental result to clarify a murky situation. Maybe you are waiting for your PI or collaborator or fellow trainee to tell you what to do, what to write, how to structure the paper.

I suppose it may look like this to a relatively inexperienced writer of manuscripts? That it's a bit daunting and that if only the PI would say the right words that somehow it would be magically easy to "efficiently" write up the paper in the right way that she expects?

When I hear generic muttering from trainees about frustration with insufficient feedback from a mentor I sometimes wonder if this is the problem. An over-expectation of specific direction on what to write, how to write and what the story is.

The PI, of course, wants the trainee to take their own shot at telling the story. Whereupon they will promptly red pen the hell out of all that "work" and tell the trainee to rewrite most of it and take a totally different tack. Oh, and run these two more experiments. And then the trainee wonders "why didn't my PI tell me what she wanted in the first place instead of wasting my time??? GAh, I have the worst possible mentor!"

I realized within the past year or so that I have the same problem that I have criticized on the blog for years now. I tell new professors that they need to get away from the bench as quickly as possible and that this is not their job anymore. I tell them they have to find a way to get productivity out of their staff and that doing experiments is not their job anymore. I never had this problem as a transitioning scientist...I was fine getting away from the bench**.

But my equivalent is data analysis. And I'm not talking high falutin' stuff that only I can do, either. I want to see the data! Study by study. As it rolls in, even. I want to examine it, roll it around in it. Create graphs and run some stats. Think about what it means and how it fits into my developing understanding of a research direction in our laboratory. I can't wait to think about how this new figure might fit into one of our ongoing creative works...i.e., a manuscript.

I cannot give it up.

I create a lot of sketches, half plotted stories and cartoon panels. Elements. Themes. Drafts.

Many of these will never go into any published manuscript. If lucky, some of these building blocks will make their way into a slide presentation or into a grant as preliminary data. I never feel as though the effort is wasted, however. Making these bits and pieces is, to me, what allows me to get from here to there. From blank page to published manuscript.

Ideally, as I am supposedly training people to become independent scientists, I would like to train them to do this in the way that I do. And to get there, I have to get them across the hurdle of the creative artist. I have to get them to see that just rolling up your sleeves and doing the work is a necessary part of the process. You cannot be told a route, or receive a Revelation, that makes the process of creating a scientific manuscript efficient. You have to work on the elements. Make the sketches. Flesh out the plotlines.

And then be willing to scrap a bunch of "work" because it is not helping you create the final piece.

__
*I have a friend that is behind the camera on teevee shows. Big name teevee shows that you've heard of and watch. I see his work and I'm not really Seeing. His. Work. But this guy casually takes a few vacation pictures and I'm amazed at his eye, composition, etc. He doesn't seem to even consider himself a still camera artist, acts like he considers himself barely a hobbyist at that! So clearly I'm missing something about moving picture photography.

**I'm not actually a bench scientist, the ~equivalent.

8 responses so far

NIH encourages pre-prints

In March of 2017 the NIH issued a notice on Reporting Preprints and Other Interim Research Products (NOT-OD-17-050): "The NIH encourages investigators to use interim research products, such as preprints, to speed the dissemination and enhance the rigor of their work."

The key bits:

Interim Research Products are complete, public research products that are not final.

A common form is the preprint, which is a complete and public draft of a scientific document. Preprints are typically unreviewed manuscripts written in the style of a peer-reviewed journal article. Scientists issue preprints to speed dissemination, establish priority, obtain feedback, and offset publication bias.

Another common type of interim product is a preregistered protocol, where a scientist publicly declares key elements of their research protocol in advance. Preregistration can help scientists enhance the rigor of their work.

I am still not happy about the reason this happened (i.e., Glam hounds trying to assert scientific priority in the face of the Glam Chase disaster they themselves created) but this is now totally beside the point.

The NIH policy (see OpenMike blog entry for more) has several implications for grant seekers and grant holders which are what form the critical information for your consideration, Dear Reader.

I will limit myself here to materials that are related to standard paper publishing. There are also implications for materials that would never be published (computer code?) but that is beyond the scope for today's discussion.

At this point I will direct you to bioRxiv and PsyRxiv if you are unfamiliar with some of the more popular approaches for pre-print publication of research manuscripts.

The advantages to depositing your manuscripts in a pre-print form are all about priority and productivity, in my totally not humble opinion. The former is why the Glamour folks are all a-lather but priority and scooping affect all of us a little differently. As most of you know, scooping and priority are not a huge part of my professional life but all things equal, it's better to get your priority on record. In some areas of science it is career making/breaking and grant getting/rejecting to establish scientific priority. So if this is a thing for your life, this new policy allows and encourages you to take advantage.

I'm more focused on productivity. First, this is an advantage for trainees. We've discussed the tendency of new scientists to list manuscripts "in preparation" on their CV or Biosketch (for fellowship applications, say, despite it being technically illegal). This designation is hard to evaluate. A nearing-defense grad student who has three "in prep" manuscripts listed on the CV can appear to be bullshitting you. I always caution people that if they list such things they had better be prepared to send a prospective post-doc supervisor a mostly-complete draft. Well, now the pre-print allows anyone to post "in preparation" drafts so that anyone can verify the status. Very helpful for graduate students who have a short timeline versus the all too typical cycle of submission/rejection/resubmission/revision, etc. More importantly, the NIH previously frowned on listing "in preparation" or "in review" items on the Biosketch. This was never going to result in an application being returned unreviewed but it could sour the reviewers. And of course any rule followers out there would simply not list any such items, even if there was a minor revision being considered. With pre-print deposition and the ability to list on a NIH biosketch and cite in the Research Plan there is no longer any vaporware type of situation. The reviewer can look at the pre-print and judge the science for herself.

This applies to junior PIs as well. Most likely, junior PIs will have fewer publications, particularly from their brand new startup labs. The ability of the PI to generate data from her new independent lab can be a key issue in grant review. As with the trainee, the cycle of manuscript review and acceptance is lengthy compared with the typical tenure clock. And of course many junior PIs are trying to balance JIF/Glam against this evidence of independent productivity. So pre-print deposition helps here.

A very similar situation can apply to us not-so-junior PIs who are proposing research in a new direction. Sure, there is room for preliminary data in a grant application but the ability to submit data in manuscript format to the bioRxiv or some such is unlimited! Awesome, right?

15 responses so far

Repost: Why aren't they citing my papers?

Jul 07 2016 · Published under Science Publication

As the Impact Factor discussion has been percolating along (Stephen Curry, Björn Brembs, YHN) it has touched briefly on the core valuation of a scientific paper: Citations!

Coincidentally, a couple of twitter remarks today also reinforced the idea that what we are all really after is other people who cite our work.
Dr24hrs:

More people should cite my papers.

I totally agree. More people should cite my papers. Often.

AmasianV:

was a bit discouraged when a few papers were pub'ed recently that conceivably could have cited mine

Yep. I've had that feeling on occasion and it stings. Especially early in the career when you have relatively few publications to your name, it can feel like you haven't really arrived yet until people are citing your work.

Before we get too far into this discussion, let us all pause and remember that all of the specifics of citation numbers, citation speed and citation practices are going to be very subfield dependent. Sometimes our best discussions are enhanced by dissecting these differences but let's try not to act like nobody recognizes this, even though I'm going to do so for the balance of the post....

So, why might you not be getting cited and what can you do about it? (in no particular order)

1) Time. I dealt with this in a prior post on gaming the impact factor by having a lengthy pre-publication queue. The fact of the matter is that it takes a long time for a study that is primarily motivated by your paper to reach publication. As in, several years of time. So be patient.

2) Time (b). As pointed out by Odyssey, sometimes a paper that just appeared reached final draft status 1, 2 or more years ago and the authors have been fighting the publication process ever since. Sure, occasionally they'll slip in a few new references when revising for yet the umpteenth time but this is limited.

3) Your paper doesn't hit the sweet spot. Speaking for myself, my citation practices lean this way for any given point I'm trying to make: the first, best and most recent. Rationales vary and I would assume most of us can agree that the best, most comprehensive, most elegant and all around most scientifically awesome study is the primary citation. Opinions might vary on primacy but there is a profound sub-current that we must respect the first person to publish something. The most-recent is a nebulous concept because it is a moving target and might have little to do with scientific quality. But all else equal, the more recent citations should give the reader access to the front of the citation thread for the whole body of work. These three concerns are not etched in stone but they inform my citation practices substantially.

4) Journal identity. I don't need to belabor this but suffice it to say some people cite based on the journal identity. This includes Impact Factor, citing papers on the journal to which one is submitting, citing journals thought important to the field, etc. If you didn't happen to publish there but someone else did, you might be passed over.

5) Your paper actually sucks. Look, if you continually fail to get cited when you think you should have been mentioned, maybe your paper(s) just sucks. It is worth considering this. Not to contribute to Imposter Syndrome but if the field is telling you to up your game...up your game.

6) The other authors think your paper sucks (but it doesn't). Water off a duck's back, my friends. We all have our opinions about what makes for a good paper. What is interesting and what is not. That's just the way it goes sometimes. Keep publishing.

7) Nobody knows you, your lab, etc. I know I talk about how anyone can find any paper in PubMed but we all need to remember this is a social business. Scientists cite people they know well, people they've just been chatting with at a poster session and people who have just visited for Departmental seminar. Your work is going to be cited more by people for whom you/it/your lab are most salient. Obviously, you can do something about this factor...get more visible!

8) Shenanigans (a): Sometimes the findings in your paper are, shall we say, inconvenient to the story the authors wish to tell about their data. Either they find it hard to fit it in (even though it is obvious to you) or they realize it compromises the story they wish to advance. Obviously this spans the spectrum from essentially benign to active misrepresentation. Can you really tell which it is? Worth getting angsty about? Rarely.....

9) Shenanigans (b): Sometimes people are motivated to screw you or your lab in some way. They may feel in competition with you and, nothing personal but they don't want to extend any more credit to you than they have to. It happens, it is real. If you cite someone, then the person reading your paper might cite them. If you don't, hey, maybe that person will miss it. Over time, this all contributes to reputation. Other times, you may be on the butt end of disagreements that took place years before. Maybe two people trained in a lab together 30 years ago and still hate each other. Maybe someone scooped someone back in the 80s. Maybe they perceived that a recent paper from your laboratory should have cited them and this is payback time.

10) Nobody knows you, your lab, etc II, electric boogaloo. Cite your own papers. Liberally. The natural way papers come to the attention of the right people is by pulling the threads. Read one paper and then collect all the cited works of interest. Read them and collect the works cited in that paper. Repeat. This is the essence of graduate school if you ask me. And it is a staple behavior of any decent scientist. You pull the threads. So consequently, you need to include all the thread-ends in as many of your own papers as possible. If you don't, why should anyone else? Who else is most motivated to cite your work? Who is most likely to be working on related studies? And if you can't find a place for a citation....

16 responses so far

The many benefits of the LPU

A Daniel Sarewitz wrote an opinion piece in Nature a while back arguing that the pressure to publish regularly has driven down the quality of science. Moreover, he claims to have identified

...a destructive feedback between the production of poor-quality science, the responsibility to cite previous work and the compulsion to publish.

Sarewitz ends with an exhortation of sorts. To publish much less.

Current trajectories threaten science with drowning in the noise of its own rising productivity, a future that Price described as “senility”. Avoiding this destiny will, in part, require much more selective publication. Rising quality can thus emerge from declining scientific efficiency and productivity. We can start by publishing less, and less often, whatever the promotional e-mails promise us.

Interestingly, this "Price" he seemingly follows in thought wrote in 1963, long before modern search engines were remotely conceivable, and was, as Sarewitz himself observes, "an elitist".

Within a couple of generations, [Derek de Solla Price] said, it would lead to a world in which “we should have two scientists for every man, woman, child, and dog in the population”. Price was also an elitist who believed that quality could not be maintained amid such growth. He showed that scientific eminence was concentrated in a very small percentage of researchers, and that the number of leading scientists would therefore grow much more slowly than the number of merely good ones, and that would yield “an even greater preponderance of manpower able to write scientific papers, but not able to write distinguished ones”.

Price was worried about "distinguished", but Sarewitz has adopted this elitism to claim that pressure to publish is actually causing the promotion of mistaken or bad science. And so we should all, I surmise, slow down and publish less. It is unclear what Sarewitz thinks about the "merely good" scientists identified by Price and whether they should be driven away or not. While not explicitly stated, this piece does have a whiff of ol' Steve McKnight's complaints about the riff-raff to it.

Gary S. McDowell and Jessica K. Polka wrote in to observe that slowing the pace of publication is likely to hurt younger scientists who are trying to establish themselves.

In today's competitive arena, asking this of scientists — particularly junior ones — is to ask them to fall on their swords.

Investing more effort in fewer but 'more complete' publications could hold back early-career researchers, who already face fierce competition. To generate a first-author publication, graduate students on average take more than a year longer than they did in the 1980s (R. D. Vale Proc. Natl Acad. Sci. USA 112, 13439–13446; 2015). Introducing further delays for junior scientists is not an option as long as performance is rated by publication metrics.

One Richard Ebright commented thusly:

Wrong. All publications, by all researchers, at all career stages, should be complete stories. No-one benefits from publication of "minimum publishable units."

This is as wrong as wrong can be.

LPU: A Case Study

Let's take the case of W-18. It hit the mainstream media, following its identification in a few human drug overdose cases, as "This new street drug is 10,000 times more potent than morphine" [WaPo version].

Obviously, this is a case for the pharmacological and substance abuse sciences to leap into action and provide some straight dope, er information, on the situation.

In the delusional world of the "complete story" tellers, this should be accomplished by single labs or groups, beavering away in isolation, not to report their findings on W-18 until they have it all. That might incorporate wide ranging in vitro pharmacology to describe activity or inactivity at major suspected sites of action. Pharmacokinetic data in one small and at least one large experimental species, maybe some human if possible. Behavioral pharmacology on a host of the usual assays for dose-effects, toxicity, behavioral domains, dependence, withdrawal, reward or drug liking, liability for compulsive use patterns, cognitive impact with chronic use. This list goes on. And for each in vivo measure, we may need to parse the contribution of several signalling systems that might be identified by the in vitro work.

That is a whole lot of time, effort and money.

In the world of the complete-story tellers, these might be going on in parallel in multiple lab groups who are duplicating each others' work in whole or in part.

Choices or assumptions made that lead to blind alleys will waste everyone's time equally.

Did I mention the funding yet?

Ah yes, the funding. Of course a full bore effort on this requires a modern research lab to have the cash to conduct the work. Sometimes it can be squeezed in alongside existing projects, or initial efforts can be excused as the pursuit of Preliminary Data. But at some point, people are going to have to propose grants. Which are going to take fire for a lack of evidence that:

1) There is any such thing as a W-18 problem. Media? pfah, everyone knows they overblow everything. [This is where even the tiniest least publishable unit from epidemiologists, drug toxicologists or even Case Reports from Emergency Department physicians goes a loooooong way. And not just for grant skeptics. Any PI should consider whether a putative new human health risk is worth pouring effort and lab resources into. LPU can help that PI to judge and, if warranted, to defend a research proposal.]

2) There isn't anything new here. This is just a potent synthetic opiate, right? That's what the media headlines claim. [Except that claim is based on the patent's description of a mouse writhing task. We have NO idea from the media and the patent whether it is even active at endogenous opiate receptors. And hey! Guess what? The UNC drug evaluation core run by Bryan Roth found no freaking mu opioid receptor or delta or kappa opioid receptor activity for W-18!!! Twitter is a pretty LPU venue. And yet think of how much work this saves. It will potentially head off a lot of mistaken assays hunting for opioid activity across all kinds of lab types. After all, the above listed logical progression is not what happens. People don't necessarily wait for comprehensive in vitro pharmacology to be available before trying out their favored behavioral assay.]

3) Whoa, totally unexpected turn for W-18 already. So what next? [Well, it would be nice if there were Case Reports of toxic effects, eh? To point us in the right direction- are there hints of the systems that are affected in medical emergency cases? And if some investigators had launched some pilot experiments in their own favored domains before finding out the results from Roth, wouldn't it be useful to know what they have found? Why IS it that W-18 is active in writhing...or can't this patent claim be replicated? Is there an active metabolite formed? This obviously wouldn't have come up in the UNC work, as it tested only the parent compound in vitro.]

Etcetera.

Science is iterative and collaborative.

It generates knowledge best and with the most efficiency when people are aware of what their peers are finding out as quickly as possible.

Waiting while several groups pursue a supposed "complete story" in parallel only for one to "win" and be able to publish while the other ones shake their scooped heads in shame and fail to publish such mediocrity is BAD SCIENCE.


JIF notes 2016

If it's late June, it must be time for the latest Journal Impact Factors to be announced. (Last year's notes are here.)

Nature Neuroscience is confirming its dominance over Neuron with upward and downward trends, respectively, widening the gap.

Biological Psychiatry continues to skyrocket, up to 11.2. All pretensions from Neuropsychopharmacology to keep pace are over, third straight year of declines for the ACNP journal lands it at 6.4. Looks like the 2011-2012 inflation was simply unsustainable for NPP. BP is getting it done though. No sign of a letup for the past 4 years. Nicely done BP and any of y'all who happen to have published there in the past half-decade.

I've been taking whacks at the Journal of Neuroscience all year, so I almost feel like this is piling on. But the long steady trend has dropped it below a 6, listed at 5.9 this year. Oy vey.

Looks like Addiction Biology has finally overreached with their JIF strategy. It jumped up to the 5.9 level in 2012-2013 but couldn't sustain it- two consecutive years of declines lower it to 4.5. Even worse, it has surrendered the top slot in the Substance Abuse category. As we know, this particular journal maintains an insanely long pre-print queue, with some papers being assigned to print two whole calendar years after appearing online. Will anyone put up with that anymore, now that the JIF is declining and it isn't even best-in-category? I think this is not good for AB.

A number of journals in the JIF 4-6 category that I follow are holding steady over the past several years, that's good to see.

Probably the most striking observation is what appears to be a relatively consistent downward trend for JIF 2-4 journals that I watch. These were JIFs that have generally trended upward (slowly, slowly) from 2006 or so until the past couple of years. I assumed this was a reflection of more scientific articles being published and therefore more citations available. Perhaps this deflationary period is temporary. Or perhaps it reflects journals that I follow not keeping up with the times in terms of content?
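For readers keeping score at home, the numbers above all come from the standard two-year JIF formula; here is a minimal sketch of that arithmetic, using made-up figures rather than real data for any journal named in this post:

```python
# Sketch of the standard two-year Journal Impact Factor calculation.
# Illustrative numbers only -- not real data for any journal discussed above.

def two_year_jif(citations, citable_items):
    """JIF for year Y = citations received in year Y to items published
    in years Y-1 and Y-2, divided by the count of citable items
    (articles and reviews) published in years Y-1 and Y-2."""
    return citations / citable_items

# A journal that published 400 citable items over the prior two years,
# which drew 2360 citations this year, lands at a JIF of 5.9:
print(round(two_year_jif(2360, 400), 1))  # 5.9
```

This also shows why more total articles being published can inflate JIFs across the board: the numerator (citations available) can grow faster than any one journal's denominator.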

As always, interested to hear what is going on with the journals in the fields you follow, folks. Have at it in the comments.

