Archive for the 'Impact Factor' category

H-index

I can't think of a time when seeing someone's h-index created a discordant view of their impact. Or, for that matter, when reviewing someone's annual cites was surprising.

I just think the Gestalt impression you generate about a scientist is going to correlate with most quantification measures.

Unless there are weird outliers, I suppose. But if there is something peculiar about a given scientist's publications that skews one particular measure of awesomeness....wouldn't someone being presented with that measure discount accordingly?

Like if an h-index was boosted by a host of middle-author contributions in a much more highly cited domain than the one most people associate with you? That sort of thing.
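For anyone who wants the mechanics behind the Gestalt: a minimal sketch of how the h-index is computed. The citation counts below are made up for illustration, not anyone's actual record.

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical record: one big hit, a tail of modest papers.
print(h_index([50, 18, 12, 7, 5, 4, 2, 1]))  # 5
```

Note how the single 50-citation paper contributes no more to the h-index than a paper with 5 cites would, which is exactly the kind of outlier effect a reader would presumably discount for.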

9 responses so far

The "whole point" of Supplementary Data

Dec 10 2014 Published by under Impact Factor, Scientific Publication

Our good blog friend DJMH offered up the following on a post by Odyssey:
Because the whole point of supplemental material is that the publisher doesn't want to spend a dime supporting it

This is nonsense. This is not "the whole point". This is peripheral to the real point.

In point of fact, the real reason GlamourMags demand endless amounts of supplementary data is to squeeze out the competition journals. They do this by denying those other journals the data that would otherwise be offered up as additional publications. Don't believe it? Take a look through some issues of Science and Nature from the late 1960s through maybe the mid 1970s. The research publications were barely Brief Communications. A single figure, maybe two. And no associated "Supplemental Materials", either. And then, if you are clever, you will find the real paper that was subsequently published in a totally different journal. A real journal. With all of the meat of the study that was promised by the teaser in the Glam Mag fleshed out.

Glamour wised up and figured out that with the "Supplementary Materials" scam they can lock up the data that used to be put in another journal. This has the effect of both damping citations of that specific material and collecting what citations there are to themselves. All without having to treble or quadruple the size of their print journal.

Nice little scam to increase their Journal Impact Factor distance from the competition.

19 responses so far

Wait...the new Biosketch is supposed to be an antiGlamour measure? HAHAHHAHHA!!!!!

A tweet from @babs_mph sent me back to an older thread where Rockey introduced the new Biosketch concept. One "Senior investigator" commented:

For those who wonder where this idea came from, please see the commentary by Deputy Director Tabak and Director Collins (Nature 505, 612–613, January 2014) on the issue of the reproducibility of results. One part of the commentary suggests that scientists may be tempted to overstate conclusions in order to get papers published in high profile journals. The commentary adds “NIH is contemplating modifying the format of its ‘biographical sketch’ form, which grant applicants are required to complete, to emphasize the significance of advances resulting from work in which the applicant participated, and to delineate the part played by the applicant. Other organizations such as the Howard Hughes Medical Institute have used this format and found it more revealing of actual contributions to science than the traditional list of unannotated publications.”

Here's Collins and Tabak, 2014 in freely available PMC format. The lead in to the above referenced passage is:

Perhaps the most vexed issue is the academic incentive system. It currently overemphasizes publishing in high-profile journals. No doubt worsened by current budgetary woes, this encourages rapid submission of research findings to the detriment of careful replication. To address this, the NIH is contemplating...

Hmmm. So, with this change, the newfound ability to say something like the following on grant applications:

"Yeah, we got totally scooped out of a Nature paper because we didn't rush some data out before it was ready but look, our much better paper that came out in our society journal 18 mo later was really the seminal discovery, we swear. So even though the entire world gives primary credit to our scoopers, you should give us this grant now."

is supposed to totally alter the dynamics of the "vexed issue" of the academic incentive system.

Right guys. Right.

13 responses so far

BJP pulls a neat little self-citation trick

Sep 24 2013 Published by under Academics, Impact Factor

As far as I can tell, the British Journal of Pharmacology has taken to requiring that authors who use animal subjects conduct their studies in accordance with the "ARRIVE" (Animal Research: Reporting of In Vivo Experiments) principles. These are conveniently detailed in their own editorial:

McGrath JC, Drummond GB, McLachlan EM, Kilkenny C, Wainwright CL. Guidelines for reporting experiments involving animals: the ARRIVE guidelines. Br J Pharmacol. 2010 Aug;160(7):1573-6. doi: 10.1111/j.1476-5381.2010.00873.x.

and a paper on the guidelines:

Kilkenny C, Browne W, Cuthill IC, Emerson M, Altman DG; NC3Rs Reporting Guidelines Working Group. Animal research: reporting in vivo experiments: the ARRIVE guidelines. Br J Pharmacol. 2010 Aug;160(7):1577-9. doi: 10.1111/j.1476-5381.2010.00872.x.

The editorial has been cited 270 times. The guidelines paper has been cited 199 times so far and the vast, vast majority of these are in, you guessed it, the BRITISH JOURNAL OF PHARMACOLOGY.

One might almost suspect the journal now demands that authors indicate they have followed these ARRIVE guidelines by citing the 3-page paper listing them. The journal IF is 5.067, so having an item cited 199 times since it was published in the August 2010 issue represents a considerable outlier. I don't know if a "Guidelines" category of paper (as this is described on the pdf) goes into the ISI calculation. For all we know they had to exempt it. But why would they?

And I notice that some other journals seem to have published the guidelines under the byline of the self same authors! Self-Plagiarism!!!

Perhaps they likewise demand that authors cite the paper from their own journal?

Seems a neat little trick to run up an impact factor, doesn't it? Given the JIF and publication rate of real articles in many journals, a couple of hundred extra cites in the sampling interval can have a real effect on the JIF.
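The arithmetic behind that last point is quick to sketch. The citable-item count here is an assumed round number for illustration, not BJP's actual output:

```python
def jif_boost(extra_cites, citable_items):
    """Extra impact-factor points from extra_cites spread across the
    journal's citable items in the 2-year JIF window."""
    return extra_cites / citable_items

# Suppose a journal publishes ~500 citable items per 2-year window and
# picks up ~200 mandated self-citations in the counting year:
print(round(jif_boost(200, 500), 2))  # 0.4
```

Four tenths of an impact-factor point is a very visible move for a journal sitting around 5.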

11 responses so far

The 2012 Journal Impact Factors are out

Jun 24 2013 Published by under Impact Factor, Scientific Publication

Naturally this is a time for a resurgence of blathering about how Journal Impact Factors are a hugely flawed measure of the quality of individual papers or scientists. Also it is a time of much bragging about recent gains....I was alerted to the fact that they were out via a society I follow on Twitter bragging about their latest number.

whoo-hoo!

Of course, one must evaluate such claims in context. Seemingly the JIF trend is for unrelenting gains year over year. Which makes sense, of course, if science continues to expand. More science, more papers and therefore more citations seems to me to be the underlying reality. So the only thing that matters is how much a given journal has changed relative to other peer journals, right? A numerical gain, sometimes ridiculously tiny, is hardly the stuff of great pride.

So I thought I'd take a look at some journals that publish drug-abuse type science. There are a ton more in the ~2.5-4.5 range but I picked out the ones that seemed to actually have changed at some point.
[Chart: 2012 Journal Impact Factor trends for selected drug-abuse journals]
Neuropsychopharmacology, the journal of the ACNP and subject of the above-quoted Twitt, has closed the gap on arch-rival Biological Psychiatry in the past two years, although each of them trended upward in the past year. For NPP, putting the sadly declining Journal of Neuroscience (the Society for Neuroscience's journal) firmly behind them has to be considered a gain. J Neuro is more general in topic and, as PhysioProf is fond of pointing out, does not publish review articles, so this is expected. NPP invented a once-annual review journal a few years ago and it counts in their JIF, so I'm going to score the last couple of years' gains to this, personally.

Addiction Biology is another curious case. It is worth special note for both the large gains in JIF and the fact it sits atop the ISI Journal Citation Reports (JCR) category for Substance Abuse. The first jump in IF was associated with a change in publisher so perhaps it started getting promoted more heavily and/or guided for JIF gains more heavily. There was a change in editor in there somewhere as well which may have contributed. The most recent gains, I wager, have a little something to do with the self-reinforcing virtuous cycle of having topped the category listing in the ISI JCR and having crept to the top of a large heap of ~2.5-4.5 JIF behavioral pharmacology / neuroscience type journals. This journal had been quarterly up until about two years ago when it started publishing bimonthly and their pre-print queue is ENORMOUS. I saw some articles published in a print issue this year that had appeared online two years before. TWO YEARS! That's a lot of time to accumulate citations before the official JIF window even starts counting. There was news of a record number of journals being excluded from the JCR for self-citation type gaming of the index....I do wonder why the pre-print queue length is not of concern to ISI.

PLoS ONE is an interest of mine, as you know. Phil Davis has an interesting analysis up at Scholarly Kitchen which discusses the tremendous acceleration in papers published per year in PLoS ONE and argues a decline in JIF is inevitable. I tend to agree.

Neuropharmacology and British Journal of Pharmacology are examples of journals near the top of the aforementioned mass of journals that publish normal scientific work in my fields of interest. Workmanlike? I suppose the non-pejorative use of that term would be accurate. These two journals bubbled up slightly in the past five years but seem to be enjoying different fates in 2012. It will be interesting to see whether these are just wobbles or whether the journals can sustain the trends. If real, it may show how easily one journal can suffer a PLoS ONE type of fate, whereby a slightly elevated JIF draws more papers of lesser eventual impact, while BJP may be showing the sort of virtuous cycle that I suspect Addiction Biology has been enjoying. One slightly discordant note for this interpretation is that Neuropharmacology has managed to get the online-to-print publication lag down to some of the lowest amongst its competition. This is a plus for authors who need to pad their calendar-year citation numbers but it may be a drag on the JIF, since articles don't enjoy as much time to acquire citations.

28 responses so far

Placing PLoS ONE in the appropriate evaluative context

Jan 14 2013 Published by under Impact Factor, PLoS ONE, Ponder

As you know, I have a morbid fascination with PLoS ONE and what it means for science, careers in science and the practices within my subfields of interest.

There are two complaints that I see offered as supposedly objective reasons for old school folks' easy complaining about how it is not a real journal. First, that they simply publish "too many papers". It was 23,468 in 2012. This particular complaint always reminds me of the Emperor's "too many notes" scene in Amadeus,

which is to say that it is a sort of meaningless throwaway comment. A person who has a subjective distaste simply makes something up on the spot to cover it over. More importantly, however, it brings up the fact that people are comparing apples to oranges. That is, they are looking at a regular print type of journal (or several of them) and identifying the disconnect. My subfield journals of interest maybe publish something between about 12 and 20 original reports per issue. One or two issues per month. So anything from about 144 to 480 articles per year. A lot lower than PLoS ONE, eh? But look, I follow at least 10 journals that are sort of normal, run of the mill, society level journals in which stuff that I read, cite and publish myself might appear. So right there we're up to something on the order of 3,000 articles per year.

PLoS ONE, as you know, covers just about all aspects of science! So multiply my subfield by all the other subfields (I can get to 20 easy without even leaving "biomedical" as the supergroup) with their respective journals and.... all of a sudden the PLoS ONE output doesn't look so large.
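The back-of-the-envelope multiplication, spelled out. Every figure here is the post's own estimate (articles per issue, journals followed, subfield count), not measured publisher data:

```python
low = 12 * 12        # 12 articles/issue, one issue/month -> 144 per year
high = 20 * 24       # 20 articles/issue, two issues/month -> 480 per year
followed = 10        # normal society-level journals in one subfield
subfields = 20       # easily reachable without leaving "biomedical"

# One subfield's annual output across the journals I follow:
print(low * followed, high * followed)   # 1440 4800 -> "order of 3,000"

# Scale the midpoint across 20 subfields:
all_subfields = (low + high) // 2 * followed * subfields
print(all_subfields)                     # 62400
```

Against an estimate north of 60,000 articles per year across comparable subfield journals, PLoS ONE's 23,468 stops looking so monstrous.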

Another way to look at this would be to examine the output of all of the many journals that a big publisher like Elsevier puts out each year. How many do they publish? One hell of a lot more than 23,000, I can assure you. (I mean really, don't they have almost that many journals?) So one answer to the "too many notes" type of complaint might be to ask if the person also discounts Cell articles for that same reason.

The second theme of objection to PLoS ONE is as was recently expressed by @egmoss on the Twitts :

An 80% acceptance rate is a bit of a problem.

So this tends to overlook the fact that much more ends up published somewhere, eventually, than is reflected in a per-journal acceptance rate. As noted by Conan Kornetsky back in 1975 upon relinquishing the helm of Psychopharmacology:

"There are enough journals currently published that if the scientist perseveres through the various rewriting to meet style differences, he will eventually find a journal that will accept his work".

Again, I ask you to consider the entire body of journals that are normal for your subfield. What do you think the overall acceptance rate for a given manuscript might be? I'd wager it is competitive with PLoS ONE's 80% and probably even higher!
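One way to see why the journal-by-journal acceptance rate understates eventual publication: treat each submission as an independent shot at some per-journal acceptance probability and let persistence compound. The 30% per-journal rate below is an invented figure for illustration:

```python
def eventual_acceptance(p_per_journal, submissions):
    """Probability a manuscript is accepted somewhere after repeated
    resubmission, assuming independent tries at rate p_per_journal."""
    return 1 - (1 - p_per_journal) ** submissions

# Even a selective-sounding 30% per-journal rate clears 80% in five tries:
print(round(eventual_acceptance(0.30, 5), 2))  # 0.83
```

Real submissions aren't independent trials (authors revise, and reviewers overlap), but the compounding is the point Kornetsky was making in 1975.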

49 responses so far

Reviewing your CV by Journal Impact Factor

So one of the Twitts was recently describing a grant funding agency that required listing the Impact Factor of each journal in which the applicant had published.

No word on whether or not it was the IF for the year in which the paper was published, which seems most fair to me.

It also emerged that the applicant was supposed to list the Journal Impact Factor (JIF) for subdisciplines, presumably the "median impact factor" supplied by ISI. I was curious about the relative impact of listing a different ISI journal category as your primary subdiscipline of science. A sample of ones related to the drug abuse sciences would be:

Neurosciences 2.75
Substance Abuse 2.36
Toxicology 2.34
Behavioral Sciences 2.56
Pharmacology/Pharmacy 2.15
Psychology 2.12
Psychiatry 2.21

Fascinating. What about...
Oncology 2.53
Surgery 1.37
Microbiology 2.40
Neuroimaging 1.69
Veterinary Sciences 0.81
Plant Sciences 1.37

aha, finally a sub-1.0. So I went hunting for some usual suspects mentioned, or suspected, as low-cite rate disciplines..
Geology 0.93
Geosciences, multidisc 1.33
Forestry 0.87
Statistics and Probability 0.86
Zoology 1.06
Meteorology 1.67

This is a far from complete list of the ISI subdisciplines (and please recognize that many journals can be cross-listed), just a non-random walk conducted by YHN. But it suggests the range is really restricted, particularly when it comes to closely related fields, like the ones that would fall under the umbrella of substance abuse.

I say the range is restricted because as we know, when it comes to journals in the ~2-4 IF range within neuroscience (as an example), there is really very little difference in subjective quality. (Yes, this is a discussion conditioned on the JIF, deal.)

It requires, I assert, at least the JIF ~6+ range to distinguish a manuscript acceptance from the general herd below about 4.

My point here is that I am uncertain the agency which requires listing disciplinary median JIFs is really gaining an improved picture of the applicant. I am uncertain whether cross-disciplinary comparisons can be made effectively. You still need additional knowledge to tell whether the person's CV is filled with journals that are viewed as significantly better than average within the subfield. About all you can tell is that they are above or below the median.

A journal which bests the Neurosciences median by a point (3.75) really isn't all that impressive. You have to add something on the order of 3-4 IF points to make a dent. But maybe in Forestry if you get to only a 1.25 this is a smoking upgrade in the perceived awesomeness of the journal? How would one know without further information?
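One hedged way to attempt the comparison the agency presumably wants is to normalize a journal's JIF by its category median, as a ratio rather than a difference. The medians are the ISI figures listed above; the two example JIFs are the ones from this paragraph:

```python
# ISI category medians quoted earlier in the post
medians = {"Neurosciences": 2.75, "Forestry": 0.87}

def relative_to_median(jif, category):
    """Journal IF expressed as a multiple of its category median."""
    return jif / medians[category]

print(round(relative_to_median(3.75, "Neurosciences"), 2))  # 1.36
print(round(relative_to_median(1.25, "Forestry"), 2))       # 1.44
```

By this ratio, the "unimpressive" 1.25 Forestry journal actually beats its field median by more than the Neurosciences journal a full point above its own, which is precisely the ambiguity being flagged here. Whether a ratio is the right normalization is itself a judgment call.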

15 responses so far

A smear campaign against Impact Factors...and the Sheep of Science

Aug 13 2012 Published by under Impact Factor, Scientific Publication

Stephen Curry has a nice lengthy diatribe against the Impact Factor up over at the Occam's Typewriter collective. It is an excellent review of the problems associated with the growing dominance of journal Impact Factor in the career of scientists.

I am particularly impressed by:

It is time to start a smear campaign so that nobody will look at them without thinking of their ill effects, so that nobody will mention them uncritically without feeling a prick of shame.

Well, of course I would be impressed, wouldn't I? I've been on the smear campaign for some time.

The problem I have with Curry's post is the suggestion that we continue to need some mechanism, previously filled by journal identity/prestige, as a way to filter the scientific literature. As he quoted from a previous Nature EIC:

“nobody wants to have to wade through a morass of papers of hugely mixed quality, so how will the more interesting papers […] get noticed as such?”

This is the standard bollocks from those who have a direct or indirect interest in the GlamourMag game. Stephen Curry responds a bit too tepidly for my taste:

The trick will be to crowd-source the task.

Ya think?

Look, one of the primary tasks of a scientist is to sift through the literature. To review data that has been presented by other scientists and to decide, for herself, where these data fit. Are they good quality but dull? Exciting but limited? Need verification? Require validation in other assays? Gold-plated genius ready for Stockholm?

This. Is. What. We. Do!!!!!!

And yeah, we "crowdsource" it. We discuss papers with our colleagues. Lab heads and trainees alike. We come back to a paper we've read 20 times and find some new detail that is critical for understanding something else.

This notion that we need help "sifting" through the vast literature and that that help is to be provided by professional editors at Science and Nature who tell us what we need to pay attention to is nonsense.

And acutely detrimental to the progress of science.

I mean really. You are going to take a handful of journals and let them tell you (and your several hundred closest sub-field peers) what to work on? What is most important to pursue? Really?

That isn't science.

That's sheep herding.

And guess what scientists? You are the sheep in this scenario.

22 responses so far

A significant change in Impact Factor

Jul 02 2012 Published by under Impact Factor

I received a kind email from Elsevier this morning, updating me on the amazing improvement in 2011 Impact Factor (versus 2010) for several journals in their stable of "Behavioral & Cognitive Neuroscience Journals". There are three funny bits here, first that the style was:

2010 Impact Factor WAS 2.838, 2011 Impact Factor NOW 3.174

You have to admit the all-caps is a crack up. Second, THREE decimal places! Dudes, this shit is totally precise and that means....sciencey.

As you know, however, DearReader, I have a rather unhealthy interest in the hilariousity of the Impact Factor and I was thinking about the more important issue here.

Is this a significant difference? Who gives a hoot if the IF goes up by 0.336? Is this in any way meaningful?

I suspect the number of available citations is ever on the increase. The business of science is ever expanding, the pressure to publish relentless and the introduction of new journals continues. This means that IFs will be on some baseline level of background increase over time. This is borne out, I will note, by my completely unscientific tracking of journals most closely related to my interests over the past *cough*cough* years *cough*cough*. They all seem to have gradually inched up a few decimal points year in, year out.

For the 0.336 increase, let us do a little seat of the pants. Let's say a journal with 20 articles per issue, 12 issues per year....480 items over the 2-year tracking interval for calculating IF. Round it to 160 extra citations*. If only 17% of the articles got two more citations each, this would account for it. If a mere 3% of articles turned out to be AMAZING for the sub-sub-sub field and won an extra 10 citations each....this would account for most of the change.
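The same seat-of-the-pants arithmetic, worked out in code. All numbers are the hypothetical journal described above, not any real journal's figures:

```python
items = 20 * 12 * 2       # 480 citable items in the 2-year JIF window
target_gain = 0.336       # the observed year-over-year IF increase

# Extra citations needed to move the IF by the target amount:
print(round(target_gain * items))   # 161, "round it to 160"

# Scenario 1: 17% of articles each pick up two more citations
print(round(0.17 * items * 2))      # 163 -- covers it

# Scenario 2: 3% of articles each win an extra 10 citations
print(round(0.03 * items * 10))     # 144 -- most of the way there
```

Either scenario is small enough to happen by ordinary field drift, which is why a 0.336 bump by itself tells you very little.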

For one thing, I can now see why editors would be willing to try the "Cite us a few more times" gambit with authors in the review stage. It doesn't take many intimidated authors throwing in 4-5 more citations of recent work from the journal in question to move a third of an impact factor.

Heck, just one solo operator author could probably make a notable impact over two years. If I put everything we submit into a single journal over two years' time, and did my level best to cite everything plausibly relevant from that journal, I could generate 40 extra citations in two years easily. Probably without anyone so much as noticing what I was up to!

The fact that the vast majority of society rank journals that I follow fail to experience dramatic IF gains suggests that nobody is trying to game the system like this and that seemingly universal increases are a reflection of overall trends for total number of publications. But it does make you wonder about those few journals that managed to gain** a subjective rank over a few years time, say from the 2-4 to the 6-8 range and just how they pulled it off.

Additional:
This tool permits you to search some citation trends by journal.
__
*Yes, I realize the overlap year for adjacent annual IFs. For our thought exercise, imagine it is non-overlapping years if this bothers you.

**My hypothesis is that an editorial team would only have to pull shenanigans for 2-4 years and after that the IF would be self-sustaining.

14 responses so far

Reviewing academic papers: Observation

When you are reviewing papers for a journal, it is in your best interest to stake out papers most like your own as "acceptable for publication".

If it is a higher IF than you usually reach, you should argue for a manuscript that is somewhat below that journal's standard.

If it is a journal in which you have published, it is in your interest to crap on any manuscript that is lesser than your typical offerings.

16 responses so far

Older posts »