A significant change in Impact Factor

Jul 02 2012 Published under Impact Factor

I received a kind email from Elsevier this morning, updating me on the amazing improvement in 2011 Impact Factor (versus 2010) for several journals in their stable of "Behavioral & Cognitive Neuroscience Journals". There are three funny bits here. The first is that the style was:

2010 Impact Factor WAS 2.838, 2011 Impact Factor NOW 3.174

You have to admit the all-caps is a crack up. Second, THREE decimal places! Dudes, this shit is totally precise and that means....sciencey.

As you know, however, DearReader, I have a rather unhealthy interest in the hilariousity of the Impact Factor and I was thinking about the more important issue here.

Is this a significant difference? Who gives a hoot if the IF goes up by 0.336? Is this in any way meaningful?

I suspect the number of available citations is ever on the increase. The business of science is ever expanding, the pressure to publish relentless and the introduction of new journals continues. This means that IFs will be on some baseline level of background increase over time. This is borne out, I will note, by my completely unscientific tracking of journals most closely related to my interests over the past *cough*cough* years *cough*cough*. They all seem to have gradually inched up a few decimal points year in, year out.

For the 0.336 increase, let us do a little seat-of-the-pants calculation. Say a journal publishes 20 articles per issue, 12 issues per year....480 items over the two-year tracking interval used to calculate the IF. An increase of 0.336 across 480 items works out to roughly 160 extra citations*. If only 17% of the articles got two more citations each, this would account for it. If a mere 3% of articles turned out to be AMAZING for the sub-sub-sub field and won an extra 10 citations each....this would roughly account for the change.
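For the spreadsheet-inclined, the seat-of-the-pants arithmetic above can be sketched out. All the numbers here are the hypothetical figures from this post (journal size, citation percentages), not real journal data:

```python
# Back-of-the-envelope check of the Impact Factor arithmetic above.
# Every number is a hypothetical assumption from the post, not real data.

articles_per_issue = 20
issues_per_year = 12
tracking_years = 2
items = articles_per_issue * issues_per_year * tracking_years  # 480 citable items

if_increase = 3.174 - 2.838  # the advertised 0.336 bump
extra_citations = if_increase * items  # citations needed to move the IF that much
print(round(extra_citations))  # 161 -- "round it to 160"

# Scenario 1: what fraction of articles each picking up two more citations
# would cover the gap?
print(round(extra_citations / 2 / items * 100))  # 17 (percent of articles)

# Scenario 2: 3% of articles turn out AMAZING and win 10 extra citations each
print(round(0.03 * items * 10))  # 144 citations -- roughly covers the gap
```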

For one thing, I can now see why editors would be willing to try the "Cite us a few more times" gambit with authors in the review stage. It doesn't take many intimidated authors throwing in 4-5 more citations of recent work from the journal in question to move a third of an impact factor.

Heck, just one solo-operator author could probably make a notable impact over two years. If I put everything we submit into a single journal over two years' time, and did my level best to cite everything plausibly relevant from that journal, I could easily generate 40 extra citations in two years. Probably without anyone so much as noticing what I was up to!
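For scale, here is what that solo-operator scenario would be worth, again using the hypothetical 480-item journal from the thought exercise above:

```python
# How much one determined self-citing author moves the needle,
# using the same hypothetical 480-item, two-year journal window.
items = 480               # citable items in the IF window (hypothetical)
extra_citations = 40      # what one diligent solo operator might add

bump = extra_citations / items  # IF points gained
print(round(bump, 3))  # 0.083 -- a real dent in a sub-3 impact factor
```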

The fact that the vast majority of the society-rank journals I follow fail to experience dramatic IF gains suggests that nobody is trying to game the system like this, and that the seemingly universal increases are a reflection of overall growth in the total number of publications. But it does make you wonder about those few journals that managed to gain** a subjective rank over a few years' time, say from the 2-4 range to the 6-8 range, and just how they pulled it off.

Additional:
This tool permits you to search some citation trends by journal.
__
*Yes, I realize the overlap year for adjacent annual IFs. For our thought exercise, imagine it is non-overlapping years if this bothers you.

**My hypothesis is that an editorial team would only have to pull shenanigans for 2-4 years and after that the IF would be self-sustaining.

14 responses so far

  • Beaker says:

    This is slightly lateral to the topic of this post, but have you tried the Google citation analysis? Go to Google Scholar and click on the "my citations" box in the upper right. The interface kicks butt compared to the crappy Web of Science/Knowledge (what's the difference/hell if I know). It also has a metric for journal impact, which may or may not suffer from the same weaknesses as Impact Factor. I have not explored it much yet. Regardless, I will never need to use ISI-generated metrics again--ISI went to hell as soon as they got bought by mega-publisher Thomson (who also decimated Endnote when they swallowed it up).

  • Dude, I know you enjoy this kind of mental masturbation, but the *vastly* more important means of gaming IF is to publish lots of review articles, which tend to get cited *much* more than research articles.

    I would really like to see the IFs of the "high-impact" field-specific journals without all the review articles then go head-to-head with the premiere society journals, most of which publish almost no review articles at all. Some of these journals publish several special review article issues every goddamn motherfucken year.

  • Budde says:

    Don't hate the playah, Proffe, hate the game.

  • DJMH says:

    I would really like to see the IFs of the "high-impact" field-specific journals without all the review articles then go head-to-head with the premiere society journals, most of which publish almost no review articles at all.

    Betcha in this age of google scholar it would not be too hard to calculate. J Neurophys did an analysis of male/female first, last authors, editors, and editorial outcomes a year or so ago--maybe they can put some effort into this next. I bet it would be a highly cited article, so win-win-win...

  • Virgil says:

    Actually, a few journals I'm on the EBs of have posted lower IFs this year. Most likely to do with contraction in funding (surprise... yawn... people publish less in specialty journals when funding gets tough, preferring to hang on for a big impact paper). In at least one case I know it's because the # of review articles has dropped precipitously (surprise... yawn... people don't have time to write reviews when funding gets tough). In another case it's because a competing journal is literally handing our ass to us on a plate (surprise... yawn... in a very small field, addition of 1 or 2 new journals can have a huge effect on the existing players in that field).

  • Susan says:

    What does the t-test say? Because when I see the word 'significant' I immediately look for quantification. 'Notable', fine, but don't be using the word 'significant' when you don't mean 'statistically significant'. That's the difference between science-y writing and science writing.

  • DrugMonkey says:

    People are "hanging on for big impact"? That would be insane. I would suspect the time spent seeking funding is just plain reducing the time available for paper output. Or else people are pursuing more Preliminary Data at the expense of closing out stories.

    Anyone out there holding off publishing because they want some key figures to be "Preliminary" and not already-completed?

  • DrugMonkey says:

    Susan- you are being ridiculous. The thieving of the word "significant" without modifier is beyond twee and moving into actively harmful territory.

  • I try to ignore IF as much as I can; it's hard though.

  • Did you see yesterday's Retraction Watch post titled "A first? Papers retracted for citation manipulation"?

    "In what appears to be a first, two papers have been retracted for including citations designed to help another journal improve its impact factor rankings. The articles in The Scientific World Journal cited papers in Cell Transplantation, which in turn appears to have cited to a high degree other journals with shared board members.

    [...]

    Self-citation at journals — in which papers cite other recent articles in the journal to boost the title’s impact factor, a measure of how often, on average, studies are cited in the previous two years — is a well-described phenomenon. Those who get caught practicing it are barred from Thomson Reuters’ Journal Citation Reports, a ranking of titles by impact factor."

  • DrugMonkey says:

    that is very interesting Cath. yes, I was aware of the accusation but didn't realize it had been verified or sanctioned. I am interested in how many is the threshold for "excessive". Like I said here, I think you could be more modest in your machiavellian citing practices and totally get away with it.

  • DrugMonkey says:

    I do happen to know an Editor that appears to publish a high percentage of his own work in his journal. I don't have any reason to question the quality and I always figured at his stage of the game he just didn't give a crap about IF or appearances and just wanted to get the data out. I hadn't considered that maybe he figures he gets more citations than average for the journal and is trying to boost the overall IF. Or maybe he's doing the subterranean extra-cite thing... 🙂

  • JF says:

    Yes, a single author can change the impact factor of a journal. Or the research ranking of a university.

    I was just reading this right before coming over here:

    http://janneinosaka.blogspot.com/2012/07/should-you-trust-university-rankings.html

  • SMD says:

    Is there a minimum number of articles necessary for a journal to be considered for an impact factor? That is to say, if a journal were to only publish one review article per year for two years and those two papers were cited 40 times each, would it have an IF of 40 starting that third year even though that only represents two articles?
