Repost: A Modest Proposal on Impact Factors

Nov 18 2009 | Science Publication

Answering my question from yesterday, it appears that I have done relatively little bashing of the Impact Factor in recent months. Odd, that. And since our beloved commenter whimple is stirring up trouble, I thought I'd repost something that appeared Sept 21, 2007 on the old blog. I also ran across this post relevant to the malleability of the IF.


People argue back and forth over whether the Impact Factor of journals, the h-index, Total Cites, specific paper cites, etc. should be used as the primary assessment of scientific quality. Many folks talk out of both sides of their mouths, bemoaning the irrelevance of journal Impact Factor while beavering away to get their papers into those journals and using the criterion to judge others. In this you will note people arguing the case that makes their CV look the best. I have a proposal:


Instead of focusing on the journal's Impact Factor what we really need to focus on is how a given paper performs relative to expectation. After all, you should publish your paper in the most appropriate forum, right? Fit on length, completeness, desired audience, methodologies employed, scientific subfield, etc. So if your paper is in the right journal, what we really need to know is how it performs, citation-wise, versus the average for the journal. If you are in a very large, active and citation-generating field you should be judged against this metric, rather than against smaller and slower fields.
So we need a d-index (if Hirsch can call his the h-index, I'm happy to follow suit). This would be the number of actual citations for your paper expressed as a z-score against the distribution of citations in that journal over the previous, oh, ten-year interval.
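To make this concrete, here is a minimal sketch of how such a d-index might be computed, assuming you have per-paper citation counts for the journal's previous ten years. The function name and numbers are mine, purely for illustration; since real citation distributions are heavily skewed, a plain z-score is a crude first pass at best:

```python
from statistics import mean, stdev

def d_index(paper_cites, journal_cites):
    """A paper's citation count expressed as a z-score against the
    citation distribution of its journal.

    paper_cites   -- citations received by the paper in question
    journal_cites -- citation counts for every paper the journal
                     published over the previous ten years
    """
    return (paper_cites - mean(journal_cites)) / stdev(journal_cites)

# Invented example: a paper with 30 cites in a journal whose ten-year
# distribution is, as usual, skewed toward low counts.
journal = [0, 1, 1, 2, 2, 3, 5, 8, 12, 40, 75]
print(round(d_index(30, journal), 2))
```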
Issues:
"but, but....I got my paper into Science. That proves it is good!" Nope, sorry, "true peer review starts after publication".
"but it will change over time as cites start to accumulate". So? A paper's true worth should only be determined over time as parts of it are replicated and/or applied to other scientific questions. This is what any cite metric is about anyway. The current focus on journal Impact is simply a very indirect predictor.
"this will encourage journal-slumming" No it won't. All other influences which determine where papers are submitted are not going to magically disappear.
UPDATE 09/25/07: From the article whimple identified in the comments, a description of the citation distribution "from the theory section of the SPIRES database in high-energy physics":

About 50% of all papers have two or fewer citations; the average number of citations is 12.6. The top 4.3% of papers produces 50% of all citations whereas the bottom 50% of papers yields just 2.1% of all citations.

This also put me back on the track of two relevant papers on citation distributions from Per O. Seglen (1992; 1994).



Also, I added this comment:
Finally ran across some Nature editorial comments I was remembering when I wrote this entry.

The most cited Nature paper from 2002−03 was the mouse genome, published in December 2002. That paper represents the culmination of a great enterprise, but is inevitably an important point of reference rather than an expression of unusually deep mechanistic insight. So far it has received more than 1,000 citations. Within the measurement year of 2004 alone, it received 522 citations. Our next most cited paper from 2002−03 (concerning the functional organization of the yeast proteome) received 351 citations that year. Only 50 out of the roughly 1,800 citable items published in those two years received more than 100 citations in 2004. The great majority of our papers received fewer than 20 citations.

18 responses so far

  • Pascale says:

    Death to the Impact Factor!

  • Eric Lund says:

    "but it will change over time as cites start to accumulate". So? A paper's true worth should only be determined over time as parts of it are replicated and/or applied to other scientific questions.
    Exactly. The IF for a given year is by design limited to citations made in that year to papers published in the two preceding years, according to Wikipedia. (That is even worse than I thought; I had been under the impression that the window was five years.) That biases the IF in favor of "hot papers" which draw lots of immediate citations, rather than influential papers which accumulate their citations over a period of years if not decades. It also means, in practice, that a fraudster like Jan-Hendrik Schön (who specialized in writing hot GlamourMag papers) will tend to pump up the IFs of the journals he publishes his worthless papers in, and those IFs will remain higher than they should be because the IF computation doesn't care that nobody cites his papers anymore.
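    A toy version of that two-year window, with invented numbers (this is just the textbook definition, not ISI's actual pipeline):

    ```python
    # Toy illustration of the two-year Impact Factor window; all
    # numbers are invented.
    # 2009 IF = citations received in 2009 by items published in
    # 2007-08, divided by citable items published in 2007-08.
    cites_2009_to_2007_08_items = 2700
    citable_items_2007_08 = 750
    print(cites_2009_to_2007_08_items / citable_items_2007_08)  # 3.6
    ```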

  • Dr Becca says:

    Agreed, DM! However, in the science world, reward and recognition come infrequently and unpredictably. We work our butts off for modest (to speak euphemistically) salaries, we've got 7-12 years of training under our belts, blah blah etc. A high-IF publication is one of the very few tangible symbols of our success we have, and while most people will be quick to point out how recent Science paper XYZ is crap, there isn't a single one of us who wouldn't be really, really happy to have one of our own. False as IF may be, we have little else to latch on to, you know?

  • LadyDay says:

    I agree with you that true "peer review starts *after* publication" and that citations per paper should be judged in context (e.g. if the number of citations is larger or smaller relative to other papers within the field).
    *However* there are some cases where otherwise excellent papers will still not be given appropriate credit.
    1.) Papers in which the interpretation of the data is faulty, and which are then cited by other papers in a "negative" sense. For instance, a publication that is cited by other groups because the authors of the initial publication failed to take into account parameters that affected their data.
    2.) Papers which present data that are "ahead of their time": they will probably be cited more in the long run, but in the immediate future they may not be cited at all. The concepts are novel and not yet accepted by the mainstream.
    That said, I agree with you that impact factor is a rather shallow way to "measure" someone's science (I'm paraphrasing here). How many of us know of papers that don't merit Science or Nature "status" but are published there, anyway? And, meanwhile, similar and often more in-depth papers containing data with better controls are published in lower impact journals. Sometimes, it appears to be the name that's on the paper that gets it into one journal vs. another.

  • Dr Becca says:

    Sorry, I feel like I hit "post" before I finished my thought. What I was getting at is that people will likely be reluctant to let go of IF, even if they mostly hate it.

  • DrugMonkey says:

    there isn't a single one of us who wouldn't be really, really happy to have one of our own.
    Sadly, you are quite mistaken. I say sadly because you have drunk the Kool-Aid and are poised to continue the problem. Sad.
    I, for one, am entirely uninterested in such tangible measures of my science. I have one interest and one interest only in GlamourPubs. Because they would enhance my continued ability to do what I think is really important scientifically. Things which are essentially antithetical to the GlamourPub game, btw.
    Take GlamourBedazzlement out of the eyes of those who might be judging me in the future and my interest goes right to zero. It is simply no marker for me.
    As you can tell from the post though, I do get a charge out of being cited. Also, out of download stats.

  • David says:

    If I were a journal publisher, I would publish the narrow-circulation (meaning: nobody ever sees it) "Journal of Articles Referencing Articles in My Other Journals." Each article in it would be "recent developments in science show lots of stuff", followed by a thousand citations.
    There's a similar strategy for authors:
    http://www.cs.rutgers.edu/~watrous/citation-chain-letter.html

  • Eric Lund says:

    A highly cited publication is one of the very few tangible symbols of our success we have
    Fixed your typo, Becca. When you get further down your career path, that's what people will want to know. I'm happy to see one of my papers get 40 citations (which is quite high for my field), and if it does I don't care whether it was in a GlamourMag or something like the Journal of Exhaustive Studies of the Inconsequential. The h-index has its own problems, but at least it is tied to your work, not letting you bask in the glory of any GlamourPubs you may have.
    Journal of Articles Referencing Articles in My Other Journals
    Elsevier has already managed to produce something close to this, except that it's a journal they actively market, and it's closer to a Journal of Articles Referencing Other Articles in This Journal. Google the name El Naschie if you want to know more. This is of course yet another way to game the IF (and the h-index): the journal which El Naschie formerly edited has one of the highest IFs of any mathematical journal despite being full of articles that many in the field consider pure nonsense (and you will notice a certain name appearing frequently in the author lists).

  • DSKS says:

    What happens if one publishes a paper so devastatingly thorough, so exhaustive, so outstandingly the "Last Word", that everybody in the field is struck dumb and incapable of coming up with anything further to submit on the subject that would warrant citation of the aforementioned work?
    Because that happens to me all the time.

  • LadyDay says:

    Eh, re-reading my original comment (#4), I realized that my head-cold is impacting my language skills. : ) So, that second sentence should be: "*However* there are some cases where papers will still not be given appropriate credit."
    (Just take out the words "otherwise excellent").

  • Alex says:

    DSKS-
    What a coincidence! That happens to me all the time too!
    We should start a club.

  • antipodean says:

    The impact factor is crappy but as Dr Becca says it's the only thing that works for us young 'uns.
    Every other metric was invented in order to pump up the importance of old people.
    We don't have until retirement day to demonstrate our scientific worth. Indeed, a very senior academic in the major grant-awarding body in my guessable country told me as much recently. If you're a postdoc, then impact factors are the way to go. If you're a senior postdoc/junior faculty, then you need IF and some citation counts (and the h-index). If you're old, argue against IF and trumpet your h-index and citations.
    IF is the worst system ever devised, except for all the others...

  • BillJ says:

    The two-year timeframe of the impact factor makes sense for some fields. ISI offers the five-year version and the one-year immediacy index. You have to look at the average cited half-life in the field to judge whether older articles are frequently of lasting value, and hence which duration you ought to be looking at (a sketch of the half-life calculation is below). Five years really isn't long enough for most fields, though; I'd like to see a ten-year IF.
    Of course, all IF was designed to do is to highlight smaller journals with low total cites that are publishing valuable material. For that purpose, it works fairly well. It makes little sense to judge a researcher by the IFs of where she publishes.
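    For the curious, the cited half-life is just a median-age calculation; here is a minimal sketch, with invented citation ages:

    ```python
    # Cited half-life: the median age (in years) of the citations a
    # journal receives in a given year. Ages below are invented.
    def cited_half_life(citation_ages):
        """Smallest age y such that at least half of the year's
        citations point to articles y years old or younger."""
        ages = sorted(citation_ages)
        need = len(ages) / 2
        for seen, y in enumerate(ages, start=1):
            if seen >= need:
                return y

    print(cited_half_life([1, 1, 2, 2, 3, 4, 6, 9, 12, 20]))  # -> 3
    ```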

  • DrugMonkey says:

    It makes little sense to judge a researcher by the IFs of where she publishes.
    ...and yet we keep right on doing it BillJ. In some places this is built pretty explicitly into things like hiring and promotion/tenure.
    Journal editors keep right on handwringing/bragging in their annual Editorial Board meetings about one-point IF changes relative to their competition.
    Journal publishers put that IF up front and center on their Journal website.
    Is this the very definition of insanity?

  • Eric Lund says:

    all IF was designed to do is to highlight smaller journals with low total cites that are publishing valuable material. For that purpose, it works fairly well.
    I strongly disagree with that statement. Take a look at the table in the Wikipedia article. You will see that this metric favors journals that publish nothing but review papers (and therefore by design contain little or no original science): the top four IFs in 2003 belonged to biomedical review journals, and apart from Nature and Science (which publish papers from all fields including biomedical) the only non-biomedical journal in the top ten was a physics review journal. My primary scientific society publishes several journals, and again the one with the highest IF is purely a review journal. This should surprise no one, since it is easy (and frequently expected) to shorten the literature review in your own paper by referring readers to the review by Smith and Jones (2006).

  • eigen says:

    What do people think of Eigenfactor, which is like PageRank applied to journals? That seems fitting, because PageRank itself was inspired by citation analysis of journals.
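    A minimal power-iteration sketch of that idea on a made-up journal citation matrix (illustrative only; the real Eigenfactor algorithm differs in detail, e.g. it excludes journal self-citations):

    ```python
    import numpy as np

    journals = ["A", "B", "C"]
    # cites[i][j] = citations from journal i's articles to journal j
    cites = np.array([[0., 10., 2.],
                      [8., 0., 4.],
                      [1., 3., 0.]])

    # Column-stochastic matrix: M[j, i] = share of journal i's
    # outgoing citations that point to journal j.
    M = cites.T / cites.sum(axis=1)

    d = 0.85  # damping factor, as in PageRank
    rank = np.full(len(journals), 1 / len(journals))
    for _ in range(100):  # power iteration to the stationary vector
        rank = (1 - d) / len(journals) + d * M @ rank

    print(dict(zip(journals, rank.round(3))))
    ```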

  • neurolover says:

    Shouldn't that be Igon factor?
    I'm starting to suspect that the various discussions about IF, or about ranking grants using various criteria, all boil down to the same underlying issue: the impossibility of combining a real meritocracy with a tournament model. A tournament model implies that small differences are going to be magnified, creating huge winners and losers. That makes every small detail in the evaluation critical, because it will draw the threshold between winners and losers. But we are intrinsically incapable of ranking merit differences among individuals at a fine scale (especially predicting what they will accomplish in the future, rather than judging what they have already accomplished).
    (the same problem produces craziness in college admissions)

  • antipodean says:

    Eigenfactor strongly correlates with IF. It's providing essentially the same information.
