When was the last time I bashed the Impact Factor, anyway?

Nov 17 2009 · Published under Science Publication

There's a new entry up over at the Golden Thoughts blog (she's a nephrologist, so... yes) that talks about the all-important journal Impact Factor, Harold Varmus' opinion of same, and journals gaming the system.


Dr. Varmus pointed out that many of his most significant works appeared in "lesser" journals that served the appropriate audience for the science.
...
However, like all numbers, the IF can be gamed, and its validity has been questioned:
...
Dr. Varmus pleaded for an end to IF insanity.
...
The IF works about as well right now as the Bowl Championship Series algorithm does for college football.

Ouch, that last one is an insult that goes further than I ever have.
Go Read.
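For anyone who has never looked under the hood: the two-year IF of a journal in year Y is computed, roughly, as

    \mathrm{IF}_Y = \frac{\text{citations in year } Y \text{ to items published in } Y-1 \text{ and } Y-2}{\text{citable items published in } Y-1 \text{ and } Y-2}

Note the asymmetry: the numerator counts citations to everything the journal printed, while the denominator counts only "citable items" (research articles and reviews, not editorials or letters). Negotiating what counts as "citable" is one of the classic levers for gaming the number.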

11 responses so far

  • Eric Lund says:

    There is another issue I have encountered in my field, which probably doesn't arise in biomedical fields. Specifically, when somebody builds a new instrument or facility, they publish an article describing it in a certain journal which specializes in this sort of article. Anybody who uses data from this instrument/facility then cites the instrument paper, inflating the journal's apparent importance. The distortion is worse than that from review papers, which tend to cite a few dozen to a few hundred other papers; instrument papers have a much shorter reference list on average, and often several of those citations are to other papers in the same journal, if not the same issue.
    The table at the Wikipedia entry shows several of the difficulties with IF. The top four journals by IF appear to consist only of review articles, as does the only journal in the top ten (Reviews of Modern Physics) that does not publish biomedical articles. As always, comparing journals across fields is problematic no matter which metric you use; biomedical articles, at least the ones that appear in the major journals, seem to get cited at a much higher rate than articles in other fields. PageRank seems to do the best job of the three methods shown in the table; its top five entries include the Big Three general-interest pubs (Nature, Science, and PNAS) plus two that each cover one broad field (Journal of Biological Chemistry and Physical Review Letters) and are generally considered the best in that field. Of course, PageRank can also be gamed, as was shown several years ago in the "miserable failure" incident.
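    Since Eric brings up PageRank: here's a minimal sketch of the power-iteration idea run on a toy citation graph, just to show the mechanics. The journal names, link structure, and damping value below are invented for illustration; this is not Google's (or anyone's) production implementation.

        # Minimal power-iteration PageRank on a toy journal citation graph.
        # Journal names and who-cites-whom are made up purely for illustration.

        def pagerank(links, damping=0.85, iterations=100):
            """links maps each journal to the list of journals it cites."""
            nodes = list(links)
            rank = {n: 1.0 / len(nodes) for n in nodes}
            for _ in range(iterations):
                new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
                for n, cited in links.items():
                    targets = cited or nodes  # dangling nodes spread rank evenly
                    for m in targets:
                        new_rank[m] += damping * rank[n] / len(targets)
                rank = new_rank
            return rank

        # A "methods" journal that everyone cites ranks high even though
        # its own reference lists are short.
        citations = {
            "GlamourMag":   ["MethodsJ", "FieldJ"],
            "FieldJ":       ["MethodsJ", "GlamourMag"],
            "MethodsJ":     ["FieldJ"],
            "ReviewAnnual": ["GlamourMag", "FieldJ", "MethodsJ"],
        }
        for journal, score in sorted(pagerank(citations).items(),
                                     key=lambda kv: -kv[1]):
            print(f"{journal}: {score:.3f}")

    The point isn't the particular scores; it's that rank flows along citations recursively, so a citation from a highly ranked journal counts for more — which also means, as noted above, that it can be gamed by anyone who controls enough link structure.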

  • whimple says:

    I love the impact factor. It's not perfect, but it gets the point across that a) there are quality journals and b) there are suck journals. The IF also nicely makes the point that the very best field-specific journals in some cases also suck in the big scheme of things. People wasting their lives in these fields don't like to get this message, but it's true nonetheless.

  • Alex says:

    The IF also nicely makes the point that the very best field-specific journals in some cases also suck in the big scheme of things. People wasting their lives in these fields don't like to get this message, but it's true nonetheless.
    Eh, in a lot of physical science fields the best journals have impact factors below most biomedical journals. I don't know that a person working in optics or materials science is really wasting his/her career.

  • MRW says:

    whimple - except that it doesn't make that point at all.
    There are field-specific cultures, such as the typical lengths of the references sections, that affect impact factors. That, among other things, makes impact factors a poor metric of how much a field "sucks".

  • DrugMonkey says:

    Sometimes I swear... if there were not a whimple I'd have to make one up just for the laffs...

  • An Engineer says:

    Oh whimple...
    Virtually every single engineering and applied-science-focused journal has a much lower impact factor than the theoretical journals. But I, for one, am really happy that there are people publishing about litho machines, confocal microscopes, atomic force microscopes, etc. Because without tools like those, most science would have a real big problem doing experiments.

  • DSKS says:

    Heh heh. It must be April 1st on Whimple's planet.

  • Pascale says:

    For the record, I am a Pediatric Nephrologist, in a field that consists of fewer than 500 active MDs and a handful of PhDs interested in renal development at any recent point in time. We have a single journal devoted to our field, and its impact factor reflects its limited audience. Occasional studies, like big clinical trials in the field, will make the glamor journals (The ESCAPE trial recently made NEJM: http://content.nejm.org/cgi/content/short/361/17/1639).
    So my field, based on the low IF its major journal receives, is a sucky, unimportant field, whimple? My patients and their parents don't believe that for a minute.

  • whimple says:

    So my field, based on the low IF its major journal receives, is a sucky, unimportant field, whimple? My patients and their parents don't believe that for a minute.
    Relative to other fields, yes. What's your view? Everything is equally important?

  • Alex says:

    In some absolute sense, some fields probably are less important than others. Answering a question with broader implications is more important than answering a question with narrower implications. Curing a disease that afflicts millions is more important than curing a disease that afflicts thousands.
    But that's different from saying that people in the less important fields are wasting their lives. That would imply that everybody SHOULD be doing the same thing. The most important fields (by whatever measure) might be saturated with people, and so it might be a better use of one's talent to join a small field and make a big impact than to join a big field and make a smaller impact. Some people might be better suited to different fields. And at the end of the day, actually solving a less significant problem is still more important than joining a herd and making little progress on a big problem.
    So, it's one thing to say that one field is more important than another and should get more people and commensurate dollars, and quite another to say that people in small fields are wasting their lives.

  • qaz says:

    Come on, people. We all know that science doesn't work by immediate implications. The effects of most scientific work aren't felt for 20 to 30 years. To say that the measure of scientific importance is the two-year impact factor is short-sighted, stupid, and completely at odds with what has made science great over the last several centuries.
    And we all know that IF is completely contaminated by culture. Physicists, mathematicians, computer scientists, geneticists, and neuroscientists all have very different citation cultures, which changes the IF of their fields. Physicists tend to cite recent papers, but no one needs to cite Einstein. Mathematicians cite even less (they don't need to when everything follows by provable steps). Computer scientists tend to just cite the review, while neuroscientists tend to cite the first paper and a review. Geneticists cite everything under the sun. This completely changes the average IF within a field.
    Is IF valid within a field? I doubt it. Is IF a valid way to compare fields? Absolutely not.
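    To put a toy number on qaz's culture point: if a field's citations mostly stay in-field, then every reference given is a citation received, so the field-wide average citation rate simply tracks reference-list length, independent of quality. A back-of-the-envelope sketch (paper counts and reference habits invented for illustration):

        # Toy model: a closed field where every reference lands on another
        # recent paper in the same field. All numbers are invented.
        import random

        def mean_citations_received(n_papers, refs_per_paper, seed=0):
            """Average citations per paper when all references stay in-field."""
            rng = random.Random(seed)
            received = [0] * n_papers
            for _ in range(n_papers):          # each paper hands out its refs
                for _ in range(refs_per_paper):
                    received[rng.randrange(n_papers)] += 1
            # Total given == total received, so the mean is refs_per_paper exactly.
            return sum(received) / n_papers

        print(mean_citations_received(1000, 40))  # genetics-like culture -> 40.0
        print(mean_citations_received(1000, 8))   # math-like culture     ->  8.0

    In this toy accounting, a journal at the top of a short-reference-list field simply can't reach the IF of journals in a long-reference-list field, which is qaz's point about cross-field comparisons.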
