Answering my question from yesterday: it appears I have done relatively little bashing of the Impact Factor in recent months. Odd, that. And since our beloved commenter whimple is stirring up trouble, I thought I'd repost something that appeared on the old blog on Sept 21, 2007. I also ran across this post relevant to the malleability of the IF.
People argue back and forth over whether journal Impact Factor, the h-index, total cites, per-paper cites, etc., should be used as the primary assessment of scientific quality. Many folks talk out of both sides of their mouths, bemoaning the irrelevance of journal Impact Factor while beavering away to get their papers into those very journals and using the criterion to judge others. In this you will note people arguing whichever case makes their own CV look the best. I have a proposal:
Instead of focusing on the journal's Impact Factor, what we really need to focus on is how a given paper performs relative to expectation. After all, you should publish your paper in the most appropriate forum, right? Appropriate in terms of length, completeness, desired audience, methodologies employed, scientific subfield, etc. So if your paper is in the right journal, what we really need to know is how it performs, citation-wise, versus the average for that journal. If you are in a very large, active, citation-generating field, you should be judged against that baseline rather than against the norms of smaller, slower fields.
So we need a d-index (if Hirsch can call his the h-index, I'm happy to follow suit). This would be the number of actual citations for your paper, expressed as a z-score against the distribution of citations in the journal over the previous, oh, ten-year interval.
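To make that concrete, here is a minimal sketch of the calculation in Python. The citation counts are invented for illustration; in practice you'd feed in per-paper counts for everything the journal published over whatever window you pick.

```python
# A minimal sketch of the proposed d-index. The journal citation counts
# below are invented for illustration.
from statistics import mean, stdev

def d_index(paper_cites, journal_cites):
    """Citations for one paper, expressed as a z-score against the
    distribution of citations across the journal's papers."""
    return (paper_cites - mean(journal_cites)) / stdev(journal_cites)

# Hypothetical journal with a typically long-tailed citation distribution.
journal = [0, 1, 2, 2, 3, 5, 8, 12, 20, 45, 110]

print(round(d_index(20, journal), 2))  # 0.03: barely above the journal mean
print(round(d_index(2, journal), 2))   # -0.51: a typical paper sits below it
```

Notice how the long tail inflates the standard deviation, so even a decently cited paper earns only a small positive d. More on that skew below.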
"but, but....I got my paper into Science. That proves it is good!" Nope, sorry, "true peer review starts after publication".
"but it will change over time as cites start to accumulate". So? A paper's true worth should only be determined over time as parts of it are replicated and/or applied to other scientific questions. This is what any cite metric is about anyway. The current focus on journal Impact is simply a very indirect predictor.
"this will encourage journal-slumming" No it won't. All other influences which determine where papers are submitted are not going to magically disappear.
UPDATE 09/25/07: From the article whimple identified in the comments, here is a description of the citation distribution "from the theory section of the SPIRES database in high-energy physics":
About 50% of all papers have two or fewer citations; the average number of citations is 12.6. The top 4.3% of papers produces 50% of all citations whereas the bottom 50% of papers yields just 2.1% of all citations.
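That kind of skew matters for the d-index: with a median of two cites and a mean of 12.6, the outliers stretch the scale so that most papers land below zero. One could also express the same comparison as a percentile rank rather than a z-score; here is a sketch of that variant (my hedged tweak, not part of the original proposal), again with made-up counts.

```python
# A sketch of a percentile-rank variant of the same idea, given how
# skewed real citation distributions are. Counts are again invented.
def percentile_rank(paper_cites, journal_cites):
    """Fraction of the journal's papers cited no more than this one."""
    return sum(c <= paper_cites for c in journal_cites) / len(journal_cites)

journal = [0, 1, 2, 2, 3, 5, 8, 12, 20, 45, 110]
print(round(percentile_rank(12, journal), 2))  # 0.73: beats ~73% of papers
```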
Also, I added this comment:
I finally ran across the Nature editorial comments I was remembering when I wrote this entry.
The most cited Nature paper from 2002–03 was the mouse genome, published in December 2002. That paper represents the culmination of a great enterprise, but is inevitably an important point of reference rather than an expression of unusually deep mechanistic insight. So far it has received more than 1,000 citations. Within the measurement year of 2004 alone, it received 522 citations. Our next most cited paper from 2002–03 (concerning the functional organization of the yeast proteome) received 351 citations that year. Only 50 out of the roughly 1,800 citable items published in those two years received more than 100 citations in 2004. The great majority of our papers received fewer than 20 citations.
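As a back-of-the-envelope check (my arithmetic, assuming those roughly 1,800 citable items are the IF denominator), the mouse genome paper alone accounts for a noticeable chunk of Nature's measurement year:

```python
# Rough contribution of the mouse genome paper to Nature's 2004 IF,
# assuming the ~1,800 citable items form the denominator (my assumption).
print(522 / 1800)  # 0.29 -> roughly 0.3 IF points from a single paper
```

One paper moving the journal-wide number by a few tenths is exactly the malleability problem: the Impact Factor is driven by a thin tail, not by the typical paper.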