Thought of the Day

Feb 03 2016 Published under Science Publication, Scientific Publication

Dear Editor of Journal,

I find it interesting to review the manuscripts of ours that you have rejected on impact and quality grounds* over the past several years. We quite naturally found publication homes elsewhere for these manuscripts; this is how the system works. No harm, no foul. In none of these cases, I will note, was the manuscript radically changed in a way that would fundamentally alter the assessment of quality or impact reflected in your reviewers' comments. Yet I note that these papers have been cited in excess, sometimes far in excess, of your Journal's Impact Factor. Given what we know about the skew in the citation distributions that contribute to a JIF, well, this positions our papers quite favorably within the distribution of manuscripts you chose to accept.

This suggests to me there is something very wrong with your review process insofar as it attempts to evaluate quality and predict impact.


*Journal fit is another matter entirely; I am not talking about those complaints.

10 responses so far

  • Jonathan Badger says:

    That's kind of why the PLOS ONE model makes more sense IMHO. It's one thing when a reviewer says the conclusions aren't supported (whether or not the reviewer is correct), but quite another when the reviewer says the paper is technically fine but "boring" and thus should be rejected on those grounds. Given that most people no longer read journals as such but rather individual articles found through title and keyword searches, "boring" doesn't really cut it. Let the reader decide.

  • drugmonkey says:

    No harm, no foul. Like I keep saying there *is* already a PLOS ONE type model at work in my field and there has been for decades. You can always get a reasonably decent study in somewhere, eventually. In the end. No matter how allegedly boring, just so long as it is reasonably sound. I term these the dump journals. In these particular cases, however, we did not have to resort to the very bottom. More like what I consider to be the meat of the society journal distribution.

  • jojo says:

    Heh. It will be nice to have a longer view so I can say HA like this some day. For me, my most cited papers inevitably seem to be the ones I am least excited about.

    Any plans to talk about this douchebag?

  • drugmonkey says:

    GRRRRRRR. That guy. Just.....

  • clueless noob says:

    DM: "You people work on citations, right?"
    Editor: "Yeah. "
    DM: "Big mistake. Big. Huge. "

  • drugmonkey says:

  • PepProf says:

I don't get the notion that a paper's citations exceeding the Impact Factor is proof of 'overperformance'. Can someone explain the logic of this? I have many publications in medium-IF journals with 10-20 citations. Does that mean they all belong somewhere better? I just get confused when people use this as a sort of rule of thumb.

  • Juan Lopez says:

    DM, I have felt the exact same as you describe. But it's a fallacy:

I have rejected papers that were later published and are doing well in citations. What is broken? That many people, including the reviewers at the publishing journal, don't really know the details of the method used. Instead they are swayed by the pretty images and sweet writing. I know a few people who share my opinion that the papers are fundamentally flawed and the data invalid. But once published there really isn't much to do. The authors are a powerful group and I am not going to pick a fight with them unless I have anonymity protection. So, with enough clinicians buying into the marketing, the papers are doing well.

  • drugmonkey says:

PepProf: it has to be over the initial 2-year citing interval. But yes. If you buy the logic of a paper having sufficient importance and impact for a given journal, and you grudgingly acknowledge that JIF is a metric of that impact (which you should, for this purpose), then a paper that beats the journal's JIF over the appropriate citing interval is overperforming, and one that falls below is underperforming. Of course, you might want to take the skew into account, but this just means the ones above the JIF are doing even better, since they score well above the median. The ones below the JIF might or might not be below the median.

Suppose the reviewers have to state whether this manuscript is or is not in the top 25% or top 10% of all papers (this is a common feature of grant review systems, in my experience). And say the journal will not consider anything below the top 25%, for example. Then if the reviewers say it is not in the top 25%, it gets rejected, and it subsequently outperforms at least *half* of those allegedly top-25% papers... what else does this mean to you? The estimation was incorrect. It is the only possible interpretation of what just happened.
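    The skew point above can be checked with a toy calculation. The citation counts below are invented purely for illustration, but real journal citation distributions are similarly right-skewed, which is why the JIF (a mean) sits above the median paper:

    ```python
    # Hypothetical citation counts for one journal's papers over the
    # 2-year JIF window -- invented numbers, right-skewed like real data.
    citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 8, 12, 25, 60]

    jif = sum(citations) / len(citations)            # the JIF is a mean
    median = sorted(citations)[len(citations) // 2]  # the "typical" paper

    print(f"JIF (mean): {jif:.1f}")   # 8.5
    print(f"Median:     {median}")    # 3

    # Skew pulls the mean above the median, so any paper cited above
    # the JIF necessarily beats the median accepted paper as well.
    # A paper cited below the JIF may still be at or above the median.
    ```

    In this toy journal, a rejected paper that went on to collect 10 citations would beat the JIF of 8.5 and therefore outperform well over half of the papers the journal accepted.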

  • Mugwump says:

    That's an unbelievably snotty thing to say.
