CV alt metrics and Glamour

Nov 19 2015 Published by drugmonkey under Uncategorized

Putting citation counts for each paper on the academic CV would go a long way towards dismantling the Glamour Mag delusion and reorient scientists toward doing great science rather than the "get" of Glam acceptance.
Discuss.

41 responses so far

  • MoBio says:

    Perhaps...

    I just dislike the notion of having to tell reviewers how many times a paper has been cited; call it wishful thinking, but I'd hope they would already 'know' what work has been done in the field and whether or not one's own papers had heft.

  • jmz4gtu says:

    It may help the odd duck with a highly cited paper in a low impact journal, but for the most part, high citation rates correlate with being in a glam journal, in my experience. I know my boss always directs me to cite the relevant paper with the highest impact factor if two papers both show the same thing (because of the stupid character-count restrictions that extend even to the methods section).

    "I just dislike the notion of having to let reviewers know how many times a paper has been cited due to wishful thinking that they would 'know' what work has been done in the field and whether or not one's own papers had heft."
    -That's virtually impossible in some fields, though.

    What would help is some sort of visualization of citation networks, kind of like those clusterf*ck transcription factor networks that the systems bio people use. You could shade and cluster by things like year of publication, impact factor, average h-index of the citing lab, maybe frequently associated MeSH terms (a rough sketch of the idea is below)...

    Speaking of reliable altmetrics, out of curiosity: what incentive would you guys require to read a paper in your field, write a ~500-word review, and give it a 1-10 rating? A $5 Starbucks card, a pat on the back, a discount with Thermo Fisher or Sigma?
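
A minimal sketch of jmz4gtu's citation-network idea, assuming networkx as the graph library; the papers, years, impact factors, and citation links below are all invented for illustration:

```python
# Toy citation network in the spirit of jmz4gtu's suggestion (all data invented).
import networkx as nx

# Hypothetical papers: id -> (publication year, journal impact factor)
papers = {
    "A": (2008, 31.4), "B": (2011, 4.2), "C": (2012, 9.8),
    "D": (2014, 3.1), "E": (2015, 28.0),
}

G = nx.DiGraph()
for pid, (year, jif) in papers.items():
    G.add_node(pid, year=year, impact_factor=jif)

# An edge points from the citing paper to the cited paper (made-up links).
G.add_edges_from([("C", "A"), ("D", "A"), ("D", "B"), ("E", "C"), ("E", "A")])

# In-degree = within-network citation count; shading/clustering by year,
# impact factor, MeSH terms, etc. would happen in the drawing layer
# (e.g. nx.draw with node_color keyed to a node attribute).
for pid in G:
    print(f"{pid}: cited {G.in_degree(pid)}x, "
          f"year {G.nodes[pid]['year']}, JIF {G.nodes[pid]['impact_factor']}")
```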

  • shrew says:

    I just spent a quality minute with the Google Scholar profiles of myself and some of my colleagues in various areas. Citations aren't everything, or so I tell myself.

    Specifically, citations are always going to be higher for science someone considers "vertically ascending" (that is, findings which are considered to be broadly applicable) versus science which reflects a "special case" (that is, findings which are perceived as reflecting specific vulnerability in a system to manipulation/condition X).

    Part of the issue is selling the special case as reflecting broader applicability and revealing interesting new neuroscience, a skill which I am increasingly developing as a survival tool.

    Part of the issue, however, is that EVERYTHING is a special case, but some special cases (cocaine, BDNF, the hippocampus) are more equal than others (substituted cathinones, sex differences, substantia innominata). I reject the idea that science is better simply because it works on a trendier subject and therefore racks up more cites.

    There just really isn't a substitute for actually cracking the cover of the manuscript and seeing whether the authors made the jerk-off motion for 10 pages, or whether they actually wrote a Sick Pape.

  • odyssey says:

    "I know my boss always directs me to cite the relevant paper with the highest impact factor if two papers both show the same thing"

    Your boss is an ass. Cite the one that was published first.

  • shrew says:

    odyssey and jmz -

    Oh yeah, I've been told that too. When I asked why, I was told the explicit justification was to help the paper land at a glam journal by showing that the cited literature forming the foundation of a paper was published at glam journals, displaying the significance and grandiosity of the current manuscript not unlike a peacock displays its tail. And upon hearing this I cried laser beams out of my eyeballs and laid waste to all civilization.

  • DJMH says:

    Citation counts for clinically related research are typically higher than those for basic science, so doing this only pushes us faster toward a world in which all of NIH is NCATS. Also, what shrew said: some areas of the brain are more popular than others; that doesn't always mean they're more valuable. #visualcortex

  • Noncoding Arenay says:

    "It may help the odd duck with a highly cited paper in a low impact journal, but for the most part, high citation rates correlate with being in a glam journal, in my experience."

    While that is true IME as well, I have found that the odd duck has a higher chance of being highly cited if it is considered vertically ascending work within the sub-field. In all these discussions, one needs to remember that there's "general glam" (C/N/S/etc) and then there's "sub-field glam". I have had great success with some of my sub-field glam publications.

  • drugmonkey says:

    #hippocampus

  • AcademicLurker says:

    On the CV I turned in with my tenure package, they asked for citation counts to be given for each paper.

  • jmz4gtu says:

    "Cite the one that was published first."
    -Usually this is the glam one, because, you know, novelty. However, not always. But I like to try to cite the sources of the first and the best evidence, which are not always the same paper (hence two citations). I just don't think there should be limits on citations as long as they're not being used for pedantry.

    @shrew, at least your boss is honest.

    @DM: I'm pretty sure that's just cause it's the easiest, most readily identifiable one for us neuro-newbs.

  • zb says:

    At least we know that the visual cortex does something. What does the hippocampus do, anyway?

  • Morgan Price says:

    I think citation counts are nice for papers that are a few years old, but what about recent papers? This seems like a major issue when hiring postdocs or new PIs. And maybe even for tenure cases or grant renewals?

  • shrew says:

    jmz - My boss hates all of this dick-measuring nonsense. It was the collaborator who is glam-obsessed who pushed it.

    Morgan Price - this is a good point. Metrics are ill suited to taking a long view.

  • Established PI says:

    I don't see how citation counts would help, and I see many downsides, e.g. favoring clinical or disease-related areas with larger numbers of investigators and disfavoring new papers. Using yet more metrics will be another excuse not to read the papers.

    Depressing to read the comments above from @jmz4gtu and @shrew about preferential citation of glam pubs. I'm starting to think I live in a bubble - I've never heard of this practice, which is wrong, wrong. But journals do contribute to citation problems by limiting references.

  • Ewan says:

    My tenure c.v. required (and was explicitly analysed for) both IF and citation counts. The latter seemed to carry more weight (which was good for me!) but I got the sense that there was a bar that basically said 'at least some should be high-IF.' [The other quantifications of pubs were total pub count, pub count at this institution, trend in citations/year total, and h-index.]
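
Since Ewan's committee leaned on the h-index, here is a quick sketch of how it is computed (the largest h such that h of one's papers each have at least h citations); the citation counts are made up:

```python
# h-index: largest h such that h papers each have >= h citations.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five hypothetical papers cited 50, 18, 7, 6, and 2 times: h = 4.
print(h_index([50, 18, 7, 6, 2]))  # -> 4
```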

  • Busy says:

    "high citation rates correlate with being in a glam journal, in my experience."

    I'm going from memory, but I seem to remember this wasn't the case for the median glam paper, which is cited about as often as papers in much less prestigious journals. However, it was true that of the set of highly cited papers, a large percentage appeared in glam magazines.

    In other words

    glam mag ==> highly cited

    is generally false, while

    highly cited ==> glam magazine

    is generally true (with lots of exceptions).
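
Busy's two implications are really a base-rate effect, and a back-of-the-envelope calculation shows how both can hold at once; all the numbers here are invented, not drawn from whatever study Busy half-remembers:

```python
# Invented counts: glam journals publish few papers but are enriched
# for highly cited ones.
n_glam, n_other = 1_000, 99_000        # papers per year (hypothetical)
p_hi_glam, p_hi_other = 0.20, 0.001    # hypothetical P(highly cited)

hi_glam = n_glam * p_hi_glam           # 200 highly cited glam papers
hi_other = n_other * p_hi_other        # 99 highly cited non-glam papers

# "glam mag ==> highly cited" fails for 80% of glam papers...
print(hi_glam / n_glam)                # 0.20
# ...yet most highly cited papers are still glam.
print(hi_glam / (hi_glam + hi_other))  # ~0.67
```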

  • Ass(isstant) Prof says:

    I have mixed feelings. It wasn't required, but I thought about adding citation counts to my tenure package. Then I thought better of it when I found that some of the oldsters on the committee (even those with long funding records) had highly cited papers in only a few cases. I've had a couple of instances where my society-level pubs either explained someone's results in a semi-glamourous journal, or even showed the same thing (several years earlier), and weren't cited.

    It makes me think I need to be in glam mags to be cited, but my group really doesn't have the manpower for that.

    So, yeah. It is sort of like standardized testing of school kids: it measures something, but I'm not sure it measures teacher effectiveness. So citation counts measure something: the number of citations. Correlation with significance of the work will vary.

  • qaz says:

    Part of the problem with the GlamourMag situation is that people really are more likely to see, read, and cite papers published in GlamourMags. The problem isn't the fact that impact factor is journal-based rather than paper-based. The problem is that GlamourMags reach a larger audience, and that we are judging papers by the size of the audience, not by the quality of the science.

    jmz4gtu's boss (and shrew's collaborator) are not uncommon examples. I happen to know numerous cases where an author was forced to choose which papers to cite because of the reference limits in many (particularly Glam) journals. In these cases, they tended to choose the Glam pubs. (At least that's what they told me when I complained that they hadn't cited my non-Glam papers! 🙂) I've also noticed that my semi-Glam papers are often cited for things they are not really representative of, even in cases where I know the author knows of other papers of mine that are less Glam but more appropriate cites.

    The Glam problem is due to sloppiness in the treatment of the scientific literature ("literachaw"), the limited time and ability most people have for reading outside their field (the death of real education?), and the need to judge fields outside of our own.

  • Dusanbe says:

    A lot of human knowledge has come from small-town grocer types who were among the handful of people in the world working on a particular topic. I for one am glad we know as much as we do about ferns or dinoflagellates or feldspar. Sadly, it's tough enough to make a career these days studying such things, even without letting the popularity contests of altmetrics and post-pub review carry the day.

    It might make sense to use citation metrics if authors frequently cited works that had nothing to do with their own work, just for kicks, but that's obviously not what citations are for. You're going to cite papers that deal with what you are studying, and if you work on fruit flies, odds are it'll be a fruit fly paper and not a fern paper.

    Bottom line, science shouldn't be judged by how relevant it is to other scientists' work. Science should be judged based on its contribution to human knowledge of the natural world. That we have to remind ourselves of the true purpose of science is mindboggling.

  • "Cite the one that was published first"

    Alfred Russel Wallace?

  • Newbie PI says:

    I don't know where y'all are publishing, but I haven't encountered a journal with a citation limit that actually affected me in at least 10 years. I cite everything I can. It makes reviewers happy to see that I cited their papers, and then when it's published (if they have alerts set up like I do) all the relevant groups will get a notification saying my new paper cited them. I've been in a situation where I was scooped and we scrambled to get our paper out a few months later. I absolutely hate it when we don't get cited along with the first paper. Our methods and experiments were completely different and yet the conclusion was the same. Cite both!

    P.S. The only time I have encountered a reference limit was in writing a commentary for a fancy journal. But, the editor ultimately allowed extra references, so it was no big deal.

  • AAA says:

    I hate to say it, but I also cite the glam paper if I am forced to choose. The reason (and it might be naive) is that if your paper cites a bunch of glam papers as the foundation of your work, that gives an implicit justification that your work is "valuable".

    If you are a busy reviewer who does not have much expertise in the field, and you review two papers with comparable results, but one cites a list of glam papers as the foundation for its work while the other cites papers from journals and people you have never heard of, which one do you think has more "impact"?

  • Christina Pikas says:

    Whose citation numbers? Web of Science? Scopus? Google Scholar? Normalized by discipline? With fractional counting? Just a disembodied number?
    I think someone above alluded to clinical relevance/utility vs. research. What if a paper had dramatic impact on patient care but was never cited?
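
Christina's point that a bare citation count is underspecified can be made concrete. Here is a sketch of two of the adjustments she names, field normalization and fractional counting, with made-up numbers; neither formula is claimed to match any particular database's implementation:

```python
# Field normalization: divide raw cites by the field's average, so a
# modest count in a sparsely citing field can outrank a bigger one.
def field_normalized(cites, field_mean_cites):
    return cites / field_mean_cites

print(field_normalized(20, 5.0))   # 4.0 in a low-citation field
print(field_normalized(40, 25.0))  # 1.6 in a high-citation field

# Fractional counting: split the credit across a paper's n authors.
def fractional_credit(cites, n_authors):
    return cites / n_authors

print(fractional_credit(40, 8))    # 5.0 cites of credit per author
```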

  • Dusanbe says:

    Nature has a hard limit of 30 references in the main text of a letter format paper. This is non-negotiable.

    However, you can cite an unlimited number of references in the supplement, and these are counted by Google Scholar at least.

  • MoBio says:

    @Christina:

    "What if a paper had dramatic impact on patient care but was never cited?'

    Am not aware of such a situation--at least in the past 20 years or so--do you have any examples that come to mind?

  • dsks says:

    "I hate to say it but I also cite the glam paper if I am forced to choose."

    If in doubt, cite the paper that did it with optogenetics. Or cryo-em in conjunction with super wow computational structural resolvifying pazzazio. A paper with a corresponding author who has given a TED talk trumps both, though. Just my 2 cents.

  • It might be a fun experiment to have a CV where citation counts are required and journal names are excluded.

  • Christina Pikas says:

    (hmm why aren't these comments threaded?)
    @MoBio
    Maybe "dramatic" and "never" are exaggerations, but the team that was studying usage-related metrics did find that in certain fields, like nursing, there are many more practitioners using the literature, and so the usage-based metrics didn't correlate well with the citation-based metrics.

  • Geo says:

    I've always listed citation numbers for each of my papers on my CV. It seems fairly routine. Why would someone not do this?

  • drugmonkey says:

    Because it is uncommon in one's field or country?

  • MoBio says:

    @GEO

    Here in the US it seems fairly rare to see citation counts on a CV or NIH biosketch. When I see non-US (Canadian, UK, EU) CVs, these metrics seem common enough, though I admit I ignore them.

    Here in the US, as I think I may have posted before, I've never seen anyone swayed one way or the other by citation counts listed on a CV or NIH bio when deciding whether the person should get tenure/promotion or their grant funded.

    Caveat: Occasionally when asked to write a letter for someone's promotion/tenure in the US if it looks like they may not be a 'slam dunk' I will emphasize that their work is 'highly cited' and give a few example pubs with the Google Scholar counts.

    Don't know if this makes any difference but at least I feel better.

  • drugmonkey says:

    I think you underline the subjective nature of CV/person review MoBio. Stats of any kind are not going to argue a naysayer into seeing how awesome you are but they may provide grist for your advocate. Since this isn't going to convince entrenched opposition, this is sprinkles on the frosting, at best.

  • drugmonkey says:

    It is weird, I will admit, that citation counts would be mostly viewed as "why didn't you get this into Cell if it is so awesome?", and never as "why was this in Nature if it only has 15 citations?".

  • Ola says:

    People won't admit to it, but many don't actually read the whole paper when citing. Rather, they're busy trimming up a paper for submission and they need a citation to support some random fact they remember from long ago. So they fire up PubMed, and the one that gets the cite is the one with the key piece of information in the ABSTRACT. Even if the reviewer doesn't have access to the obscure journal it's in, if the cited point is in the abstract then it's generally not a problem during review because everyone can use PubMed.

    Ergo write better abstracts if you want all teh citez.

  • MoBio says:

    @DM:

    "Since this isn't going to convince entrenched opposition, this is sprinkles on the frosting, at best."

    Always the optimist!

  • drugmonkey says:

    I think it is best if people realize how a-scientific the business of judging science really is, in reality.

  • AcademicLurker says:

    "why didn't you get this into Cell if it is so awesome?"

    I'd find it hard to resist throwing a chair at anyone who said something like this during a grant review meeting. Have you actually heard this?

  • shrew says:

    Academic Lurker: I heard a very nice version of this during a TT interview! I didn't throw any chairs because it was intended as a compliment to the actual paper, however depressing that may be.

  • shrew says:

    For clarity: the paper in question was in a very nice society journal; I didn't just send it to Brain Research and call it an afternoon. I explained that it had been shopped around to many a higher-impact publication and described the editors' (bullshit) reasons for rejection. This satisfied my interviewer and we went on to have a very interesting conversation about the actual science. But it chipped away at my well-being nonetheless.

  • Dusanbe says:

    "why didn't you get this into Cell if it is so awesome?"

    "I'd find it hard to resist throwing a chair at anyone who said something like this during a grant review meeting. Have you actually heard this?"

    I think there are two ways to read that, and only one right way if we consider the context: the paper had great metrics and was highly cited.

    As Shrew says, in that context, it would be a compliment. "Wow, this paper has been cited so many times, why didn't you shoot for Cell? It's obviously a great paper and you could've had a shot!".

    The other reading would be the typical glam humper asking how any Society Journal paper could be that influential given that it's not in C/N/S. I agree that usually warrants a chair thrown in that general direction.

  • The Other Dave says:

    Well, you could learn in detail the work of everyone in your field. Then you wouldn't need to even look at their CVs.

    But that's too much work. A CV is a nice summary of their accomplishments, right?

    You could read all the papers on their CV (or at least all the recent ones) and decide for yourself whether they were good or not.

    But that's too much work. The journal titles are a reasonable indicator of quality of the publications, right?

    You could familiarize yourself with those journals to get a good idea of the things they publish.

    But that's too much work. The impact factor is a reasonable measure, right?

    You could look at the impact factor of each paper on the CV.

    But that's too much work. The H-index is reasonable, right?

    You could compare H-indices of people in your field.

    But that's too much work. Better just to throw your respect toward anyone whose name sounds familiar, or who you remember drinking with at the last meeting.

    ...and that's how we got to where we are today.
