Citation practices

A simple question for my Readers about their behavior when drafting a manuscript or grant proposal or creating a scientific presentation. How often do you consider, in any way, the identity of the journal in which a finding was published when making your choice?

57 responses so far

  • NatC says:

A little - but I pay a lot more attention to what journals publications are in (and how recent they are) when I am writing responses to reviewers who question the validity of a technique/analysis/interpretation.

  • Dan says:

    I'd say the converse is a stronger consideration: if a journal has been cited several times in the initial draft then it becomes a more obvious place to submit the manuscript.

  • GMP says:

    Did you mean: does the journal in which a paper appeared make a difference when I am making a choice whether to cite it or not? Short answer: generally no.

    I would say there is a wide range of journals considered respectable [ranging from solid (in my field IF greater than 3 or 4) to GlamourMag], and if a paper is in any of those journals I usually don't think twice about citing. If a paper has appeared in a lower-tier journal or a journal/book hard to come by electronically, I must admit I will look for an alternative reference. If the work is good and not too old, usually you can find another paper by the same group, probably even more complete, in a better journal.

However, when I am pressed for space (some of the letter-format journals in my field cap the paper length at 3 or 4 pages), I will restrict the number of papers I cite by a given group or on similar topics and, yes, probably pick the one, among several, that is best known/accepted. Usually, that will be the paper with the most citations, and it will often be in the flashiest journals.

  • Depends. If the paper is from people I've never heard of and the findings were published in the International Journal of Crud, I wouldn't give it much airtime. But, then again, I know of several very mediocre studies by Prof Big Wig that have been published in Very Good Journal simply because the PI knows the editor. Unfortunately, it is very likely that Prof Big Wig would be reviewing my manuscript or grant so I would have to cite her paper regardless of how crappy it is.

  • Phillip Bost says:

I don't consider the journal. I do, however, consider the number of citations a paper has already garnered, especially if it's older.
If it was published in 2001 and has only two citations, then it's probably not the best citation source.
    I use Web of Science for this chore.

  • I generally don't consider the source at all. If I have a choice of several papers, and have a space limitation (so I can't cite them all), I would pick the one that I find most relevant. The one exception is if one of my choices is from the journal I am targeting with my manuscript, and then I pick that one.

  • pablito says:

I'm pretty generous with citing papers if space is not limited, but give weight to Glamour mag cites if space is tight (like in a Glamour mag). Glamour mag cites are also preferred if I'm trying to make the point that an area of research is topical and important. I also (over)cite papers from people I think may be a reviewer. And papers with titles that state the point I'm trying to make get bonus points; this is one reason titles are important.

  • Namnezia says:

    I usually go on how much I trust the results from the lab in question, not where the paper was published.

  • DrugMonkey says:

    Honestly? Am I the only dumbass sap in the entire world who cites what I feel is the best demonstration of the point at hand?

  • gnuma says:

I don't consider the journal -- generally I cite the first demonstration of a point.

  • Anonymous says:

    No, you are not.

People in my subfield (psycho/neuro-linguistics) care a great deal about primacy, so citing the first person to make a point is a given. But when adding additional references to bolster a point, I pick (a) the most informative papers, (b) the most-cited papers, the ones people will expect to see and will think you're just ignorant for omitting, and (c) papers by anyone I think might be insulted if they review the manuscript and aren't cited. I don't consider the journal very much. But I should also say that the tradition is generally to be complete and cite everything, so it ends up not mattering a lot.

    Trying to restrict to glamour mags would be largely a waste of time, though, since my subfield -- and even my field in general -- doesn't publish in them much, for various reasons.

  • Namnezia says:

Right, but you implied the question was: all else being equal, would you cite the GlamourMag or the other journal? Obviously, if one article is a better demonstration of the point, then you cite that.

  • GMP says:

    🙂 I think DM was setting us up a little bit here.

  • pinus says:

wow... people pick what papers to cite based on how many citations they have? Seriously? By that logic, once a paper gets enough citations, it becomes an unending loop of building citations?

    I pick based on what I am saying. For reviews and other general points that don't quite require a primary citation, I cite somebody who I think will review it.

  • becca says:

    I consider it a moral obligation to at least *consider* providing a citation that is open access for any point I consider truly critical to my argument. Particularly because I work on a disease that mostly poor people get.

  • Namnezia says:

Becca - would it be acceptable to say in your acknowledgements something like: "If you are unable to access any of the references cited, please contact the author of this manuscript"? Basically implying that you could send them a reprint obtained through your library without blatantly violating copyright. Or would that itself violate copyright laws?

  • GMP says:

Namnezia -- I think you are always entitled to send a copy to anyone for "purposes of criticism and/or scholarship". That falls under so-called fair use and does not violate copyright laws. And you can almost always post a preprint of your work on your website (typically, with proper citation and a link to the publisher's site).

  • becca says:

    Namnezia- interesting question. IANAL, but I'm guessing actually sending them a reprint obtained through my library would be violating copyright laws. Which I am undecided about the morality of.

  • DrugMonkey says:

Yeah, pinus, agreed, the circularity is weird. Citing the most-cited is also a recipe for misciting, if you ask me.

  • DrugMonkey says:

    "trust"? That's interesting. What fraction of labs in your area do you generally mistrust?

  • DrugMonkey says:

    "implied"? "setting us up"?

    That's on you, my friends...

  • Jen says:

    When I published the paper from my graduate work, I could count on one hand how many papers dealt with the protein I was characterizing. It would have been very odd for me to leave any of them out of my paper, so I cited all of them. Now as a postdoc, I'm working in a new model system for which there are relatively few papers published, and they are all clustered in the same few journals. For my grant proposal, I did bias my citations towards more prestigious publications in an effort to establish the validity of my chosen model system/experimental approach.

  • qaz says:

    DM writes: “trust”? That’s interesting. What fraction of labs in your area do you generally mistrust?

    About 5%.

    And no, I won't say who they are publicly.

    In terms of citations: When I have a citation limit, I always cite the first paper for primacy and at least one more that is the best demonstration of my point or that is the most thorough review of it. In a few cases, there were two labs that got the same result at about the same time. In those cases, I try to make sure to cite both labs, including the one who got it into the RealJournal a few months after the one that got into the GlamourMag. When I don't have a citation limit, I make sure to get the first one [or two, see above] (for primacy), but then I go crazy with citations (as long as they demonstrate my point). I don't know anyone who has ever complained that they were included in a citation list, while I know lots of people who have complained that they weren't.

  • Dr. O says:

    I don't usually consider the journal, but if it's a highly-cited field and I have limited space, I'd probably go with a higher-impact paper with seminal findings, as well as articles that speak specifically to the results/problem I'm discussing. If it's a sparsely published topic, then I'd likely list everything out there, including papers from lower-impact journals, to provide a brief review.

  • Neuro-conservative says:

Can somebody please explain why -- in the age of digital publishing -- there are limits on the number of citations you can have? I think accuracy and scholarship would improve if papers were more comprehensive in their approach to citations.

  • ginger says:

    I cite whatever comes closest to whatever it is I'm trying to describe. But if it comes down to choosing between a journal I know does peer review and one that doesn't, or a really crap journal (based on how often I flinch when I read the ToC/abstracts, not on IF) versus a not-so-crap one, I might pick the better journal, all other things being equal. But all other things aren't equal - most people who know how to design, analyse and write up research flinch when they read shitty journals, too, and don't submit there unless they have a real problem child of a paper. And most sensible people prefer to have their work peer-reviewed, if only because - reviewer 3 excepted - rigorous peer review improves papers.

  • Phillip Bost says:

Circularity is weird? Maybe if you're already an expert in your field, writing reviews, etc. As a recent Master's graduate, I think it's both a time-saver and a legitimate practice. Sure, there are times when I think it's appropriate to skip this shortcut, but I don't know enough about my new field to do it all the time.
I think citation nepotism is far more troubling than citing established papers.

  • DrugMonkey says:

If by citation nepotism you mean preferentially choosing your tight science homies' papers, I'm not really seeing where that is any more objectionable than the practices described in this comment thread.

  • Neuro-conservative says:

    Citation nepotism, citation "glamorism," and citation circularity are all problems that would be mooted by my suggestion above. I will add another: citation selectivism -- selectively citing those articles that support the hypothesis but omitting citations to key failures to replicate or other contrary evidence.

  • DrugMonkey says:

    What ever happened to scholarship, anyway?

  • grumpy says:

    DM, word. I also try to cite whatever I think is most relevant (with priority going to primacy).

To be fair, this usually means I end up citing the papers I know best, so if you did a statistical analysis on my citations you'd probably come up with plenty of favoritism-type biases, but none of these are intentional.

My guess is that my most common (subconscious) biases are, in order:
1. stuff written in English
2. self-citation
3. citation of the "famous" groups/papers
4. my buddies
5. highly cited papers (from following a citation trail)
6. journal IF

  • DrugMonkey says:

    Self-citation is a bias I wholeheartedly endorse, btw.

I've sometimes seen uncited papers from fairly productive authors and wondered how they never managed to cite them in a subsequent paper. Crazy.

    /stupid Web of Science tricks

  • Christina Pikas says:

    You can't just send something you got from your library to someone else (legally). You might not even be able to send a copy of your own work if you signed away those rights.

  • Christina Pikas says:

Hence Merton's "Matthew Effect": the rich get richer and the poor get poorer. Even more so with link- and popularity-based search tools like Google Scholar.

  • Christina Pikas says:

People tend to cite the same papers over and over - once they find a good source of the information, why change? (If you get on someone prolific's favorites list, you're golden.) So one recent article looking at impact counted the number of citers rather than citations - and it told a different story, really.

  • When considering a manuscript for citation, I look at the science, not the journal. I have never, and never will, look at an article and say "Oh, published in Nature, now I need to work it into my citation list."

I use EndNote, and keep my libraries sorted by discipline, so if I cite a paper in the introduction of one paper on a topic, chances are, if it's a fairly basic principle, I'll cite it again in the next paper. I also try to go to the "source", which means in my field I can wind up citing papers from the '50s and '60s. I keep those to a minimum, but I think it's refreshing to point out where a whole line of thought and research began. For common principles and knowledge, though, I try to cite current review articles that do a good job of condensing all the relevant information and showing how it ties into the more recent literature.

    When I'm in the M&M section, I tend to cite my own work if it's a material or method I've described before. It'd seem silly not to do so. So like DM, I wholeheartedly endorse self-citation. If I can't even be bothered to cite my own work, why should I expect anyone else to?

    In the R&D sections, I tend to cite the articles which most similarly reflect the results I've garnered myself. If I'm studying genes A, B and C in environment Z, and some researcher in non-glamor-mag studied those same genes in environment Y, and they line up with mine ... damn skippy I'll be citing them!

  • DrugMonkey says:

I have at least one paper that I am convinced gets cited mostly because it comes up first on PubMed. I.e., it gets cited for the general aspects (being the most recent example) rather than for our specific reasons for the study.

  • DrugMonkey says:

    I probably do both the good and bad version of what Christina mentions. Good: it really is the best paper. Bad: the 3rd-5th cites for a point that could easily be replaced with some other papers. Spread the love kinda rationale...beat out by laziness. Perhaps I will take this thought to heart next time I am writing...

I absolutely consider the journal, especially when I do not have adequate time to do all of the fact-checking myself for every reference in my 200-reference review article.

    I generally like to provide a reference for the first demonstration of a point and then a subsequent well-done demonstration if available. In my field, the former is almost always in a GlamourMag, the latter can go either way.

  • ... I do not have adequate time to do all of the fact-checking myself for every reference in my ...

    Say what? You include references into an article without fact-checking them yourself? I can understand going through the abstract, skimming the R&D, and reading the conclusions ... but if you can't be bothered to read the damn article, why are you citing it?!

  • Yah ... I'm following up my own reply. If by fact-checking you mean subsequent articles which support and/or refute said article ... IMNSHO, if it's important enough to cite, then it's important enough to know the history behind the article and how it was received IN THE FIELD YOU'RE WRITING A REVIEW ARTICLE IN. If you don't know the history behind the literature in said field, let someone else write the review article.

  • Glad you could get your panties in a bunch. Way to jump to a whole lot of conclusions.

I am not a biologist. When I cite biological articles, for example, on the demonstration that protein X is the ZOMG receptor for ligand Y, I am in absolutely no position to know how robust the biological science was behind said finding. I can read the entire article, sure, but my tiny engineer brain cannot, given the finite amount of time in life, comprehend whether the 52 controls done on the system are sufficient to reasonably support the conclusions.

    Thus, I am more inclined to trust GlamourMag journals with these sorts of things. The standard for the science, controls, etc, is on-average higher. For lack of any better option, I trust that the reviewers for these journals have done a lot of the fact-checking that I am not capable of.

  • Sounds like perhaps you should ask someone with biological know-how how well accepted those citations are. Perhaps even ask them to write up the relevant biological sections and give them co-authorship in said review (that seems to be the much better option).

You are assuming that things in GlamourMags are done with more rigor, and while in some cases that may be true, I don't think it holds true in every instance. You're still rolling the dice, citing things you've barely delved into.

Dude, a citation is not a marriage proposal. It is an acknowledgement that someone did something that contributed to what we know about science. Your attitude that one must 100% understand the science in a manuscript, as well as be aware of all subsequent responses to the paper, etc., in order to simply cite it is ludicrous and completely out of touch with reality. The suggestion that I need to ask a biologist to write my biological text for me is equally ludicrous!

    You write your review paper, and I'll write mine.

  • Re: Candid Engineer: Your attitude that one must 100% understand the science in a manuscript as well as be aware of all subsequent responses to the paper, etc in order to simply cite it is ludicrous and completely out of touch with reality.

    Yah, I didn't say that. What I said is that if you're in a field you should at least know the relevant literature, especially if you're writing a review on said literature. Cold-citing an article? Piss poor.

    You write your review paper, and I’ll write mine.

    Let me know what review paper it is please, so I can avoid it like the plague.

  • DrugMonkey says:

Candid, you have the relationship between rigor and Glamour 100% assbackwards. The Glamour approach is to jam so many different things together that nobody notices the individual experiments suck. Or maybe it is just that the behavior is always crap... and those retractions for the other stuff are a complete coincidence.

  • Doesn't appear to be possible to embed anymore, but I'm with TJ here. Seriously, if you don't understand the research you are citing, you're writing the wrong article. Put it this way: why should anyone read your article, when they're perfectly capable of being ignorant all on their lonesome?

  • In my field (and really, here, I can only speak for my field)- the most novel, the most universally relevant, and the most robust science is published in the Glamour Magz. These articles are *almost* always good reference material.

    I understand the drawbacks to these journals- that the flashiness sometimes can compromise the science. But in my field, this is typically not the case. It usually is the best science with the best, most rigorous controls (even if there are plenty of unnecessary controls jam-packed in there, too). Do you guys really have such bad experiences with the science in these journals? Perhaps it varies from discipline to discipline, I don't know.

@gameswithwords - I think there is real value in writing a review article when you are not an expert in the field, as long as you are willing to do due diligence in investigating the science/background to the best of your ability. Six months into my postdoc, which I am doing in a field different from that of my Ph.D. work, my supervisor asked me to write a review for a high-profile journal.

    Having recently come into the field, I had a unique perspective. There were lots of things that confused me about the field that were not well-addressed in available review articles. There were a lot of questions that I felt I could address while writing my review article that might help to bring others a clarity that I did not have while coming into the field. I wrote the review article that I wished I had had when starting my postdoc.

A year and a half after publication, my review is the second most-cited review in the field. I have received many, many compliments on it, from newbies as well as senior investigators.

    I'm glad I didn't say "no" when my supervisor asked me to write the manuscript simply because I was not a true expert in the field. There is more to writing a review article than expertise. Sometimes a fresh perspective and an ability to distill out the real meat of a field in an accessible way is what you really need to do a good job. Because even a review article with the most ZOMG!!11!! correct references evah!!11!! is worthless if no one can understand it.

  • Oh, and DM, per usual Physioprof is kind of right. This whole comment nesting thing is a royal pain in the ass for extended conversations.

  • Candid Engineer: ... as long as you are willing to do due diligence in investigating the science/background to the best of your ability ...

    Nice to see you've backtracked on your previous "... do not have adequate time to do all of the fact-checking myself ..." comment.

    There are many reasons to write review articles, and one such instance is someone coming into the field and doing an exhaustive literature search and turning that effort into a review article. I have no problem with that. For the writer, it allows them to read the literature and take that time and effort and produce something readily accessible to others. Of course, that means you've become familiar with the literature, which was what my argument was all about.

    So, it's nice to know that you agree with me ... writing a review should mean that the writer does due diligence, ensuring that the story told is told properly. That actually means knowing the subject one is talking about, including the relevant reports in the field and how they're integral (or not) to advancements in said field.

  • TJ, you are being argumentative.

    "Nice to see you’ve backtracked on your previous “… do not have adequate time to do all of the fact-checking myself …” comment."

    I have not backtracked. There is a difference between doing all, i.e. exhaustive, fact-checking and investigating something to the best of your ability.

    In my case, when writing a review in a very short window of time (I had to do it in under a week), I did the work to the best of my ability, but I was not able to exhaustively fact-check. I have somehow managed to live with myself.

  • DrugMonkey says:

    the most novel, the most universally relevant, and the most robust science is published in the Glamour Magz.

    "most novel"? perhaps. But who cares? novelty means it may be pretty far wrong...until we get scientific replication and extension it is not much better than a good idea.

    "universally relevant". wtf is that supposed to mean? basic biological processes? so what? why should popularity and application across fields mean squat? to me all that means is that it is all that much further away from anything meaningful for public health. yes, we need basic research and findings but breadth is no measure of importance. That's gutter democracy voting for what is the "best" science. bullshit.

    "robust". yeah, not so much in any field I read in. The structural limits in terms of how much can actually go into the pubs make me very seriously doubt your assertion. Perhaps we differ on what it means to be "robust".

    Do you guys really have such bad experiences with the science in these journals?

Yes. I have yet to see a paper, passed along by a friend in a different field for help understanding the behavioral studies, where the behavior would pass muster in a low-IF journal that publishes such models with frequency. And we're not talking esoteric, navel-inspecting minutiae either; we're talking basic shit-not-making-logical-sense stuff. In my own papers I have plenty of examples where the supposedly higher journal's review ignored or gave a pass to methodological issues that lower-IF journals would have (or had, in some cases) worked me over for.

So when I see some paper being retracted or "corrected" for that duplicate band that was "oh, it was just a placeholder" inserted, or, hell, just see the cut-and-paste jobs done with the supposed control lanes/conditions or the mysterious lack of error bars and stats... the list goes on. It is all highly consistent with my viewpoint that the GlamourMag style jams so many disparate technical elements into a paper that the reviewers are really not competent to review the actual methods. They get caught up in the "oooh, shiny!" of the whole package and forget that it is built on at least one unsteady leg. This is, of course, aided and abetted by people who think that just because there is a citation to a method in another GlamourMag, all must be kewlio.

    Medium IF journal papers may be boring, pedestrian, brick building kinds of things but they are not, to anywhere near the same extent, such disasters from a standpoint of presenting solid scientific evidence.

The utter prioritization and self-congratulation of GlamourMag approaches are destroying science.

  • I have not backtracked. There is a difference between doing all, i.e. exhaustive, fact-checking and investigating something to the best of your ability.

    Just curious but ... where does due diligence come into play in all of this?
