GlamourMag Science: Homey don' play that!

Apr 11 2013 Published by under #FWDAOTI, Academics

Since I know many of my readers are comparative children who may have missed the legendary sketch comedy show....

Now.

There's some Twittage today about the Glamour Science situation and what we (meaning the relatively established professoriat) are doing to back up our fine criticisms. Particularly in the face of younger and transitioning scientists who realize that they need to play the GlamourChase game as hard as they can if they expect to make it.

Personally, I don't think we need some overt revolution of radical shunning of anything having to do with high Impact Factor journals to have a substantial effect. Refusing to play the game has its advantages. I ran off a couple of quick Twitts having to do with choices we can make.

First, never let data go unpublished for lack of impact.
To me the absolutely most corrosive part of GlamourIdiot science is that lots and lots of perfectly fine data go unpublished. Forever. This is for several reasons, including the fact that at least 5 person-years of work go into the CNS paper and, even with ridiculous amounts of Supplementary Figures, only a fraction gets into press. There's a lot of dross that nobody wants to see, sure, but there's also a lot of stuff that would help other people out. Save them some blind alleys if nothing else. (Did we mention this is being done on the federal taxpayer dime? And that grant dollars are scarce? Wouldn't the NIH want most of the work they paid for made available...?)

Then there's the scoopage factor: if someone else gets there first it automatically downgrades your work... so the GlamourDouche lab goes in another direction to try to salvage another high-profile publication. So there's another bunch of figures trashed. Figures that, save for the scooping, would have been in the same damn high IF journal! Jesus, this is INSANE, right? Yeah, well, welcome to GlamourScience.

Then we have projects that just aren't cool enough in terms of the result. Some PIs simply won't let their labs publish them for fear of diminishing the aggregate lab JIF level. Again... crazy, right? Why the hell does a PI with 5 CNS papers a year give a flying fig if a postdoc sneaks out an IF 5 paper?

There's an instructional part here for postdocs: some of this lack of publication is your own damn fault. Yes, you who have drunk the FlavorAde participate in this too. Why? Because you don't force the PI to see sense. For one thing, let me tell you, the hard-hearted PI's heart tends to soften when an essentially ready-to-submit manuscript crosses her desk with a clear rationale for why it is okay (and necessary) to publish the data and why this particular journal is perfect, save for the IF. Don't be afraid to play on her scoop fears now...
"We gotta get this in somewhere, I hear Postdoc Lin has her story ready to go in our competitor lab!". Some mentors will be susceptible to the "I need X first author pubs to get a shot at a job and I already have the two CNS papers so...." argument.

Second, never ever decide what to cite based on JIF.
Ever. It's hard. I know. You are steeped in turning first to the big papers in high-reputation, single-word-title journals. This is unnecessary, you know. Cite the right paper that makes the right point for which you are citing it.

Third, if you can't cite first/best/recent...go with best over first
I tend to, all else equal, go with a citation strategy that pays homage to the first paper for a given point, then the best one, and then maybe a recent one to show the continuation of the theme, topicality, etc. The best is rarely ever the GlamourMag one, although when you get down to the sub-10 IF level in my fields then you might see a bit of a correlation. The first observation, especially if it is coolio stuff, tends to have been in a Glamour Mag, which is why I make the point. But hey, if it isn't, cite the first one. Give some cred to the overlooked person who published a finding 10 years before some big lab jumped all over it.

Fourth- review manuscripts on your principles. Get your peers into high IF journals
You know what they want to hear, those GlamourEditors. Impact, importance and eleventy-six kinds of pizazz. Write your reviews accordingly to get your peers' solid, if not really Glamourous, stuff into those journals. Destabilize the system from within. Just be subtle about it or the Associate Editors will no longer send you stuff to review.

36 responses so far

  • Beaker says:

    Citing the first big discovery works at first, but if the lab continues to work in the same field, why not reference their current, more state-of-the-art science rather than their past glories, which will always be cited in the more recent paper? Citing the best is easy if the best study appeared after the first one. You cite both.

    If a different (small grocer splitter-type) lab makes a similar discovery (published in a sub-4 impact journal), often with priority, it will be ignored.

    If a different investigator has a starkly opposing view, their paper will get cited more as a show of balance.

  • GM says:

    I am observing the devastation that the obsession with CNS publications can wreak firsthand at the moment (not in my lab, but someone I know): perfectly good data demonstrating something that everyone has thought to be true but that has never been directly shown, which makes it too boring to be published in a glamour mag since nothing surprising or major came out of it beyond that. And for that reason it will be a big fight between trainees and PI to get it published at all, even in a not-so-glamour journal. I simply can't understand that...

  • mrs.estepp says:

    One of my projects my PI said, this could be CNS. I froze. Something in my mind says you'll never reach those heights again and you'll have peaked so early in your career. Maybe I can't handle the pressure in general.....

    As far as citing an article in regards to JIF, I would never do that. It all depends on which lab showed the evidence first. If it's been extensively reviewed, I cite both (first shown by XXX and extensively reviewed by XXY).

  • namnezia says:

    Publishing in fancy journals does not preclude also publishing in not-so-fancy ones. I'm not sure why you think these are mutually exclusive.

    As far as your last point, I never take impact into account when reviewing for fancy journals and in fact don't review any differently than for other journals. This might explain why AE's stopped sending me papers for review...

  • Dave says:

    One of my projects my PI said, this could be CNS.

    If I had a penny for every time I had heard that........

  • AcademicLurker says:

    The idea of considering IF when deciding which papers to cite is just bizarre. Do people actually do this?

  • mikka says:

    Another thing that could help is to remove limits on citation numbers. This is an inheritance from the days of print journals, and should no longer apply. When faced with such limits, people tend to keep only (1) review references (because they allow you to say things like "Foobar et al. and references therein," compressing several citations into one, which also explains the ridiculous IFs review journals have), and (2) CNS references, in the probably well-founded belief that the reviewers will see them and be impressed into thinking that the subject matter of the paper is very relevant.

    Remove limits and those skews should at least be alleviated. Scientific papers already look like David Foster Wallace essays anyway.

    Also, do you realize that we, and the publishing houses (Macmillan, Elsevier), are making decisions based on a statistic that is horribly flawed? An arithmetic mean for a distribution that looks nothing like a normal curve? It's unbelievable that the same people who demand strict statistical methods to come to conclusions will blindly follow such an absolutely bollocks number.
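    mikka's statistical point is easy to demonstrate with a toy simulation (invented numbers, not real journal data): citation counts are heavy-tailed, so a JIF-style arithmetic mean lands far above what the typical paper in the journal actually receives.

    ```python
    import random

    random.seed(1)

    # Toy citation distribution for a hypothetical journal: most papers get
    # few citations, a handful get very many (heavy right tail, Pareto-like).
    citations = [int(random.paretovariate(1.5) * 2) for _ in range(1000)]

    mean = sum(citations) / len(citations)           # what a JIF-style average reports
    median = sorted(citations)[len(citations) // 2]  # what the typical paper gets

    # The handful of blockbusters drags the mean well above the median.
    print(f"mean (JIF-like): {mean:.1f}, median (typical paper): {median}")
    ```

    The gap between the two numbers is the whole problem: the mean advertises the blockbusters, while the median describes the paper you are actually likely to be citing.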

  • Chemists tend to be more impressed with a nice JACS or two-- a "general audience, society run flagship journal" for the field will get you a pretty nice job as an industry scientist.

    As my analytical chemistry professor said, "With Nature and Science, always wait three months for the retraction/follow up article".

    General public mags like Nature certainly have a place, but I was taught that you should spin the idea around in a society journal before doing the broad audience ones.

  • DrugMonkey says:

    Namnezia-
    I suggest phrasing the review the way that Editor wants to hear it. Review is supposed to be a communication first and foremost to the handling Editor(s). You are advising them, not the authors. This does not mean that you align your judgement with the Evil Forces of Glamour. Just that you talk their language.

    AL- yes, lots of people cite by JIF....explicitly and implicitly. The latter is a consequence of *reading* the literature by JIF. We had a convo about Journal Club selections awhile back that related to this....

  • AcademicLurker says:

    We had a convo about Journal Club selections awhile back that related to this....

    By far the dullest journal club at my old institution was the one that had a formal Science-or-Nature-only policy.

  • Dee says:

    Aren't scientists the ones who actually encourage this attitude by 1. not wanting to publish in non-Glamour Mags, 2. citing glamour mags, which in turn raises their impact factor and continues the vicious cycle, and 3. sitting on the review panels of all journals, thereby directly controlling what gets published to begin with?

  • gri says:

    Unless you are at University which requires for promotions to have at least one Nature like paper per year...

  • DrugMonkey says:

    Only a subset Dee.

    And where might that be gri? Sounds like a recipe for serial fraud to me.

  • dsks says:

    "Unless you are at University which requires for promotions to have at least one Nature like paper per year..."

    'Struth. That's just bloody stupid.

    In an ideal world, investigators would be evaluated based on the significance of the questions that they ask and their demonstrated ability to investigate the answers to those questions meaningfully and thoroughly. This is something they can control. The actual answer to these questions is unfortunately largely down to the whims of nature, and is something an investigator cannot, or at least should not, be able to control (unless through fraud).

    Certainly, there are a couple of questions for which any answer would be high impact, but for the vast majority of important questions there is only one or two potentially high impact answers along with a much greater number of potentially intermediate and low impact answers. It's a disastrous state of affairs that we encourage scientists to avoid good questions simply because the likelihood of a high impact answer is too low, but there it is.

  • zb says:

    "Publishing in fancy journals does not preclude also publishing in not so fancy ones. I'm not sure why you think this is exclusive of each other."

    Yes, and it used to be like that, in the time of, for example, Hubel & Wiesel. As a quick and dirty example, using a generational trajectory, Hubel, 16/103 pubs, 15%; Livingstone, 20/74 pubs, 27%; Tsao, 11/20, 55%.

    It's a quick & dirty analysis, so one could quibble (for example, one might have a higher ratio of glamor mags early, though that guess doesn't explain the differences in this example). I'd *like* glamor mags, if they were really publishing the "hot" (novel, exciting, general interest, groundbreaking) work of each lab. The problem is when glamor mag becomes glamor lab, with the expectation that all the work coming out of a particular lab is glamorous. There will be different ratios of glamor in different labs, because of who comes and who has access to resources, but if there's only glamor work, it's because some work is being suppressed, delayed, massaged to make it glamorous.

  • drugmonkey says:

    In an ideal world, investigators would be evaluated based on the significance of the questions that they ask
    wrong. you DO know this is academics, right? what is the significance of post-modern critical theory again? oh right. ditto the function of gertzin signalling in the Physio-whimple nucleus during bunny hopping.

    and their demonstrated ability to investigate the answers to those questions meaningfully and thoroughly.
    This, although "thoroughly" is decidedly old fashioned and varies tremendously in the eyes of the beholder.

  • DJMH says:

    Alright, in defense of using IF to pick cites, I am currently reading a portion of the literature that is pretty far afield from my standard areas. I am just trying to get a toe-hold of any sort. I don't even know what some of the terms being thrown around in these papers mean, and the experiments themselves (molecular genetic immuno developmental transgenic stuff) make my eyeballs hurt.

    So, how do I pick which papers to read? I use PubMed, but beyond looking at titles and abstracts, hell yeah I'm preferentially going to pick up the Cell paper rather than the J Neurobiology paper. I cannot read all of them, since this isn't ever going to be my field (I'm just reviewing some literature so I can write the "implications" discussion section), and since I can't read them all to decide which ones are useful, I'm going to read the ones that made the biggest splash at the time. Sue me.

  • drugmonkey says:

    I don't need to sue you, I can just think you are a shitty scientist*.

    Why are you even discussing findings or proposing "implications" on the basis of stuff you don't understand and can't be bothered to understand?

    __
    *ftr, peanut gallery, I do know who DJMH is and I have the opposite opinion. Quite a fine scientist indeed. (Just deluded on the Glamour issue and the complete-story issue, is all.)

  • drugmonkey says:

    zb- anyone can see this reality by going back to Science of the 70s and pulling up stuff related to their own field. The findings were brief reports, one or two figure affairs, for which you can then see a long cascading explosion of meatier, substantial work in the aftermath...including the "real" paper from the originating laboratory. None of this "supplementary materials" crap. That was expected to appear in the real paper in a real journal.

    We need to return to this style.

  • DJMH says:

    Eh, I understand well enough for these purposes. Besides, no one reads discussions anyhow, right?

    In any case, I still think that although glamor mags distort the publishing world, the fact remains that none of us can be expert in everything, and as such we need guideposts. And I'm not going to go digging around in a crap journal in the hopes of finding a panel of relevance, citation of which will make me feel morally superior to citation of the Nature paper that came out two years later and did the experiments more cleanly, comprehensively, and persuasively.

    Because really, although there's crap that gets retracted from Nature, a lot of the stuff they publish does move the field forward. Sorry if it hurts to hear it.

  • DrugMonkey says:

    Why would it "hurt" ?

  • gri says:

    Not saying it's justified; I think it is as stupid as the next guy does. But still: if the promotion committee is dominated by the "star" scientists who just need to put their name on a paper for it to fly at least into review at those journals, and you come before that committee with totally reasonable and very good papers for your own field (IF in the high single digits), they usually want to see their own standard met. Not speaking from experience, but out of fear, I guess.

  • zb says:

    "None of this "supplementary materials" crap. "

    But part of that change is online publishing. Two-page papers made sense in the old days, when it was actually paper. Now it really does make sense for people to have access to info that they couldn't get before (for example, a video clip of the actual stimuli used in the Hubel & Wiesel experiments).

    I'm not pro-glamor mag with cascading supplementary data & figures. On the other hand, I do think that we need to think harder about what publishing means in the new world where data can be presented in so many ways. I've been wading into data visualization sites recently. The one that sticks in my mind right now is the Slate visualization on gun deaths in the US, linked to original data sources. That "graph" is a powerful depiction of the data, one that further allows the user to consider their own hypotheses and to verify the accuracy of information in the summarized data set. I think we do need to move in that direction in science, while still keeping the value of peer review and scientific standards.

  • GM says:

    zb April 12, 2013 at 12:47 pm
    "None of this "supplementary materials" crap. "

    But part of that change is the online publishing. Two page papers made sense in the old days, when it was actually paper. Now, it really does make sense for people to have access to info that they didn't when it was nigh impossible (for example, a video clip of the actual stimuli used in the hubel & wiesel experiments).

    That is true, but what has also happened is that these days it is nigh impossible to write an honest paper that actually includes all the relevant details that people will need to know. Either that would make the paper "too technical" for the average reader, or revealing some technical problems with what you did would weaken your claims and make it more difficult for you to get past the reviewers. None of this makes for better scientific communication.

    One would have thought the expansion of the supplement would have mitigated those trends, but it hasn't; instead it has been a largely orthogonal development.

  • anonymous postdoc says:

    "That is true, but what has also happened is that these days it is nigh impossible to write an honest paper that actually included all the relevant details that people will need to know. Because that would make the paper "too technical" for the average reader, or revealing some technical problems about what you did will decrease the strength of your claims and make it more difficult for you to get past the reviewers. None of this makes for a better scientific communication."

    I just wanted to say that GM is describing my writing life right now. Truth. The thing that makes me crazy is every experiment in the world has technical problems, but we are all asked to pretend otherwise in the hopes of not awaking the sleeping dragon in the reviewer. This promotes dishonesty of all stripes.

    Regarding the original post, I was formerly the member of an entire subfield which has been largely excluded from glamour publishing, to the point that it became a bitching point at the subfield meeting several years ago. The thing is, it's not like the subfield sucks or is unimportant. I could prove my point, but that would reveal too much. But I knew if I wanted a shot I had to develop skills elsewhere, since my people were not going to be able to help me as much as I needed help in the new world order. I actually do still consider myself a member of this subfield but I am branching out like cray-cray.

    Basically, glamour mags dictate not only which labs get attention but which areas of research, full stop. If there is a heavy Nature focus on the neurobiology of bunny hopping, then the neurobiology of bunny nose-twitching only gets in to the extent it happens to impact hopping.

  • DrugMonkey says:

    Yes the categorical fadism that means some work will never be acceptable and other stuff will be more likely on technical grounds alone is a big part of the problem.

  • GM says:

    BTW, what I said in that post is not even the worst or most perverted effect the system has on scientific integrity. That honor goes to the "last experiment" that a reviewer requests in order for the paper to become sexy enough to be published in the glamour mag; that last experiment is usually added, with the expected result and with a p-value of 0.04, becomes a major claim of the paper and then, of course, cannot be reproduced. In effect, instead of protecting the integrity of the literature, reviewers have become an active force pushing it in the other direction, by trying to force authors to make claims that their data does not support in order to push up the "impact" of the work so that it can be published in the glamour journal.

    Of course, the authors are to blame too, because almost invariably they are aiming too high on the impact factor ladder. I never understood why anyone thinks it's a good idea to send something to a journal it has a 10% chance of getting into, spend countless hours writing it in such a way that that chance may go up to perhaps 15%, then waste months waiting for reviews, arguing back and forth with reviewers, doing additional experiments, etc. That enormous amount of time would have been much better spent accepting that not everything has to be in Nature, submitting it to the highest-IF journal that will take it with reasonable certainty and without too many disputes with reviewers, and then focusing on working on and writing the next paper.

  • qaz says:

    DJMH - If you are trying to learn a new field, go find the big reviews. This is one of the things that reviews are for. They put the field into perspective with definitions of confusing terms and the correct (hopefully) references. Reading the C/N/S GlamourLiterature as an attempted entry into a new field is a waste of time. The purpose of a C/N/S paper is very different from providing an introduction to a new field. Why would you start with C/N/S for a "toe-hold"? If you are at the "toe-hold" stage, go read reviews. Once you are ready to dive in, then start looking at the primary experimental literature. (And you'll be ready to be able to judge paper quality without having to fall back on GlamourFactor.)

  • The Other Dave says:

    Sorry DM, but your post reveals a terrible lack of understanding. When did you buy in to the whole hierarchy/IF thing?

    These are all just magazines. Don't send your manuscript on vacation tips for Barcelona to Model Railroader Monthly. Science and Nature publish fundamentally different things than 'specialized' journals. Science and Nature are trendy bathroom reading full of articles that will interest people outside the field. You could never learn a field by reading only those journals. In contrast, you could actually understand a field reading more specialized journals.

  • DJMH says:

    Qaz, I am reading reviews. And guess what they point me to? High profile papers. Pretty much all of the cites that are relevant for my purposes are Cell, Nature, Neuron, J neurosci. So, that's what I read. And I don't spend time angsting about whether I missed something in J fuckall.

  • The Other Dave says:

    DJMH...

    Have you ever written a review? In a review, you explain the way things are and maybe a few significant turning points or controversies. You cite the significant turning points and controversies, which tend to be in high profile journals. But 'the way things are' part? That foundation is laid down in the more specialized journals. You can't cite them all, so you basically just lay them down as fact.

    That's WHY we need reviews... to explain all the perspective scattered across a zillion little specialized journals. If it were all just a matter of reading a couple of Cell and Nature papers, no one would need reviews.

  • [...] in a position of privilege if you’re saying that. I like Drugmonkey’s attitude, to subvert the system by being entirely reasonable. Among these reasonable ideas: don’t cite glamour mags unnecessarily; don’t not publish [...]

  • [...] aim for Nature, Science, or PNAS). DrugMonkey has been heavily involved in these discussions, and wrote this blog post in response to one of those discussions. His first point is “never let data go unpublished for lack of impact.” That seems reasonable. [...]

  • […] one’s career (get a tenure-track position or get tenure) and  and the idea of boycotting or subverting such journals and going […]
