Ruining scholarship, one bad mentor at a time

via comment from A Salty Scientist:

Gnosis:

When you search for papers on PubMed, it usually gives the results in chronological order so many new but irrelevant papers are on the top. When you search papers on Google Scholar, it usually gives results ranked by citations, so will miss the newest exciting finding. Students in my lab recently made a very simple but useful tool Gnosis. It ranks all the PubMed hits by (Impact Factor of the journal + Year), so you get the newest and most important papers first.

Emphasis added, as if I need to. You see, relevant and important papers are indexed by the journal impact factor. Of course.
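
For anyone who missed just how simple "simple but useful" is here, a minimal sketch of that kind of ranking (hypothetical paper records and made-up impact factors; the tool's actual data source and tie-breaking rules aren't described in the post):

    # Toy reimplementation of the Gnosis-style scoring described above:
    # score = (impact factor of the journal) + (publication year).
    # The impact factors below are made up for illustration.
    impact_factor = {"Nature": 41.5, "J Biol Chem": 4.2, "PLoS ONE": 3.2}

    papers = [
        {"title": "Old glam paper",  "journal": "Nature",      "year": 1999},
        {"title": "New solid paper", "journal": "J Biol Chem", "year": 2015},
        {"title": "New OA paper",    "journal": "PLoS ONE",    "year": 2016},
    ]

    def gnosis_score(paper):
        # Literally IF + year, per the quoted description.
        return impact_factor.get(paper["journal"], 0.0) + paper["year"]

    for p in sorted(papers, key=gnosis_score, reverse=True):
        print(round(gnosis_score(p), 1), p["title"])

Note that with these made-up numbers, a 1999 Nature paper (score 2040.5) outranks everything published this decade, which is a funny way of delivering "the newest and most important papers first."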


  • OlympiasEpiriot says:

    Applied Newtonian Scientist here... which = engineer. And obviously, as I'm in consulting, I'm not publishing in anything indexed on PubMed and, honestly, we're probably all confused as to why I read you; but help me understand...

    1) Is a journal impact factor an agreed-upon constant or just some "highly-respected guru's opinion"?
    2) When I type into a search engine, if I get actually irrelevant results I assume I made a mistake in my search terms. Does "irrelevant" mean something different in your field?
    3) If lots of people clicked on a particular item in their searches (with the non-Gnosis search engine) and those "hits" get counted, aren't the *cough* non-useful searches influencing the Gnosis algorithm?
    4) Should I be embarrassed for asking these questions?

  • Mikka says:

    I've been kicking the tires on this thing. It is so outrageously biased towards CNS that the results are practically useless.

  • OlympiasEpiriot says:

    PS: I meant to write "an agreed-upon constant value for that journal", not to imply there was some C.

  • dr24hours says:

    Olympias:

    1) It is a calculation performed by Thomson Reuters, shrouded in mystery, but basically based on "How often do papers in Journal X get cited quickly?" (A sketch of the standard two-year version follows below.)

    2) Search terms are almost always too broad for modern search algorithms to narrow the results sufficiently.

    3) "Clicked on" is not the same as "cited" in this context, and should not influence these results significantly.

    4) No.
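
    For the curious, the standard two-year version of the calculation works roughly like this (a minimal sketch with invented numbers; Thomson Reuters' exact counting of "citable items" is part of the mystery):

        # Rough sketch of the standard two-year journal impact factor.
        # The numbers below are invented, purely for illustration.
        def two_year_jif(cites_this_year_to_prior_two, citable_items_prior_two):
            # Citations received this year by papers the journal published
            # in the previous two years, divided by the number of citable
            # items it published in those two years.
            return cites_this_year_to_prior_two / citable_items_prior_two

        # e.g. 2400 citations in 2015 to a journal's 2013-2014 papers,
        # against 600 citable items published in 2013-2014 -> JIF = 4.0
        print(two_year_jif(2400, 600))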

  • SidVic says:

    We should go further: establish a committee to decide which journals are genuinely "peer-reviewed" and exclude all the others from the databases. Filters are just too easy to defeat. I strongly urge you to read the University of East Anglia emails if you doubt that a proposal of this sort would find supporters in some quarters.

  • A Salty Scientist says:

    This is why those in power need to speak up on Study Section and beyond. I'm just trying to survive pre-tenure at a flyover, barely-MRU. I'm lucky to be able to start a discussion anonymously on DM. Dr. Liu is the chair of GCAT.

  • Dave says:

    Let's hope she can keep her glam humping under wraps in SS. She seems to offer some good insights here:

    http://www.longwoodgenomics.org/2015/10/29/lessons-from-gcat/

  • A Salty Scientist says:

    @Dave: incidentally, that's how I originally found her blog. She makes very good points, and I have found her blog useful in other regards. It was perhaps a bit unfair of me to start this shitstorm. Nobody is perfect, and we all have both explicit and implicit biases. This speaks to the need for diversity on review panels and people specifically pushing back against glamour humping.

  • Mikka says:

    To be fair, she didn't build this abomination; her students did. I assume her students are the ones in thrall to glam (as they should be; after all, that's what they can see working for career advancement. Whose fault is that? Not theirs).

  • Ola says:

    As Dr24 opines, if your PubMed searches are giving too many results, you're doing it wrong. Let's say you need a paper on bunny-hopping from some fart at Major Research University whose last name begins Smith or something and was published in the '90s, or maybe not? Your PubMed search had darn-well better be: bunnyhop*[TI] AND (city where MRU is located)[AD] AND Smi*[AU] AND 1988:2002[PDAT]. Unless of course you enjoy the serendipity of wading through results and finding stuff you didn't know about. (A scripted version of this kind of fielded query is sketched below.)

    But yeah, filtering by IF is just dumb.

    It is especially stupid when you consider this tool is probably just pulling the most recent year's IF for the journal in question (since pulling the IF at the time the paper was published is probably too data-intensive and would require a back-end database of IFs for every year). IFs change, and a lot of previously "good" journals now have lower IFs than in the past (cough JBC cough), so if my search for a paper from the '90s gets pushed lower down the list because the journal's IF is crap now, that's not a cool feature.
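
    A minimal scripted version of that kind of fielded query, for anyone who would rather hit NCBI's E-utilities than the web form (using Biopython; the terms mirror the toy example above and the email address is a placeholder):

        # A tightly fielded PubMed search via Biopython's wrapper around
        # NCBI's E-utilities. The query terms mirror the made-up example
        # above; swap in real ones.
        from Bio import Entrez

        Entrez.email = "you@example.org"  # NCBI asks for a contact address

        # Note the fielded tags and the colon syntax for a date range.
        query = "bunnyhop*[TI] AND Smi*[AU] AND 1988:2002[PDAT]"

        handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
        record = Entrez.read(handle)
        handle.close()
        print(record["Count"], record["IdList"])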

  • SidVic says:

    I really adore Google Scholar. It encompasses the entire text of articles rather than just the abstract and keywords. You can find anything from conceptual info to specific methods fast, especially if you become skilled at using operators and quotation marks: "X media was prepared" ... "induces the expression of RAB2". If you do not regularly use it, do yourself a favor and spend 15 minutes playing with it (I know that everybody here is prolly an aficionado, but I explicitly teach students the tricks all the time). That is when I do the majority of my literature searches: when I have a result and I am trying to figure out what it means / what has been published before.

  • becca says:

    "Let's say you need a paper on bunny-hopping from some fart at Major Research University whose last name begins Smith or something and was published in the '90s, or maybe not? "
    Who the heck uses PubMed this way?

  • The Other Dave says:

    Ha ha. Actually reading papers and thinking about them is so old-fashioned. If you agree with the title, cite it!

  • drugmonkey says:

    "In thrall to Glam" is very well coined Mikka. Top marks.

  • Grumble says:

    How do I get my lab router to ban access to Gnosis?

  • Curiosity says:

    I wonder if the irony of titling a glam filter 'gnosis' is lost on the lil ones.

  • Cynric says:

    "Ha ha. Actually reading papers and thinking about them is so old-fashioned. If you agree with the title, cite it!"

    And if it's published by one of the BSDs that you need to mollify, the title only needs to be basically relevant.

  • jmz4 says:

    @Curiosity some people definitely consider their Nature papers to be divine insight.

    Google Scholar with refined search terms is the way to go, it seems. I'd personally like the ability to limit results to an index of selected journals. But I've definitely encountered situations where slogging through the 10th page of PubMed results yielded some diamonds in the rough.

    I'm sensing a divide between younger vs older folks. The tenured folks seem to use pubmed primarily to track down papers they've read previously. I personally use it to source for confirmatory evidence on some idea, or see if it's been done before. Occasionally for methods.

  • The Other Dave says:

    "The tenured folks seem to use pubmed primarily to track down papers they've read previously."

    Well... yea. We're all supposed to be keeping up with the literature in our field. Those of us who are doing so will have read all that stuff previously.

    Everyone here should have weekly PubMed alerts for things in their field, and a Google Scholar profile with automatic 'My updates'.

    "I personally use it to source for confirmatory evidence on some idea, or see if it's been done before. Occasionally for methods."

    CONFIRMATORY evidence?! That's not how science works!
    You do not skim the literature looking for stuff that agrees with you.

    You need to look specifically for the stuff that contradicts what you think. Maybe you can find a caveat or flaw in it. That's fine. But if you cannot, then you need to think very hard and likely modify your hypothesis. That's how we figure things out.

    Remember that amongst all the whoring for grant money and glam journal humping and ass kissing for promotion, we are still supposed to be figuring out how the universe works. Please let's not forget the science in the science careerism.

  • drugmonkey says:

    You do not skim the literature looking for stuff that agrees with you.

    Oh, Lordy, this. THIS.

  • Established PI says:

    I'm a tenured baby boomer and I use both. PubMed is great for finding all the latest papers in my area - *wherever* they are published (and, seriously, I am looking for new information, not confirmation of old stuff). Google is awesome for its full-text search and ability to mine the literature for nuggets of information that didn't make it into the abstract but are highly relevant to our current work. Multiple alerts are essential. Now I just need to figure out how to set up alerts for preprint archives.

  • OlympiasEpiriot says:

    "CONFIRMATORY evidence?! That's not how science works!
    You do not skim the literature looking for stuff that agrees with you.

    You need to look specifically for the stuff that contradicts what you think. Maybe you can find a caveat or flaw in it. That's fine. But if you cannot, then you need to think very hard and likely modify your hypothesis. That's how we figure things out."

    Oh hell yeah! Even in my line of work -- in consulting, for profit. Even with testing rock strength for my geotechnical reports. I'm looking for the weirdness, the stuff that doesn't fit. If I'm wrong and just going with the flow or what confirms my biases, I might have a real big practical and expensive problem down the road.

    I think I read this to remind myself that it isn't all sunshine and roses back in The Groves of Academe...frequently I fantasize about returning for a PhD. You all are the antidote for that.

  • jmz4 says:

    "CONFIRMATORY evidence?! That's not how science works!
    You do not skim the literature looking for stuff that agrees with you. "
    -I'm not sure if we're talking about the same stage of investigation, or if your definition of confirmatory is different from mine. Maybe corroborating is a better term for what I meant. Otherwise you seem to be suggesting that the beautiful, complete hypothesis blooms into your head fully formed and ready to be tested by the bright-eyed trainee. That might be how we write it in papers, but it's BS and you know it.

    In reality, you (or more likely your trainee) have an idea (or 10), and then you look for corroboration in the literature. E.g., I have an RNA-seq data set that leads me to think gene A, F, E, or Q interacts with gene Z or M to increase risk of Alzheimer's disease. I find out F and E have already been shown. For A and Q, no direct evidence in the lit. Cool, novel ideas.

    Either way I look for confirmation that my ideas might be right.
    So, I find a GWAS for AD listing gene Q as a candidate. Cool, I can figure out a mechanism. I find another paper showing gene Q interacts with gene D, which interacts with gene X, and another showing gene X interacts with gene Z. This allows me to predict Q->D->X->Z, which I can then test.

    PIs get to think of hypotheses in grand, big terms, i.e. "I hypothesize Alzheimer's is caused by hyperactivation of neurons." PDs have to think in more concise, testable terms, in order to pick the experiments that have the best chance of panning out, or which can be easily falsified. E.g. "I think this gene may form a complex with this gene, which downregulates synaptic transmission. Maybe it's broken in AD."

    If I fail to find any evidence this idea is right (e.g. A-Z in my example above), I put it in the folder marked "for when you have your own lab."

  • The Other Dave says:

    jmz4: Remember that all of us PIs were once successful postdocs. Just like you were a successful grad student, or you wouldn't be a postdoc. So we don't think as differently as you might suspect.

    I get what you're saying, and I think a lot of people do like you describe. Unfortunately, I think it is a poor way to figure things out. You're failing to account for any false positive rate in your interactions. Even if the false positive rate (false interaction rate, in your case) is very small, you will always be able to build a 'pathway' (albeit a false one) out of those apparent interactions if you search long enough.
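
    Toy arithmetic, if it helps (the 5% per-link false interaction rate is invented, purely to show the compounding):

        # Chance that a chained "pathway" is entirely real, assuming each
        # reported interaction is independently true with probability 1 - p.
        # The 5% false positive rate here is invented for illustration.
        false_positive_rate = 0.05
        for links in range(1, 6):
            p_all_true = (1 - false_positive_rate) ** links
            print(f"{links}-link pathway: {p_all_true:.0%} chance every link is real")

    And that's before the selection effect: if you keep searching until a chain appears, you have effectively guaranteed you'll find one, true or not.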

    Maybe an analogy will help. Let's say that no matter who you ask about anything, they will tell you something completely wrong every once in a while (like any lab assay). So I start asking people what happened in Brussels. One guy says some terrorists blew themselves up. Another guy says that the terrorists rode in on elephants. Another guy says one of the explosions happened at a train station. Another guy says that the terrorists were wearing red hats. You note that no one has given you any information about whether the terrorists were wearing shoes. So you ask about shoes, specifically. Aha! Three people confirm that the terrorists were wearing shoes.

    So you publish your paper saying that two terrorists wearing red hats and shoes rode into at least one train station in Brussels on elephants and blew themselves up.

    Obviously that's not so good. In contrast, if you start with the whole hypothesis, as described above, and then methodically test each part, you would quickly find that the hypothesis as a whole was false. You could then modify the hypothesis (maybe there weren't any elephants involved...) and re-test it.

    So... yea.... maybe that "complete hypothesis fully bloomed" that some of us describe in papers isn't such a lie after all. You're just forgetting that we burned through a lot of hypotheses that turned out to be false before then. But that's why we have labs and postdocs... to help us burn through those hypotheses until we find one that seems to hold true.

  • SidVic says:

    Wow, I am impressed with the discussion. My approach is somewhat different, I think, than most. I screw around and try stuff out, almost randomly; totally curiosity-driven. This is the benefit of curious students, because they don't know enuf to understand that they are trying stupid stuff. Eventually you find a phenomenon: the cells turn black, or are protected by X from Z.
    Then I blitz the literature, very skillfully I assure you, and talk with ppl knowledgeable in this area, if they are available, to determine if the answer is already out there. My bias is that at least a clue is available if you dig deep enough. Certainly if you examine the history of solved problems, in retrospect, clues abound. Many times it can be explained; if so, it is dropped like it's hot. If not, then I have a new obsession. Of course this approach is totally inefficient. In my career of 20 yrs I have solved 3 problems (ahem, one of which I blew the answer to).
    When I was a graduate student I remember telling someone that "nobody has ever tried this!" They replied, "maybe there is a reason for that..." It took me a long time to understand that the majority think this way, and with good reason.
