Drugmonkey
http://drugmonkey.scientopia.org
Grants, research and drugs

Political observation
http://drugmonkey.scientopia.org/2016/07/25/political-observation/
Mon, 25 Jul 2016 23:07:14 +0000

When pressed, the more mainstream supporters of Donald Trump in the Republican party insist that they believe Trump does not really mean the full import of his wildest statements. He doesn't really plan to block all Muslims from entering the country, he doesn't really mean to deport all undocumented immigrants, he doesn't really mean to.... etc. So, as I understand their thought process, it is okay to support his candidacy without supporting all that crazy stuff.

Interestingly, these self-same people have a burning hatred (or at least a profound, irrevocable mistrust) of Hillary Clinton because they believe that she doesn't really mean what she says during the campaign or in her prior political activities. They are positively obsessed with conspiracy-level accusations about her alleged insincerity, disingenuousness and secret machinations. They are completely and utterly unable to take her policy statements, and her descriptions of the reasons for her prior actions, at face value. And to be clear, it is not just that they criticize her actions. They are worked up to an absolute frenzy about her alleged insincerity, far more than they are about the actual policies or actions.

It's fascinating. On the one hand Republicans support Trump because they believe he is a liar. On the other hand, they absolutely hate Clinton because they believe that she is a liar.

I oppose H8
http://drugmonkey.scientopia.org/2016/07/22/h8/
Fri, 22 Jul 2016 23:13:54 +0000

Sometimes just shaking your head isn't enough.

One of the things that I've believed to be the very essence of the American Dream is the aspiration to own your own little home, live in a nice neighborhood, raise your kids as best you can and live happily ever after with your spouse.

NBC Philadelphia reported on the American Dream of one couple as it is right now in 2016.

The couple said they bought the house in 2014 and moved there for a fresh start — a place where their boys, now ages 8 and 13, could play in the yard with the four family dogs and leave behind the hurt of their biological parents’ struggles with drugs and crime.

But, the pair said, they found only discrimination and hate. First, they said, in the form of the frivolous lawsuit, and later during a months-long campaign of repeated vandalism to their home that included someone using the cover of night to scrawl the slur onto their garage, breaking their security sensors on numerous occasions and twice taking a hacksaw to the white fence that supposedly sparked it all.

There's a gofundme set up to pay this family's legal bills.

I'm going to suggest that this is a great opportunity to give even as little as $5 just to register a vote of protest against hatred and in support of decency. Or as the lighting of a candle in the darkness.

Or maybe, as I do, you think that this could easily be you, your family or the family of people who are really close to you. The specifics may vary. Maybe it is not your sexual orientation but the color of your skin. Maybe it is your religion or the clothes you choose to wear. Maybe it is your chosen profession or perhaps a health or ability condition. Whatever it may be, you might be unlucky enough to end up in a neighborhood with people who hate you for what you are, not for who you are. And some of these sick individuals may feel it is perfectly acceptable to persecute you because of their hatred.

And if so perhaps you hope, as I do, that if you had uncharitable neighbors like these poor people do, that the rest of the country would rise up and register a small vote of support for you.

Personal jihads and distinguishing better/worse science from wrong science
http://drugmonkey.scientopia.org/2016/07/22/personal-jihads-and-distinguishing-betterworse-science-from-wrong-science/
Fri, 22 Jul 2016 20:33:56 +0000

This is relevant to posts by Russ Poldrack flagellating himself for apparent methodological lapses in fMRI analysis.

The fun issue is as summarized in the recent post:

Student [commenter on original post] is exactly right that I have been a coauthor on papers using methods or reporting standards that I now publicly claim to be inappropriate. S/he is also right that my career has benefited substantially from papers published in high profile journals prior using these methods that I now claim to be inappropriate. ... I am in agreement that some of my papers in the past used methods or standards that we would now find problematic...I also appreciate Student's frustration with the fact that someone like myself can become prominent doing studies that are seemingly lacking according to today's standards, but then criticize the field for doing the same thing.

I made a few comments on the Twitts to the effect that this is starting to smell of odious ladder pulling behavior.

One key point from the original post:

I would note that points 2-4 were basically standard practice in fMRI analysis 10 years ago (and still crop up fairly often today).

And now let us review the original critiques to which he is referring:

  • There was no dyslexic control group; thus, we don't know whether any improvements over time were specific to the treatment, or would have occurred with a control treatment or even without any treatment.
  • The brain imaging data were thresholded using an uncorrected threshold.
  • One of the main conclusions (the "normalization" of activation following training) is not supported by the necessary interaction statistic, but rather by a visual comparison of maps.
  • The correlation between changes in language scores and activation was reported for only one of the many measures, and it appeared to have been driven by outliers.

As I have mentioned on more than one occasion, I am one who finds value in the humblest papers and in the single reported experiment. Oftentimes it is such tiny, tiny threads of evidence that help our science along, and it is the absence of any information whatsoever that hinders us.

I find myself mostly able to determine whether the proper controls were used. More importantly, I find myself more swayed by the strength of the data and the experiment presented than I am by the claims made in the Abstract or Discussion about the meaning of the reported work. I'd rather be in a state of "huh, maybe this thing might be true (or false), pending these additional controls that need to be done" than a state of "dammit, why is there no information whatsoever on this thing I want to know about right now".

Yes, absolutely, I think that there are scientific standards that should be generally adhered to. I think the PSY105: Experimental Design (or similar) principles regarding the perfect experiment should be taken seriously....as aspirations.

But I think the notion that you "can't publish that" because of some failure to attain the Gold Plated Aspiration of experimental design is stupid and harmful to science as a hard and fast rule. Everything, but everything, should be reviewed intelligently and thoughtfully by the peers considering a manuscript for publication. In essence, taken on its merits. This is much as I take any published data on their own merits when deciding what I think they mean.

This is particularly the case when we start to think about the implications for career arcs and the limited resources that affect our business.

It is axiomatic that not everyone has the same interests, approaches and contingencies that affect their publication practices. This is a good thing, btw. In diversity there is strength. We've talked most recently around these parts about LPU incrementalism versus complete stories. We've talked about rapid vertical ascent versus riff-raff. Open Science Eleventy versus normal people. The GlamHounds versus small town grocers. ...and we almost invariably start in on how subfields differ in any of these discussions. etc.

Threaded through many of these conversations is the notion of gate keeping. Of defining who gets to play in the sandbox on the basis of certain standards for how they conduct their science. What tools they use. What problems they address. What journals are likely to take their work for publication.

The gates control the entry to paper publication, job appointment and grant funding, among other things. You know, really frickin important stuff.

Which means, in my not at all humble opinion, that we should think pretty hard about our behavior when it touches on this gate keeping.

We need to be very clear on when our jihadist "rules" for how science needs to be done distinguish right from wrong versus when they merely express personal preference.

I do agree that we want to keep the flagrantly wrong out of the scientific record. Perhaps this is the issue with the triggering post on fMRI, but the admission that these practices still continue casts some doubt in my mind. It seems more like a personal preference. Or a jihad.

I do not agree that we need to put in strong controls so that all of science adheres to our personal preferences. Particularly when our personal preferences are for laziness and reflect our unwillingness to synthesize multiple papers or to think hard about the nature of the evidence behind the Abstract's claim. Even more so when our personal preferences really come from a desire to winnow a competitive field and make our own lives easier by keeping out the riff raff.

Group effects. or "effects".
http://drugmonkey.scientopia.org/2016/07/22/group-effects-or-effects/
Fri, 22 Jul 2016 17:56:18 +0000

How many times do we see the publication of a group effect in an animal model that is really just a failure to replicate? Or a failure to completely replicate?

How many of those sex-differences, age-differences or strain-differences have been subjected to replication?

Thought of the day
http://drugmonkey.scientopia.org/2016/07/15/thought-of-the-day-46/
Fri, 15 Jul 2016 21:30:19 +0000

I was joshing with the spouse about coups, Trump and the ready availability of pseudo-combat firearms today and a thought later occurred to me.

I'm actually pretty confident in the trigger pullers in my household.

Don't get me wrong, we're not a gun nut family- very likely I'm the only one who has so much as touched a firearm. But if they had to..... 

I was thinking about their respective ages and peers and what not and I'd pick them every time. 

I didn't know I had that particular confidence in my spouse and kids. 

Funny thought to occur. 

Columbia University busted for taking too much overhead on NIH grants
http://drugmonkey.scientopia.org/2016/07/15/columbia-university-busted-for-taking-too-much-overhead-on-nih-grants/
Fri, 15 Jul 2016 18:37:33 +0000

The Manhattan branch of the US Attorney's Office charged:

The United States’ Complaint-In-Intervention (the “Complaint”) alleges that from July 1, 2003, through June 30, 2015, COLUMBIA impermissibly applied its “on-campus” indirect cost rate – instead of the much lower “off-campus” indirect cost rate – when seeking federal reimbursement for 423 NIH grants where the research was primarily performed at off-campus facilities owned and operated by the State of New York and New York City. The Complaint further alleges that COLUMBIA failed to disclose to NIH that it did not own or operate these facilities and that COLUMBIA did not pay for use of the space for most of the relevant period.

...and Columbia University admitted:

COLUMBIA has admitted that it applied the on-campus indirect cost rate to the 423 NIH grants even though the research was primarily performed in space not owned or operated by Columbia, and that it submitted to NIH certified reports that used the on-campus indirect cost rate to calculate the indirect cost amounts claimed by the university.

Ah, those tricky accountants.

Oh, cool. Paging down in the complaint we get some specifics:

From July 1, 2003, through June 30, 2015, COLUMBIA’s On-Campus F&A Rate was approximately 61 percent, its Off-Campus F&A Rate was 26 percent, and its Modified Off-Campus F&A Rate was 29.4 percent. The Modified Off-Campus F&A Rate was to be applied to research conducted off-campus but within a certain proximity of the COLUMBIA campus.
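To put that rate gap in concrete terms, here is a back-of-envelope sketch. The $200K direct-cost base is my assumption for illustration, not a figure from the complaint, and real F&A arithmetic applies the rate to modified total direct costs rather than the full directs, so actual numbers would differ:

```python
# Back-of-envelope: indirect costs recovered on a hypothetical grant year
# with $200,000 in direct costs (an assumed figure, not from the complaint).
# The F&A rates are those quoted in the complaint.
directs = 200_000
on_campus_rate = 0.61   # Columbia's on-campus F&A rate
off_campus_rate = 0.26  # Columbia's off-campus F&A rate

on_campus_overhead = directs * on_campus_rate    # $122,000
off_campus_overhead = directs * off_campus_rate  # $52,000
excess_per_grant_year = on_campus_overhead - off_campus_overhead

print(f"On-campus overhead:  ${on_campus_overhead:,.0f}")
print(f"Off-campus overhead: ${off_campus_overhead:,.0f}")
print(f"Excess claimed per grant-year: ${excess_per_grant_year:,.0f}")
```

At this assumed budget, the on-campus rate pulls in roughly $70K more per grant-year; multiplied across 423 grants over a dozen years, it is easy to see how the settlement landed in the millions.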

Y'know. When I first read this my first thought was that I know some Columbia folks that work off campus at.... oh shit. It's them. It's the drug abuse folks.

COLUMBIA has a collaborative relationship with the New York State Psychiatric Institute (“NYSPI”), a clinical research facility administered by the New York State Office of Mental Health. COLUMBIA faculty perform research in two off-campus buildings owned by the State of New York and operated by NYSPI (the “NYSPI Buildings”). COLUMBIA faculty also perform research in another off-campus building owned and operated by the City of New York (the “City Building”).

For most of the relevant period, COLUMBIA did not pay the State of New York for use of the NYSPI Buildings, and therefore did not incur indirect “facilities-related” costs with respect to the medical research performed in these buildings. Similarly, COLUMBIA did not pay the City of New York for use of the City Building.

Presumably my friends who are the PIs on these grants had no idea. I have no idea what my institution actually charges the NIH as overhead on my grants; all I look at is my direct cost expenditures and balances. But still, sorry to see that it was their research grants that were involved. If nothing else it means that NIDA [Update: I found some details; 22 of the 423 total grants were direct listings from NIDA, although there are what look like subaward identifiers that may or may not involve other NIDA grants.] was the entity being ripped off. The settlement was for $9.5 million. It doesn't say how much of this is direct recovery for the fraud and how much is court costs or punishment.

oh, wait. Damn. This looks bad.

COLUMBIA did not state on the applications for the NIH Grants that the research would be primarily performed off-campus, as required. Instead, Columbia frequently included the main address for the College of Physicians & Surgeons in the section of the application that was supposed to list the primary performance location. Even where the NYSPI Buildings or the City Building were listed in that section of the grant application, or mentioned elsewhere in the application, COLUMBIA failed to disclose that these buildings were not owned and operated by the university.

Starting in fiscal year 2009, in lieu of paying rent for use of one of the NYSPI Buildings, the Department of Neuroscience paid NYSPI a portion of the inflated indirect cost recoveries it received from NIH for research projects performed in that building.

This smells a lot more like highly intentional fraud and less like a mistake that someone should have caught in the pre-award review of the grant, if you ask me. Especially when CU was clearly negotiating the rental arrangements with NYSPI. Someone pretty high up in the office of grants and contracts had to be doing this whole charade intentionally and with planning. There are a handful of other regulatory issues that I don't want to get into which very likely pointed a spotlight on the "performance location" too. This had to be intentional.

Turns out that this was a whistleblower case.

In connection with the filing of the lawsuit and settlement, the Government joined a private whistleblower lawsuit that had previously been filed under seal pursuant to the False Claims Act.

Good for that brave person for bringing this to light.

__
Final thought: I bet you that Columbia University is not the only NIH-funded University out there that pulls shenanigans like this. Now, you would think that there would be some sort of broad and universal alert sent to the Signing Officials of each University that has an on- and off-campus rate, telling them to get their act together or else any future investigation that busts them will automatically have the fines tripled. But going by at least one narrow similar area that I've followed over the past couple of decades (the anti-lobbying / grant writing thing), apparently this does not happen. So keep your eyes peeled for the next decade. I bet there will be more of these and that in each case it will again be figured out only via whistleblower.

Question of the Day
http://drugmonkey.scientopia.org/2016/07/15/question-of-the-day-8/
Fri, 15 Jul 2016 14:10:58 +0000

Did you have a side job as a graduate student or postdoc?
Or as faculty? 

The R01 still doesn't pay for itself and reviewers are getting worse
http://drugmonkey.scientopia.org/2016/07/11/the-r01-still-doesnt-pay-for-itself-and-reviewers-are-getting-worse/
Mon, 11 Jul 2016 23:04:29 +0000

I pointed out some time ago that the full modular R01 grant from the NIH doesn't actually pay for itself.

In the sense that there is a certain expectation of productivity, progress, etc on the part of study sections and Program that requires more contribution than can be afforded (especially when you put it in terms of 40 hr work weeks) within the budget. Trainees on individual fellowships or training grants, undergrads working for free or work study discount, cross pollination with other grants in the lab (which often leads to whinging like your comment), pilot awards for small bits, faculty hard money time...all of these sources of extra effort are frequently poured into a one-R01 project. I think they are, in essence, necessary.

I had some additional thoughts on this recently.

It's getting worse.

Look, it has always been the case that reviewers want to see more in a grant proposal. More controls, usually. Extra groups to really nail down the full breadth of...whatever it is that you are studying. This really cool other line of converging evidence... anything is possible.

All I can reflect is my own experience in getting my proposals reviewed and in reviewing proposals that are somewhat in the same subfields.

What I see is a continuing spiral of both PI offerings and of reviewer demands.

It's inevitable, really. If you see a proposal chock full of nuts that maybe doesn't quite get over the line of funding because of whatever reason, how can you give a fundable score to a very awesome and tight proposal that is more limited?

Conversely, in the effort to put your best foot forward you, as applicant, are increasingly motivated to throw every possible tool at your disposal into the proposal, hoping to wow the reviewers into submission.

I have reviewed multiple proposals recently that cannot be done. Literally. They cannot be accomplished for the price of the budget proposed. Nobody blinks an eye about this. They might talk about "feasibility" in the sense of scientific outcomes or preliminary data or, occasionally, some perceived deficit of the investigators/environment. But I have not heard a reviewer say "nice but there is no way this can be accomplished for $250K direct". Years ago people used to crab about "overambitious" proposals but I can't say I've heard that in forever. In this day and age of tight NIH paylines, the promises of doing it all in one R01 full-modular 5 year interval are escalating.

These grants set a tone, btw. I'm here to tell you that I've seen subfield related proposals that do seem feasible, money-wise, get nailed because they are too limited in scope. In some cases there is enough study-section continuity involved for me to be certain that this is due to reviewer contamination from the aforementioned chock-full-o-nuts impossible proposals. Yes, some of this is due to SABV but not all of it. It ranges from "why you no include more co-investigators?" (a subtle spread-the-wealth knock on big labs? maybe) to "You really need to add X, Y and Z to be convincing" (mkay but... $250K dude) to "waaah, I just want to see more" (even though they don't really have a reason to list).

Maybe this is just me being stuck in the rut I was trained in. In my formative years, grant review seemed to expect you would propose a set of studies that you could actually accomplish within the time frame and budget proposed. I seem to remember study section members curbing each other with "Dude, the PI can't fit all that stuff into one proposal, back off." I used to see revisions get improved scores when the PI stripped a bloated proposal down to a minimalist streamlined version.

Maybe we are just experiencing a meaningless sea change in grant review to where we propose the sky and nobody cares on competing renewal if we managed to accomplish all of that stuff.

Where the NIGMS argument doesn't add up
http://drugmonkey.scientopia.org/2016/07/08/where-the-nigms-argument-doesnt-add-up/
Fri, 08 Jul 2016 20:48:54 +0000

The NIGMS continues its ongoing argument for funding more labs with ever decreasing amounts of grant funding in a new Feedback Loop post.

This one focuses, yet again, on "productivity" as assessed by publication counts and (this time) citations of those publications. It is, as always, significantly flawed by ignoring the effects of Glamour publications. I've done that before and it is starting to bore me. In short, you cannot compare apples to oranges because of the immense difference in the cost of generating your average Nature paper versus a Brain Research paper. And citations don't help because getting into a Glam journal does not mean your paper will get any particular number of citations. Furthermore, there is very little chance that papers that cost 10 or 20 times more will generate ten or twenty times the citations, on average, given the skew in citation distributions and the fact that Glam journals are only hitting means in the 30-40 range. Finally, their "efficiency" measures completely ignore the tremendous inefficiencies of interrupted funding, which is a reality under the current system and also not necessarily fixed with their spread-the-wealth schemes.

The real issue of the day is the opinion of the fans of NIGMS's "conclusion*", which reads:

Overall, however, the data suggest that supporting a greater number of investigators at moderate funding levels is a better investment strategy than concentrating high amounts of funding in a smaller number of researchers.

The Sally Rockey blog entry on "mythbusting" is relevant here. As of FY2009 about 72% of NIH funded investigators had one RPG. Another 20% had two and maybe 5% had three.

That's all.

The NIGMS data analyses are big on fitting productivity lines to about the single R01 level of direct costs (~$200K per year) and showing how the productivity/cost drops off as the grant funding increases. Take a good look at the most recent analysis. Linear productivity up to $300K direct costs with the 75%ile sustained all the way to $500K. The famous original 2010 analysis by Jeremy Berg at NIGMS is pretty similar in the sense that you don't get much change in the line fit to mean publications until you get to the $600-$700K direct costs range.

There is a critical point in lining up these two bits of information, which is that the NIGMS policy intent is not supported by their analysis and it can't be. The one- or two-RPG levels from Rockey's post should be interpreted in full modular R01 terms ($250K direct, usually cut to $200K or $225K direct, and in NIGMS' case to 4 years by default) with a little bit of float upwards for the rare cases. Consequently, it is obvious that most NIH awardees operate in the ~$200-250K part of NIGMS' dataset. Another 20% operate in the $400-$500K direct range. In other words, well within the linear part of the productivity/cost curve.

Mean publications as represented by the 2010 Berg analysis are increasing linearly well up to the three to four grant level of $750K direct costs.

In either case, the "inefficient" grant levels are being obtained by a vanishingly small number of investigators.

Fine, screw them, right?

Sure....but this does nothing to address either the stated goal of NIGMS in hedging their bets across many labs or the goal of the unfunded, i.e., to increase their chances substantially.

A recent Mike Lauer blog post showed that about a third of those PIs who seek RPG funding over a rolling 5 year interval achieve funding. Obviously if you took all the multi-grant PIs and cut them down to one tomorrow, you'd be able to bump funded investigators up by 15-20%, assuming the FY2009 numbers still hold**. It isn't precise because if you limit the big guys to one award then their budgets are going to drift up to $499K direct at a minimum, and a lot more will have special permission to crest the $500K threshold.
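The ballpark can be sketched from the FY2009 distribution quoted above (72% of funded PIs with one RPG, 20% with two, maybe 5% with three). A naive count of the awards freed by a one-RPG cap gives an upper bound of about 30 extra funded PIs per 100; the budget drift toward $499K direct is exactly what shaves that naive ceiling down toward the 15-20% range. A minimal sketch:

```python
# Naive redistribution sketch using the FY2009 RPG distribution cited above:
# ~72% of funded PIs hold one RPG, ~20% hold two, ~5% hold three
# (the remaining ~3% are ignored here). This deliberately ignores budget
# drift toward the $499K direct cap, so it is an upper bound.
per_100_pis = {1: 72, 2: 20, 3: 5}  # RPGs held -> PIs per 100 funded PIs

total_grants = sum(n * pis for n, pis in per_100_pis.items())
freed_grants = sum((n - 1) * pis for n, pis in per_100_pis.items())

print(f"RPGs per 100 funded PIs: {total_grants}")        # 127
print(f"Awards freed by a one-RPG cap: {freed_grants}")  # 30
print(f"Naive upper bound on bump in funded PIs: ~{freed_grants}%")
```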

There will be a temporary sigh of relief and some folks will get funded at 26%ile. Sure. And then there will be even more PIs in the game seeking funding and it will continue to be a dogfight to retain that single grant award. And the next round of newbies will face the same steep odds of entry. Maybe even steeper.

So the ONLY way for NIGMS' plan to work is to cut per-PI awards way, way down into the front part of their productivity curves. Well below the point of inflection ($300-500K or even $750K, depending on measure) where papers-per-grant-dollar drops off the linear trend. Even the lowest estimate of $300K direct is more than one full-modular grant. It will take a limit substantially below this level*** to improve perceptions of funding ease or to significantly increase the number of funded labs.

Which makes their argument based on those trends a lie, if they truly intend it to support their "better investment strategy". Changing the number of investigators they support in any fundamental way means limiting per-PI awards to the current full modular limit (with typical reductions) at the least, and very likely substantially below this level to produce anything like a phase change.

That's fine if they want to just assert "we think everyone should only have X amount of direct costs" but it is not so fine if they argue that they have some objective, productivity-based data analysis to support their plans. Because it does not.

__
*This is actually their long standing assertion that all of these seemingly objective analyses are designed to support.

**should be ballpark, given the way Program has been preserving unfunded labs at the expense of extra awards to funded labs these days.

***I think many people arguing in favor of the NIGMS type of "small grants for all" strategy operate from the position that they personally deserve funding. Furthermore, that some grant award at the full modular level or slightly below is sufficient for them. Any dishonest throwaway nod to other types of research that are more expensive (as NIGMS did: "We recognize that some science is inherently more expensive, for example because of the costs associated with human and animal subjects.") is not really meant or considered. This is somewhat narrow and self-involved. Try assuming that all of the two-granters in Rockey's distribution really need that amount of funding (remember the erosion of purchasing power?) and that puts it at more like 92% of awardees that enjoy basic funding at present. Therefore the squeeze should be proportional. Maybe the bench jockeys should be limited to $100K or even $50K in this scenario? Doesn't seem so attractive if you consider taking the same proportional hit, does it?

Repost: Why aren't they citing my papers?
http://drugmonkey.scientopia.org/2016/07/07/repost-why-arent-they-citing-my-papers/
Thu, 07 Jul 2016 13:47:09 +0000

As the Impact Factor discussion has been percolating along (Stephen Curry, Björn Brembs, YHN) it has touched briefly on the core valuation of a scientific paper: Citations!

Coincidentally, a couple of twitter remarks today also reinforced the idea that what we are all really after is other people who cite our work.
Dr24hrs:

More people should cite my papers.

I totally agree. More people should cite my papers. Often.

AmasianV:

was a bit discouraged when a few papers were pub'ed recently that conceivably could have cited mine

Yep. I've had that feeling on occasion and it stings. Especially early in the career when you have relatively few publications to your name, it can feel like you haven't really arrived yet until people are citing your work.

Before we get too far into this discussion, let us all pause and remember that all of the specifics of citation numbers, citation speed and citation practices are going to be very subfield dependent. Sometimes our best discussions are enhanced by dissecting these differences but let's try not to act like nobody recognizes this, even though I'm going to do so for the balance of the post....

So, why might you not be getting cited and what can you do about it? (in no particular order)

1) Time. I dealt with this in a prior post on gaming the impact factor by having a lengthy pre-publication queue. The fact of the matter is that it takes a long time for a study that is primarily motivated by your paper to reach publication. As in, several years of time. So be patient.

2) Time (b). As pointed out by Odyssey, sometimes a paper that just appeared reached final draft status 1, 2 or more years ago and the authors have been fighting the publication process ever since. Sure, occasionally they'll slip in a few new references when revising for yet the umpteenth time but this is limited.

3) Your paper doesn't hit the sweet spot. Speaking for myself, my citation practices lean this way for any given point I'm trying to make: the first, the best and the most recent. Rationales vary, and I would assume most of us can agree that the best, most comprehensive, most elegant and all-around most scientifically awesome study is the primary citation. Opinions might vary on primacy but there is a profound sub-current that we must respect the first person to publish something. The most-recent is a nebulous concept because it is a moving target and might have little to do with scientific quality. But all else equal, the more recent citations should give the reader access to the front of the citation thread for the whole body of work. These three concerns are not etched in stone but they inform my citation practices substantially.

4) Journal identity. I don't need to belabor this but suffice it to say some people cite based on the journal identity. This includes Impact Factor, citing papers on the journal to which one is submitting, citing journals thought important to the field, etc. If you didn't happen to publish there but someone else did, you might be passed over.

5) Your paper actually sucks. Look, if you continually fail to get cited when you think you should have been mentioned, maybe your paper(s) just sucks. It is worth considering this. Not to contribute to Imposter Syndrome but if the field is telling you to up your game...up your game.

6) The other authors think your paper sucks (but it doesn't). Water off a duck's back, my friends. We all have our opinions about what makes for a good paper. What is interesting and what is not. That's just the way it goes sometimes. Keep publishing.

7) Nobody knows you, your lab, etc. I know I talk about how anyone can find any paper in PubMed but we all need to remember this is a social business. Scientists cite people they know well, people they've just been chatting with at a poster session and people who have just visited for Departmental seminar. Your work is going to be cited more by people for whom you/it/your lab are most salient. Obviously, you can do something about this factor...get more visible!

8) Shenanigans (a): Sometimes the findings in your paper are, shall we say, inconvenient to the story the authors wish to tell about their data. Either they find it hard to fit it in (even though it is obvious to you) or they realize it compromises the story they wish to advance. Obviously this spans the spectrum from essentially benign to active misrepresentation. Can you really tell which it is? Worth getting angsty about? Rarely.....

9) Shenanigans (b): Sometimes people are motivated to screw you or your lab in some way. They may feel in competition with you and, nothing personal, but they don't want to extend any more credit to you than they have to. It happens, it is real. If you cite someone, then the person reading your paper might cite them. If you don't, hey, maybe that person will miss it. Over time, this all contributes to reputation. Other times, you may be on the butt end of disagreements that took place years before. Maybe two people trained in a lab together 30 years ago and still hate each other. Maybe someone scooped someone back in the 80s. Maybe they perceived that a recent paper from your laboratory should have cited them and this is payback time.

10) Nobody knows you, your lab, etc II, electric boogaloo. Cite your own papers. Liberally. The natural way papers come to the attention of the right people is by pulling the threads. Read one paper and then collect all the cited works of interest. Read them and collect the works cited in that paper. Repeat. This is the essence of graduate school if you ask me. And it is a staple behavior of any decent scientist. You pull the threads. So consequently, you need to include all the thread-ends in as many of your own papers as possible. If you don't, why should anyone else? Who else is most motivated to cite your work? Who is most likely to be working on related studies? And if you can't find a place for a citation....
