Racial Disparity in K99 Awards and R00 Transitions

(by drugmonkey) Jul 19 2018

Oh, what a shocker.

In the wake of the 2011 Ginther finding [see the archives on Ginther if you have been living under a rock] that there was a significant racial bias in NIH grant review, the concrete response of the NIH was to blame the pipeline. Their only real-dollar, funded initiatives were attempts to get more African-American trainees into the science pipeline. The obvious subtext here was that the current PIs, against whom the grant review bias was documented, must be the problem, not the victims. Right? If you spend all your time insisting that, because there were no red-fanged, white-hooded peer reviewers overtly proclaiming their hatred of black people, peer review can't be the problem, and you put your tepid money initiatives into scraping up more trainees of color, you are saying the current black PIs deserve their fate. Current example: NIGMS trying to transition more underrepresented individuals into faculty ranks, rather than funding the ones that already exist.

Well, we have some news. The Rescuing Biomedical Research blog has a new post up, Examining the distribution of K99/R00 awards by race, authored by Chris Pickett.

It reviews success rates of K99 applicants from 2007-2017. Applicant demographics broke down to nearly 2/3 White, ~1/3 Asian, 2% Multiracial and 2% Black. Success rates: White 31%, Multiracial 30.7%, Asian 26.7%, Black 16.2%. Conversion to the R00 phase: White 80%, Multiracial 77%, Asian 76%, Black 60%.

In terms of Hispanic ethnicity, the K99 success rate was 26.9% and the R00 conversion rate 77%, neither significantly different from the non-Hispanic rates.

Of course, seeing as how the RBR people are the VerySeriousPeople considering the future of biomedical careers (sorry, Jeremy Berg, but you hang with these people), the Discussion is the usual throwing up of hands and excuse-making.

"The source of this bias is not clear...". " an analysis ...could address". "There are several potential explanations for these data".

and of course
"put the onus on universities"

No. Heeeeeeyyyyyuuullll no. The onus is on the NIH. They are the ones with the problem.

And, as per usual, the fix is extraordinarily simple. As I repeatedly observe in the context of the Ginther finding, the NIH responded to a perception of a disparity in the funding of new investigators with immediate, heavy-handed, top-down, quota-based affirmative action for many applications from ESI investigators. And now we have Round 2, where they are inventing new quota-based affirmative action policies for the second round of funding for these self-same applicants. Note well: the statistical beneficiaries of ESI affirmative action policies are white investigators.

The number of K99 applications from black candidates was 154 over 10 years. 25 of these were funded. To bring this up to the success rate enjoyed by white applicants, the NIH need only have funded 23 more K99s. Across 28 Institutes and Centers. Across 10 years, aka 30 funding cycles. One more per IC per decade to fix the disparity. Fixing the Asian bias would be a little steeper: they'd need to fund another 97; let's round that to 10 per year. Across all 28 ICs.
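For anyone who wants to check the arithmetic, here is the back-of-the-envelope version using only the numbers quoted above (154 applications, 25 awards, the 31% White success rate, 28 ICs); everything else is simple division:

# Back-of-the-envelope check using the numbers quoted in the post.
black_apps, black_funded = 154, 25
white_success_rate = 0.31

current_rate = black_funded / black_apps                 # ~16.2%
target_awards = round(white_success_rate * black_apps)   # ~48 awards at the White success rate
additional_awards = target_awards - black_funded         # ~23 more K99s over the decade

print(f"current success rate: {current_rate:.1%}")
print(f"additional awards needed over 10 years: {additional_awards}")
print(f"per IC per decade (28 ICs): {additional_awards / 28:.2f}")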

Now that they know about this, just as with Ginther, the fix is duck soup. The Director pulls each IC Director aside in a quiet moment and says 'fix this'. That's it. That's all that would be required. And the Directors just commit to pick up one more Asian application every year or so and one more black application every, checks notes, decade and this is fixed.

This is what makes the NIH response to all of this so damn disturbing. It's a rounding error. They pick up grants all the time for reasons way more biased and disturbing than this. Saving a BSD lab that allegedly ran out of funding. Handing out under-the-table Administrative Supplements for gawd knows what random purpose. Prioritizing the F32 applications from some labs over others. Ditto the K99 apps.

They just need to apply their usual set of glad-handing biases to redress this systematic problem with the review and funding of people of color.

And they steadfastly refuse to do so.

For this one specific area of declared Programmatic interest.

When they pick up many, many more grants out of order of review for all their other varied Programmatic interests.

You* have to wonder why.
__
h/t @biochembelle

*and those people you are trying to lure into the pipeline, NIH? They are also wondering why they should join a rigged game like this one.

No responses yet

Zealots

(by drugmonkey) Jul 12 2018

One of my favorite things about this blog, as you know Dear Reader, is the way it exposes me (and you) to the varied perspectives of academic scientists. Scientists that seemingly share a lot of workplace and career commonalities which, on examination, turn out to differ in both expected and unexpected ways. I think we all learn a lot about the conduct of science in the US and worldwide (to a lesser extent) in this process.

Despite numerous pointed discussions about differences of experience and opinion for over a decade now, it still manages to surprise me that so many scientists cannot grasp a simple fact.

The way that you do science, the way the people around you do science and the way you think science should be done are always but one minor variant on a broad, broad distribution of behaviors and habits. Much of this is on clear display from public evidence. The journals that you read. The articles that you read. The ones that you don't but can't possibly miss knowing that they exist. Grant funding agencies. Who gets funded. Universities. Med schools within Universities. Research Institutions or foundations. Your colleagues. Your mentors and trainees. Your grad school drinking buddies. Conference friends and academic society behaviors.

It is really hard to miss. IMO.

And yet.

We still have this species of dumbass on the internet who can't get it through his* thick head that his experiences, opinions and, yes, those of his circle of reflecting room buddies and acolytes, are but a drop in the bucket.

And they almost invariably start bleating on about how their perspective is not only the right way to do things but that some other practice is unethical and immoral. Despite the evidence (again, often quite public evidence) that large swaths of scientists do their work in this totally other, and allegedly unethical, way.

The topic of the week is data leeching, aka the OpenAccessEleventy perspective that every data set you generate in your laboratory should be made available in easily understood, carefully curated format for anyone to download. These leeches then insist that anyone should be free to use these data in any way they choose with barely the slightest acknowledgment of the person who generated the data.

Nobody does this. Right? It's a tiny minority of all academic scientific endeavor that meets this standard at present. Limited in the individuals, limited in the data types and limited in the scope even within most individuals who DO share data in this way. Maybe we are moving to a broader adoption of these practices. Maybe we will see significant advance. But we're not there right now.

Pretending we are, with no apparent recognition of the relative proportions across academic science, verges on the insane. Yes, like literally delusional insanity**.

__
*94.67% male

**I am not a psychiatrist™

49 responses so far

Startup Funds That Expire On Grant Award

(by drugmonkey) Jul 11 2018

From the email bag:

My question is: Should institutions pull back start-up funds from new PIs if R01s or equivalents are obtained before funds are burned? Should there be an expiration date for these funds?

Should? Well, no; in the best of all possible worlds, of course, we would wish PIs to retain all possible sources of support to launch their programs.

I can, however, see the institutional rationale that startup is for just that, starting. And once in the system by getting a grant award, the thinking goes, a PI should be self-sustaining. Like a primed pump.

And those funds would be better spent on starting up the next lab's pump.

The expiration date version is related, and I assume is viewed as an inducement for the PI to go big or go home. To try. Hard. Instead of eking it out forever to support a lab that is technically in operation but not vigorously enough to land additional extramural funding.

Practically speaking, the message from this is to always check the details of a startup package. And if it expires on grant award, or after three years, it becomes important to convert as much of that startup into useful Preliminary Data as possible. Let it prime many pumps.

Thoughts, folks? This person was wondering if this is common. How do your departments handle startup funds?

10 responses so far

Trophy collaborations

(by drugmonkey) Jul 05 2018

Jason Rasgon noted a phenomenon where one is asked to collaborate on a grant proposal but is jettisoned after funding of the award:

I'm sure there are cases where both parties amicably terminate the collaboration, but the interesting case is where the PI or PD sheds another investigator without their assent.

Is this common? I can't remember hearing many cases of this. It has happened to me in a fairly minor way once but then again I have not done a whole lot of subs on other people's grants.

17 responses so far

Your Grant in Review: Scientific Premise

(by drugmonkey) Jul 03 2018

Scientific premise has become the latest headache of uncertainty in NIH grant crafting and review. You can tell because the NIH keeps having to issue clarifications about what it is, and is not. The latest is from Office of Extramural Research honcho Mike Lauer at his blog:

Clarifying what is meant by scientific premise
Scientific premise refers to the rigor of the prior research being cited as key support for the research question(s). For instance, a proposal might note prior studies had inadequate sample sizes. To help both applicants and reviewers describe and assess the rigor of the prior research cited as key support for the proposal, we plan to revise application instructions and review criteria to clarify the language.

Under Significance, the applicant will be asked to describe the strengths and weaknesses in the rigor of the prior research (both published and unpublished) that serves as the key support for the proposed project. Under Approach, the applicant will be asked to describe plans to address weaknesses in the rigor of the prior research that serves as the key support for the proposed project. These revisions are planned for research and mentored career development award applications that come in for the January 25, 2019 due date and beyond. Be on the lookout for guide notices.

My first thought was...great. Fan-friggin-tastic.

You are going to be asked to be more pointed about how the prior research all sucks. No more just saying things about too few studies, variance between related findings, or offering the pablum that it needs more research. Oh no. You are going to have to call papers out for inadequate sample size, poor design, bad interpretation, using the wrong parameters or reagents or, pertinent to a recent twitter discussion, running their behavioral studies in the inactive part of the rodent daily cycle.

Now I don't know about all of y'all, but the study sections that review my grants have a tendency to be populated with authors of papers that I cite. Or by their academic progeny or mentors. Or perhaps their tight science homies that they organize symposia and conferences with. Or at the very least their subfield collective peeps that all use the same flawed methods/approaches.

The SABV requirement has, quite frankly, been bad ENOUGH on this score. I really don't need this extra NIH requirement to be even more pointed about the limitations of prior literature that we propose to set about addressing with more studies.

2 responses so far

Journal Citation Metrics: Bringing the Distributions

(by drugmonkey) Jul 03 2018

The latest Journal Citation Reports has been released, updating us on the latest JIF for our favorite journals. New for this year is....

.....drumroll.......

provision of the distribution of citations per cited item. At least for the 2017 year.

The data ... represent citation activity in 2017 to items published in the journal in the prior two years.

This is awesome! Let's dive right in (click to enlarge the graphs). The JIF, btw, is 5.970.

Oh, now this IS a pretty distribution, is it not? No nasty review articles to muck it up and the "other" category (editorials?) is minimal. One glaring omission is that there doesn't appear to be a bar for 0 citations; surely some articles are not cited. This makes interpretation of the article citation median (in this case 5) a bit tricky. (For one of the distributions that follows, I came up with the missing 0-citation articles constituting anywhere from 17 to 81 items. A big range.)

Still, the skew in the distribution is clear and familiar to anyone who has been around the JIF critic voices for any length of time. Rare highly-cited articles skew just about every JIF upward from what your mind thinks it is, i.e., the median for the journal. Still, no biggie, right? 5 versus 5.970 is not all that meaningful. If your article in this journal from the past two years got 4-6 citations in 2017 you are doing great, right there in the middle.
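If the skew point seems abstract, here is a toy illustration of how a few heavily cited items drag the mean, which is what the JIF reports, above the median. The citation counts below are invented, not pulled from any of these journals:

import statistics

# Invented citation counts for 20 items published in the prior two years.
citations = [0, 0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 7, 8, 9, 12, 30, 45]

print("mean (the JIF-style number):", statistics.mean(citations))       # 7.85, dragged up by the two outliers
print("median (the 'typical' article):", statistics.median(citations))  # 5.0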

Let's check another Journal....

Ugly. Look at all those "Other" items. And the skew from the highly-cited items, including some reviews, is worse. JIF is 11.982 and the article citation median is 7. So among other things, many authors are going to feel like they impostered their way into this journal since a large part of the distribution is going to fall under the JIF. Don't feel bad! Even if you got only 9-11 citations, you are above the median and with 6-8 you are right there in the hunt.

Final entry of the day:

Not too horrible looking, although clearly the review articles contribute a big skew, possibly even more than in the second journal where the reviews are seemingly more evenly distributed in terms of citations. Now, I will admit I am a little surprised that reviews don't do even better compared with primary research articles. It seems like they would get cited more than this (for both of these journals) to me. The article citation median is 4 and the JIF is 6.544, making for a slightly greater gap than the first one, if you are trying to bench race your citations against the "typical" for the journal.

The first takeaway message from these new distributions, viewed along with the JIF, is that you can get a much better idea of how your articles are faring (in your favorite journals; these are just three) compared to the expected value for that journal. Sure, sure, we all knew at some level that the distribution contributing to the JIF was skewed and that the median would be a better number to reflect the colloquial sense of typical, average performance for a journal.

The other takeaway is a bit more negative and self-indulgent. I do it so I'll give you cover for the same.

The fun game is to take a look at the articles that you've had rejected at a given journal (particularly when rejection was on impact grounds) but subsequently published elsewhere. You can take your citations in the "JCR" (aka second) year of the two years after it was published and match that up with the citation distribution of the journal that originally rejected your work. In the past, if you met the JIF number, you could be satisfied they blew it and that your article indeed had impact worthy of their journal. Now you can take it a step farther because you can get a better idea of when your article beat the median. Even if your actual citations are below the JIF of the journal that rejected you, your article may have been one that would have boosted their JIF by beating the median.
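If you want to play that game systematically, the comparison is trivial to script. A minimal sketch; the citation count and journal figures below are placeholders standing in for your own numbers:

# Hypothetical check: did my rejected-then-published-elsewhere article beat the
# rejecting journal's typical (median) article, even if it missed the JIF?
my_citations_in_jcr_year = 9   # placeholder
journal_jif = 11.982           # example figure from the second journal above
journal_median = 7             # ditto

print("beat the JIF:", my_citations_in_jcr_year > journal_jif)        # False
print("beat the median:", my_citations_in_jcr_year > journal_median)  # True
# Below the headline JIF, but still better than their typical article.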

Still with me, fellow axe-grinders?

Every editorial staff I've ever seen talk about journal business in earnest is concerned about raising the JIF. I don't care how humble or soaring the baseline, they all want to improve. And they all want to beat some nearby competitors. Which means that if they have any sense at all, they are concerned about decreasing the uncited dogs and increasing the articles that will be cited in the JCR year above their JIF. Hopefully these staffs also understand that they should be beating their median citation year over year to improve. I'm not holding my breath on that one. But this new publication of distributions (and the associated chit chat around the campfire) may help with that.

Final snark.

I once heard someone concerned with the JIF of a journal insist that they were not "systematically overlooking good papers", meaning, in context, papers that would boost their JIF. The rationale for this was that the manuscripts they had rejected were subsequently published in journals with lower JIFs. This is a fundamental misunderstanding. Of course most articles rejected at one JIF level eventually get published down-market. Of course they do. This has nothing to do with the citations they eventually accumulate. And if anything, the slight downgrade in journal cachet might mean that the actual citations slightly under-represent what would have occurred at the higher JIF journal, had the manuscript been accepted there. If Editorial Boards are worried that they might be letting bigger fish get away, they need to look at the actual citations of their rejects, once published elsewhere. And, back to the story of the day, those actual citations need to be compared with the median for article citations rather than the JIF.

4 responses so far

Light still matters

(by drugmonkey) Jul 02 2018

In the midst of all this hoopla about reliability, repeatability, the replication crisis and what not, the Editorial Board of the Journal of Neuroscience has launched an effort to recommend best practices. The first one was about electrophysiology. To give you a flavor:

There is a long tradition in neurophysiology of using the number of neurons recorded as the sample size (“n”) in statistical calculations. In many cases, the sample of recorded neurons comes from a small number of animals, yet many statistical analyses make the explicit assumption that a sample constitutes independent observations. When multiple neurons are recorded from a single animal, however, either sequentially with a single electrode or simultaneously with multiple electrodes, each neuron's activity may not, in fact, be independent of the others. Thus, it is important for researchers to account for variability across subjects in data analyses.

I emphasize the "long tradition" part because clearly the Editorial Board does not intend this effort merely to nibble around the edges. It is going straight at some long-used practices that they think need to change.
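For readers wondering what "account for variability across subjects" can look like in practice, here is a minimal sketch of one common approach: a mixed-effects model with animal as the grouping factor, so that neurons from the same animal are not treated as independent observations. The data frame and column names are hypothetical, and this is an illustration of the general idea, not the Journal of Neuroscience's prescribed analysis:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: four neurons recorded from each of three animals, two conditions.
df = pd.DataFrame({
    "firing_rate": [5.1, 4.8, 7.2, 6.9, 6.3, 5.5, 8.1, 7.8, 6.2, 6.0, 7.0, 8.4],
    "condition":   ["ctl", "ctl", "drug", "drug"] * 3,
    "animal":      ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
})

# Random intercept per animal; neurons within an animal share that intercept,
# so they are no longer counted as fully independent "n".
model = smf.mixedlm("firing_rate ~ condition", data=df, groups=df["animal"])
result = model.fit()
print(result.summary())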

There was a long and very good twitter thread from someone which dealt in part with unreliability relating to when one chooses to conduct behavioral tasks in rodents with respect to their daily light cycle. As a reminder, rodents are nocturnal and are most active (aka "awake") in the dark. Humans, as a reminder, are not. So, as you might imagine, there is a lot of rodent research (including behavioral research) that fails to grasp this difference and simply runs the rats in their light cycle. Also known as the inactive part of their day. Aka "asleep".

I am being totally honest when I say that the response has been astonishing to me. The pushback!

It was totally surprising that we not only got a lot of "it doesn't matter" responses but also a lot of implication that running in the light is actually better (without quite saying so directly). I'm not going to run down everything, but players include @svmahler, @StephenMaren, @aradprof, @DennisEckmeier, @jdavidjentsch, and @sleepwakeEEG.

There are just too many ludicrous things being said to characterize them all. But one species of argument is "it doesn't matter [for my endpoint]". The last part is implied. But early in this thread I posted a link to my prior post which discusses two of my favorite papers on this topic. Scheving et al, 1968 showed a fourfold difference in mortality rate after a single dose of amphetamine depending on when it was administered. Roberts and colleagues showed that cocaine self-administration changes all across the day in a very nice circadian pattern. I also noted a paper I had discussed very indirectly in a post on contradicting your own stuff. Halberstadt and colleagues (2012) played around with some variables in a very old model from the Geyer lab and found that time of day interacted with other factors to change results in a rat locomotor assay. I mean c'mon, how many thousands of papers use locomotor assays to assess psychomotor stimulant drugs?

There's some downshifting and muttering in the tweet discussion about "well if it doesn't matter who cares" but nobody has started posting published papers showing where light cycle doesn't matter for their assays (as a main factor or as an interaction). Yet. I'm sure it is just an oversight. Interestingly the tone of this seems to be arguing that it is ridiculous to expect people to do their rat assays in reverse light unless it is proven (I guess by someone else?) that it changes results.

This, my friends, is very much front and center in the "reproducibility crisis" that isn't. Let us return to the above comment at J Neuro about "long traditions". Do you know how hard it is to fight long traditions in scientific subareas? Sure you do. Trying to get funding for, or to publish, studies that deal with the seemingly minor choices that have been made for a long time is very difficult. Boring and incremental. Some of these things will come out negative, i.e., it won't matter what light cycle is used. Good luck publishing those! It's no coincidence that the aforementioned Halberstadt paper is published in a very modest journal. So we end up with a somewhat random assortment of some people doing their work in the animals' inactive period and some in the active period. Rarely is there a direct comparison (i.e., within lab). So who knows what the contribution is ... until you try to replicate it yourself. Wasting time and money and adding potential interactions ... very frustrating.

So yes, we would like to know it all, kind of like we'd like to know everything in male and female animals. But we don't. The people getting all angsty over their choice to run rodents in the light next tried the ploy of backing and filling with a "can't we all get along" type of approach that harmonizes with this sentiment. They aren't wrong, exactly. But let us return to the J Neuro Editorial effort on best practices. There IS a best option here, if we are not going to do it all. There's a slope in your choice of default versus checking the other. And for behavioral experiments that are not explicitly looking at sleepy rats or mice, the best option is running them in their active cycle.

There is lots of fun ridiculousness in the thread. I particularly enjoyed the argument that because rats might be exposed briefly to light in the course of trying to do reverse-cycle experiments, we should just default to light cycle running. Right? Like if you walk from the light into a darkened room you suddenly fall asleep? Hell no. And if you are awakened by a light in your face in the middle of the night you are suddenly as awake as if it were broad noon? HAHAHHAHAA! I love it.

Enjoy the threads. Click on those tweeter links above and read the arguments.

__
Roberts DC, Brebner K, Vincler M, Lynch WJ. Patterns of cocaine self-administration in rats produced by various access conditions under a discrete trials procedure. Drug Alcohol Depend. 2002 Aug 1;67(3):291-9. [PubMed]

Scheving LE, Vedral DF, Pauly JE. Daily circadian rhythm in rats to D-amphetamine sulphate: effect of blinding and continuous illumination on the rhythm. Nature. 1968 Aug 10;219(5154):621-2. [PubMed]

9 responses so far

Preprint Ratio

(by drugmonkey) Jun 30 2018

We're approximately one year into the NIH policy that encouraged deposition of manuscripts into preprint servers. My perception is that the number of labs taking the time to do so is steadily increasing.

It is rather slow compared to what I would have expected, going by the grant applications I have reviewed in the past year.

Nevertheless, preprint deposition is getting popular enough that the secondary questions are worth discussing.

How many preprints are too many?

Meaning, is there a ratio of preprints to now-published-in-Journalofology papers that is of concern?

It is sort of like the way I once viewed listing conference abstracts on a CV. It's all good if you can see a natural progression leading up to eventual publication of a paper. If there are a lot of conference presentations that never led to papers, then this seems ... worrisome?

So I've been thinking about how preprints may be similar. If one has a ton of preprints that never ever seem to get published, this may be an indication of certain traits. Bad traits. Inability-to-close type of traits.

So one of the things guiding my preprint behavior is how many my lab has at any given time that have not yet advanced to publication. And maybe there are times when waiting to upload more preprints is advisable.

Thoughts, Dear Reader?

16 responses so far

Faculty Retention: The Rule of 10

(by drugmonkey) Jun 21 2018

There's a thread on faculty retention (or lack thereof, really) on the twitts today:

I know this is a little weird for the segments of my audience that are facing long odds even to land a faculty job and for those junior faculty who are worried about tenure. Why would a relatively secure Professor look for a job in a different University? Well.....reasons.

As is typical, the thread touched on the question of why Universities won't work harder in advance to keep their faculty so delightfully happy that they would never dream of leaving.

Eventually I mentioned my theory of how Administration views retention of their faculty.

I think Administration (and this is not just academics, I think this applies pretty broadly) operates from the suspicion that workers always complain and most will never do anything about it. I think they suppose that for every 10 disgruntled employees, only 5 will even bother to apply elsewhere. Of these maybe three will get serious offers. Ultimately only one will leave*.

So why invest in 10 to keep 1?
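To make that calculus concrete, here is a toy version of the administrator's arithmetic; every dollar figure is invented purely for illustration:

# The cynical administrator's math, with made-up numbers.
disgruntled = 10
will_actually_leave = 1

preemptive_sweetener_per_head = 50_000   # hypothetical cost of keeping one person happy in advance
replacement_cost = 300_000               # hypothetical cost of losing and replacing the one who leaves

keep_all_ten_happy = disgruntled * preemptive_sweetener_per_head   # 500,000
shrug_and_replace = will_actually_leave * replacement_cost         # 300,000

print("retain all 10 in advance:", keep_all_ten_happy)
print("let the 1 walk and replace them:", shrug_and_replace)
# On these invented numbers, letting the one leave looks 'cheaper', which is the
# logic being described here, nuance very much not included.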

This, anyway, is what I see as motivating much of the upper management thinking on what appear to be inexplicably wasteful faculty departures.

Reality is much more nuanced.

I think one of the biggest mistakes being made is that by the time a last-ditch, generally half-arsed retention ploy is attempted, it can be psychologically too late. The departing faculty member is simply too annoyed at the current Uni and too dazzled by the wooing from the new Uni to let any retention offer sway their feelings. The second biggest mistake is that if there is an impression created that "everybody is leaving" and "nobody is being offered reasonable retention", this can spur further attempts to exit the building before the roof caves in.

Yes, I realize some extremely wealthy private Universities all covered in Ivy have the $$ to keep all their people happy all of the time. This is not in any way an interesting case. Most Universities have to be efficient. Spending money on faculty that are going to stay anyway may be a waste, better used elsewhere. Losing too many faculty that you've spent startup costs on is also inefficient.

So how would you strike the right balance if you were Dean at a R1 University solidly in the middle of the pack with respect to resources?
__
*Including by method of bribing one or more of the "serious offers" crowd to stay via the mysteries of the RetentionPackage™

17 responses so far

On PO/PI interactions to steer the grant to the PI's laboratory

(by drugmonkey) Jun 18 2018

There has been a working group of the Advisory Committee to the Director (of NIH, aka Francis Collins) examining the Moderate Alcohol and Cardiovascular Health Trial in the wake of a hullabaloo that broke into public view earlier this year. Background on this from Jocelyn Kaiser at Science, from the NYT, and the WaPo. (I took up the sleazy tactics of the alleged profession of journalism on this issue here.)

The working group's report is available now [pdf].

Page 7 of that report:

There were sustained interactions (from at least 2013) between the eventual Principal Investigator (PI) of the MACH trial and three members of NIAAA leadership prior to, and during development of, FOAs for planning and main grants to fund the MACH trial

These interactions appear to have provided the eventual PI with a competitive advantage not available to other applicants, and effectively steered funding to this investigator

Page 11:

NIH Institutes, Centers, and Offices (ICOs) should ensure that program staff do not inappropriately provide non-public information, or engage in deliberations that either give the appearance of, or provide, an advantage to any single, or subset of, investigator(s)

The NIH should examine additional measures to assiduously avoid providing, or giving the appearance of providing, an advantage to any single, or subset of, investigator(s) (for example, in guiding the scientific substance of preparing grant applications or responding to reviewer comments)

The webcast of the meeting of the ACD on Day 2 covers the relevant territory but is not yet available in archived format. I was hoping to find the part where Collins apparently expressed himself on this topic, as described here.

In the wake of the decision, Collins said NIH officials would examine other industry-NIH ties to make sure proper procedures have been followed, and seek out even “subtle examples of cozy relationships” that might undermine research integrity.

When I saw all of this I could only wonder if Francis Collins is familiar with the RFA process at the NIH.

If you read RFAs and take the trouble to see what gets funded out of them, you come to the firm belief that there are a LOT of "sustained interactions" between the PO(s) pushing the RFA and the PI that is highly desired to be the lucky awardee. The text of the RFAs in and of themselves often "giv(e) the appearance of providing ... an advantage to any single, or subset of, investigator(s)". And they sure as heck provide certain PIs with "a competitive advantage not available to other applicants".

This is the way RFAs work. I am convinced. It is going to take a huge mountain of evidence to the contrary to counter this impression, which can be reinforced by looking at some of the RFAs in your closest fields of interest and seeing who gets funded and for what. If Collins cares to include the failed grant applications from those PIs that led up to the RFA being generated (in some cases), I bet he finds that this also supports the impression.

I really wonder sometimes.

I wonder if NIH officialdom is really this clueless about how their system works.

...or do they just have zero compunction about dissembling when they know full well that these cozy little interactions between PO and favored PI working to define Funding Opportunity Announcements are fairly common?

__
Disclaimer: As always, Dear Reader, I have related experiences. I've competed unsuccessfully on more than one occasion for a targeted FOA where the award went to the very obvious suspect lab. I've also competed successfully for funding on a topic for which I originally sought funding under those targeted FOAs; that takes the sting out. A little. I also suspect I have at least once received grant funding that could fairly be said to be the result of "sustained interactions" between me and Program staff that provided me "a competitive advantage", although I don't know the extent to which this was not available to other PIs.

10 responses so far
