Archive for the 'Uncategorized' category

NIH Director Collins and CSR Director Nakamura continue to kick the funding disparity can down the road

A News piece in Science by Jeffrey Mervis details the latest attempt of the NIH to kick the Ginther can down the road.

Armed with new data showing black applicants suffer a 35% lower chance of having a grant proposal funded than their white counterparts, NIH officials are gearing up to test whether reviewers in its study sections give lower scores to proposals from African-American applicants. They say it’s one of several possible explanations for a disparity in success rates first documented in a 2011 report by a team led by economist Donna Ginther of the University of Kansas, Lawrence.

Huh. 35%? I thought Ginther estimated more like a 13 percentage point difference? Oh wait. That's the award probability gap: about 16% for African-American applicants versus 29% for white applicants, which works out to roughly a 45% lower chance. And this shows "78-90% the rate of white...applicants". And there was Nakamura, quoted in another piece in Science:

At NIH, African-American researchers "receive awards at 55% to 60% the rate of white applicants," Nakamura said. "That's a huge disparity that we have not yet been able to seriously budge," despite special mentoring and networking programs, as well as an effort to boost the number of scientists from underrepresented minorities who evaluate proposals.

Difference vs rate vs lower chance.... Ugh. My head hurts. Any way you spin it, African-American applicants are screwed. Substantially so.

Back to the Mervis piece for some factoids.

Ginther... noted... black researchers are more likely to have their applications for an R01 grant—the bread-and-butter NIH award that sustains academic labs—thrown out without any discussion... black scientists are less likely to resubmit a revised proposal... whites submit at a higher rate than blacks...

So, what is CSR doing about it now? OK HOLD UP. LET ME REMIND YOU IT IS FIVE YEARS LATER. FIFTEEN FUNDING ROUNDS POST-GINTHER. Ahem.

The bias study would draw from a pool of recently rejected grant applications that have been anonymized to remove any hint of the applicant’s race, home institution, and training. Reviewers would be asked to score them on a one-to-nine scale using NIH’s normal rating system.

It's a start. Of course, this is unlikely to find anything. Why? Because the bias at grant review is a bias of identity. It isn't that reviewers are biased against black applicants, necessarily. It is that they are biased for white applicants. Or at the very least they are biased in favor of a category of PI ("established, very important") that just so happens to be disproportionately white. Also, there was an interesting simulation by Eugene Day showing that a bias smaller than the non-biased variability in a measurement can still have large effects on something like a grant funding system [JournalLink].
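Here's a minimal sketch of the kind of simulation Day describes, as I understand it (my own toy construction, not his model; the noise level, bias size, and payline are illustrative assumptions):

```python
import random

random.seed(1)

def funded_fraction(bias, n=100_000, noise_sd=1.0, payline=0.15):
    """Fraction of a group's applications clearing a hard payline when
    reviewers add random noise plus a small bias against that group."""
    # Majority-group scores set the payline cutoff; true merit is identical.
    majority = sorted(random.gauss(0, noise_sd) for _ in range(n))
    cutoff = majority[int(n * (1 - payline))]
    # Minority-group scores: same merit, same noise, minus a small bias.
    minority = (random.gauss(0, noise_sd) - bias for _ in range(n))
    return sum(score > cutoff for score in minority) / n

print(funded_fraction(bias=0.0))  # ~0.15, the baseline payline
print(funded_fraction(bias=0.1))  # bias is only 10% of the review noise SD,
                                  # yet the funded fraction drops to ~0.13
```

The point is the tail: at a hard payline, a bias one-tenth the size of the review noise knocks roughly 15% (in relative terms) off the disadvantaged group's funding rate.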

Ok, so what else are they doing?

NIH continues to wrestle with the implications of the Ginther report. In 2014, in the first round of what NIH Director Francis Collins touted as a 10-year, $500 million initiative to increase the diversity of the scientific workforce, NIH gave out 5-year, $25 million awards to 10 institutions that enroll large numbers of minority students and created a national research mentoring network.

As you know, I am not a fan of these pipeline-enhancing responses. They say, in essence, that the current population of black applicant PIs is the problem. That they are inferior and deserve to get worse scores at peer review. Because what else does it mean to say the big money response of the NIH is to drum up more black PIs in the future by loading up the trainee cannon now?

This is Exhibit A in the case that the NIH officialdom simply cannot admit that there might be unfair biases at play that caused the disparity identified in Ginther and reinforced by the other analyses mentioned. They are bound and determined to prove that their system is working fine, nothing to see here.

So... what else?

A second intervention starting later this year will tap that fledgling mentoring network to tutor two dozen minority scientists whose R01 applications were recently rejected. The goal of the intervention, which will last several months, is to prepare the scientists to have greater success on their next application. A third intervention will educate minority scientists on the importance of resubmitting a rejected proposal, because resubmitted proposals are three times more likely to be funded than a de novo application from a researcher who has never been funded by NIH.

Oh ff..... More of the same. Fix the victims.

Ah, here we go. Mervis finally gets around to explaining that 35% number:

NIH officials recently updated the Ginther study, which examined a 2000–2006 cohort of applicants, and found that the racial disparity persists. The 35% lower chance of being funded comes from tracking the success rates of 1054 matched pairs of white and black applicants from 2008 to 2014. Black applicants continue to do less well at each stage of the process.

I wonder if they will be publishing that anywhere we can see it?

But here's the kicker. Even faced with the clear evidence from their own studies, the highest honchos still can't see it.

One issue that hung in the air was whether any of the disparity was self-inflicted. Specifically, council members and NIH officials pondered the tendency of African-American researchers to favor certain research areas, such as health disparities, women’s health, or hypertension and diabetes among minority populations, and wondered whether study sections might view the research questions in those areas as less compelling. Valantine called it a propensity “to work on issues that resonate with their core values.” At the same time, she said the data show minorities also do less well in competition with their white peers in those fields.

Collins offered another possibility. “I’ve heard stories that they might have been mentored to go into those areas as a better way to win funding,” he said. “The question is, to what extent is it their intrinsic interest in a topic, and to what extent have they been encouraged to go in that direction?”

Look, the Ginther report included a huge host of covariate analyses conducted to try to make the disparity go away. Now they've done a study with matched pairs of investigators. Valantine's quote may refer to this or to some other analysis, I don't know, but obviously the data are there. And Collins is STILL throwing up blame-the-victim chaff.

Dude, I have to say, this kind of denialist / crank behavior has a certain stench to it. The data are very clear and very consistent. There is a funding disparity.

This is a great time to remind everyone that the last time a major funding disparity came to the attention of the NIH, it was the fate of the early career investigators. The NIH invented the ESI designation, to distinguish these applicants from the well-established New Investigator population, and immediately started picking up grants out of the order of review, establishing special quotas and paylines to redress the disparity. There was no talk of "real causes". There was no talk of strengthening the pipeline with better trainees so that one day, far off, they could magically compete better with the established. Oh no. They just picked up grants. And a LOT of them.

I wonder what it would take to fix the African-American PI disparity...

Ironically, because the pool of black applicants is so small, it wouldn’t take much to eliminate the disparity: Only 23 more R01 applications from black researchers would need to be funded each year to bring them to parity.

Are you KIDDING me? That's it?????

Oh right. I already figured this one out for them. And I didn't even have the real numbers.

In that 175 priority score bin we'd need 3 more African-American PI apps funded to get to 100%. In the next higher (worse) scoring bin (200), about 56% of white PI apps were funded. Taking three from this bin and awarding three more AA PI apps in the next better scoring bin would plunge the white PI award probability from 56% to 55.7%. Whoa, belt up cowboy.

Moving down the curve with the same logic, the 200 bin needs about 9 more AA PI applications funded to get to 100%. Pulling those 9 apps from white PIs in the next worse scoring bin (225) changes the award probability for that bin from 22% to... wait for it... 20.8%.
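If you want to check that arithmetic, here's a minimal sketch (the bin sizes are my own hypothetical numbers, chosen only so that the percentages quoted above fall out; the real application counts aren't given here):

```python
def donor_bin_award_prob(n_apps, award_prob, n_moved):
    """Award probability in the donor bin after moving n_moved awards
    over to applications in the next better-scoring bin."""
    funded = n_apps * award_prob
    return (funded - n_moved) / n_apps

# Hypothetical bin sizes chosen to reproduce the numbers above:
print(donor_bin_award_prob(1000, 0.56, 3))  # 0.557 -> the 56% bin barely moves
print(donor_bin_award_prob(750, 0.22, 9))   # 0.208 -> the 22% bin drops to ~20.8%
```

Either way you slice it, the donor bins give up at most about a percentage point of award probability.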

Mere handfuls. I had probably overestimated how many black PIs were seeking funding. If this Mervis piece is to be trusted, it would take only 23 pickups across the entire NIH to fix the problem....

I DON'T UNDERSTAND WHAT FRANCIS COLLINS' PROBLEM IS.

Twenty-three grants is practically a rounding error. This is going to shake out to one or maybe three grants per year per IC, depending on size and whatnot.

Heck, I bet they fund this many grants every year by mistake. It's a big system. You think they don't have a few whoopsies sneak by every now and again? Of course they do.

But god forbid they should pick up 23 measly R01s to fix the funding disparity.

37 responses so far

Academia is a fantastical dream weaver and you need to wake up

Jun 08 2016 Published by under Uncategorized

Higher education in the US weaves, for many students, a fantastical dream. 

You can do what you want and people will pay you for it!

Any intellectual pursuit that interests your young brain will end up as a paying career! 

This explains why there are so many English majors who can't get jobs upon graduation. I know, an easy target. Also see Comm majors. 

But we academic scientists are the absolute worst at this.

It results in a pool of postdoctoral PhD scientists who are morally outraged to find out the world doesn't actually work that way.

Yes. High JIF pubs and copious grant funding are viewed as more important than excellent teaching reviews and six-sigma chili peppers or wtfever.
In another context, yeah, maybe translational research is a tiny bit easier to fund than your obsession with esoteric basic research questions. 

29 responses so far

Outcome of SABV initiative

Jun 08 2016 Published by under Uncategorized

I was just thinking the coming 9 months should reveal a steady trickle of one- to two-figure stabs at sex-differences comparisons.

I'm predicting that some of the people who generated their first sex-comparison studies as Preliminary Data, to head off SABV (sex as a biological variable) critiques in grant review, are going to publish them.

Yes, even if the results were negative. They'll need this in the next round to excuse the failure to include female animals in the next proposals.

9 responses so far

Your Grant in Review: Scientific Premise

May 28 2016 Published by under Uncategorized

I am starting to suspect that the Scientific Premise review item will finally communicate overall excitement/boredom to the applicant. This will be something to attend to closely when deciding whether to revise an application or just start over.

16 responses so far

May 4th

May 04 2016 Published by under Uncategorized

4 responses so far

Labor

Apr 22 2016 Published by under Uncategorized

If you have a laboratory with one postdoc, one grad student, and on average two undergrad volunteers most of the time, you don't run a two-person lab. You run a four-person lab.

Reflexively appealing to how they have to be trained, as a ploy to pretend you aren't using their labor, is nonsense.

119 responses so far

Ridiculous shit I actually say

Apr 15 2016 Published by under Uncategorized

HAHAHHHA. I am so full of myself today.
I actually said this:

It's like cult rescue though. You don't try to rehab the head, you try to get the innocents out b4 the FlavorAde is poured

(Yes, it was a discussion of Glamour culture of science. As if you couldn't guess.)

5 responses so far

Representative Images

Apr 15 2016 Published by under Uncategorized

New rule: Claims of a "representative" image should have to be supported by submission of 2 better ones that were not included.

It works like this.

Line up your 9 images that were quantified for the real analysis of the outcome, in the order by which they appear to follow your desired interpretation of the mean effect.

Your "representative" image is #5. So you should have to prove your claim to have presented a representative image in peer review by providing #8 and #9.

My prediction is that the population of published image data would get a lot uglier, less "clear" and would more accurately reflect reality.

55 responses so far

SCOTUS

Feb 24 2016 Published by under Uncategorized

In the interest of eliciting the most hilarious hypocritical illogic from the Republicans, Obama should nominate .....?

My nominee is Senator Lindsey Graham.

29 responses so far

Repost: If you are going to talk about "tiers", then you'd better own that

We have been talking about the scientific journal ecosphere in the context of Michael Eisen's push to get more biomedical scientists to use pre-print servers to publicize their work prior to publication in a traditional journal. This push, recently aided and abetted by Leslie Vosshall, exposes a deep divide in the understanding of the broad scope of science. It is my view that part of the reason the elite (both are HHMI-funded investigators; eliteness gets no better in the US) have trouble understanding the points made by us riffraff is that they don't grasp the following. The elite are working at the first tier level. For them, the second tier is a function not of their science but of the competition for limited resources. Any farther down the chain and it is all the same to them - they really have no understanding of how life works for those who operate in the tiers below.

This post originally appeared on the blog on 11 Feb 2013.


[Image: seven-tier cake]

Occasionally during the review of careers or grant applications you will see dismissive comments on the journals in which someone has published their work. This is not news to you. Terms like "low-impact journals" are wonderfully imprecise and yet deliciously mean. Yes, it reflects the fact that the reviewer himself couldn't be bothered to actually review the science IN those papers, nor to acquaint himself with the notorious skew of real-world impact that exists within and across journals.

More hilarious to me is the use of the word "tier". As in "The work from the prior interval of support was mostly published in second tier journals...".

It is almost always second tier that is used.

But this is never correct in my experience.

If we're talking Impact Factor (and these people are, believe it) then there is a "first" tier of journals populated by Cell, Nature and Science.

In the Neurosciences, the next tier is a place (IF in the teens) in which Nature Neuroscience and Neuron dominate. No question. THIS is the "second tier".

A jump down to the IF 12 or so of PNAS most definitely represents a different "tier" - the third - if you are going to talk about meaningful differences/similarities in IF.

Then we step down to the circa IF 7-8 range populated by J Neuroscience, Neuropsychopharmacology and Biological Psychiatry. Demonstrably fourth tier.

So for the most part when people are talking about "second tier journals" they are probably down at the FIFTH tier - IF 4-6, in my estimation.

I also argue that the run-of-the-mill society-level journals extend below this fifth tier into a "rest of the pack" zone, in which there is a meaningful perception difference from the fifth tier. So.... six tiers.

Then we have the paper-bagger dump journals. Demonstrably a seventh tier. (And seven is such a nice number, isn't it?)

So there you have it. If you* are going to use "tier" to sneer at the journals in which someone publishes, for goodness sake do it right, will ya?

___
*Of course it is people** who publish frequently in the third and fourth tiers, and only rarely in the second, who use "second tier journal" to refer to what is in the fifth or sixth tier of IFs. Always.

**For those rare few that publish extensively in the first tier, hey, you feel free to describe all the rest as "second tier". Go nuts.

11 responses so far
