"We're focused on the pipeline" === "We're focused on molding people into white dudes instead of combating our bias" https://t.co/KBYn2MPixr
— Sarah Mei (@sarahmei) June 28, 2015
Search Results for "ginther"
Jun 29 2015
May 24 2015
I had a thought about Ginther just after hearing a radio piece on the Asian-Americans who are suing Harvard over admissions discrimination.
The charge is that Asian-American students need to have better grades and scores than white students to receive an admissions bid.
The discussion of the Ginther study revolved around the finding that African-American applicant PIs were less likely than PIs of other groups to receive NIH grant funding. Asian-American PIs, for example, did about as well as white PIs. Our default stance, I assume, is that being a white PI is as good as it gets. So if another group does as well, this is taken as evidence of a lack of bias.
But what if Asian-American PIs submit higher quality applications as a group?
How would we ever know if there was discrimination against them in NIH grant award?
Jan 15 2014
Jeremy Berg made a comment
If you look at the data in the Ginther report, the biggest difference for African-American applicants is the percentage of "not discussed" applications. For African-Americans, 691/1149 =60.0% of the applications were not discussed whereas for Whites, 23,437/58,124 =40% were not discussed (see supplementary material to the paper). The actual funding curves (funding probability as a function of priority score) are quite similar (Supplementary Figure S1). If applications are not discussed, program has very little ability to make a case for funding, even if this were to be deemed good policy.
that irritated me because it sounds like yet another version of the feigned-helplessness response of the NIH on this topic. It also made me take a look at some numbers and bench-race my proposal that the NIH should, right away, simply pick up enough applications from African-American PIs to equalize success rates. Just as they have so clearly done, historically, for Early Stage Investigators and very likely done for women PIs.
Here's the S1 figure from Ginther et al, 2011:
[In the below analysis I am eyeballing the probabilities for illustration's sake. If I'm off by a point or two this is immaterial to the overall thrust of the argument.]
My knee-jerk response to Berg's comment is that there are plenty of African-American PIs' applications available for pickup. As in, far more than would be required to make up the aggregate success rate discrepancy (which was about 10 percentage points in award probability). So talking about the triage rate is a distraction (but see below for more on that).
There is a risk here of falling into the Privilege-Thinking, i.e. that we cannot possibly countenance any redress of discrimination that, gasp, puts the previously underrepresented group above the well-represented groups even by the smallest smidge. But looking at Supplementary Figure S1 from Ginther, and keeping in mind that the African-American PI application number is only 2% of the White applications, we can figure out that a substantial effect on African-American PIs' award probability would cause only an imperceptible change in that for White PI applications. And there's an amazing sweetener...merit.
Looking at the award probability graph from S1 of Ginther, we note that some 15% of the African-American PIs' applications scoring in the 175 bin (old scoring method, youngsters) were not funded. About 55-56% of applications from all ethnic/racial categories in the next higher (worse) scoring bin were funded. So if Program picks up more of the better-scoring applications from African-American PIs (175 bin) at the expense of the worse-scoring applications of White PIs (200 bin), we have actually ENHANCED MERIT of the total population of funded grants. Right? Win/Win.
So if we were to follow my suggestion, what would be the relative impact? Well thanks to the 2% ratio of African-American to White PI apps, it works like this:
Take the 175 scoring bin in which about 88% of White PIs and 85% of AA PIs were successful. Take a round number of 1,000 apps in that scoring bin (for didactic purposes, also ignoring the other ethnicities) and you get a 980/20 White/African-American PI ratio of apps. In that 175 bin we'd need 3 more African-American PI apps funded to get to 100%. In the next higher (worse) scoring bin (200 score), about 56% of White PI apps were funded. Taking three from this bin and awarding three more AA PI awards in the next better scoring bin would plunge the White PI award probability from 56% to 55.7%. Whoa, belt up cowboy.
Moving down the curve with the same logic, we find in the 200 score bin that there are about 9 AA PI applications needed to bring the 200 score bin to 100%. Looking down to the next worse scoring bin (225) and pulling these 9 apps from White PIs, we end up changing the award probability for these apps from 22% to ...wait for it... about 21.1%.
And so on.
(And actually, the percentage changes would be smaller in reality because there is typically not a flat distribution across these bins and there are very likely more applications in each worse-scoring bin compared to the next better-scoring bin. I assumed 1,000 in each bin for my example.)
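The bin-shift arithmetic above can be sketched numerically. This is a minimal illustration only, assuming (as in the text) a flat 1,000 applications per scoring bin, a 98/2 White/African-American split, and the eyeballed award probabilities from Figure S1:

```python
def shift_effect(aa_award_prob, white_next_bin_prob, bin_size=1000, aa_share=0.02):
    """Fund all AA apps in a bin by pulling awards from White apps
    in the next-worse bin; return (extra awards needed, new White
    award probability in that next-worse bin)."""
    aa_apps = bin_size * aa_share                        # e.g. 20 AA apps per bin
    extra_awards = round(aa_apps * (1 - aa_award_prob))  # AA apps needed to reach 100%
    white_funded = bin_size * white_next_bin_prob        # White awards in next-worse bin
    new_prob = (white_funded - extra_awards) / bin_size
    return extra_awards, new_prob

# 175 bin: AA funded at 85%; pull awards from the 200 bin (White funded at 56%)
print(shift_effect(0.85, 0.56))   # 3 extra awards; 56% drops to ~55.7%

# 200 bin: AA funded at ~56%; pull awards from the 225 bin (White funded at 22%)
print(shift_effect(0.56, 0.22))   # 9 extra awards; 22% drops to ~21.1%
```

Either way, the hit to the White PI award probability in the donor bin is under one percentage point.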
Another way to look at this issue is to take Berg's triage numbers from above. To move to a 40% triage rate for the African-American PI applications, we need to shift 20% (about 230 applications) into the discussed pile. Keeping the total number discussed the same, this represents a whopping 0.4% of the White PI apps being shifted onto the triage pile.
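The triage swap can likewise be checked with Berg's numbers (691/1149 African-American and 23,437/58,124 White applications not discussed); this is a rough sketch using the text's rounded 20% shift:

```python
# Berg's triage numbers from the Ginther supplementary material
aa_apps, aa_triaged = 1149, 691          # ~60% of AA apps not discussed
white_apps, white_triaged = 58124, 23437  # ~40% of White apps not discussed

# Shifting 20% of AA applications into the discussed pile brings the
# AA triage rate down to roughly the White rate (~40%)
to_discuss = round(0.20 * aa_apps)
print(to_discuss)                         # 230 applications

# To keep the total number discussed constant, triage the same number
# of White applications instead; as a share of all White apps:
print(round(100 * to_discuss / white_apps, 1))   # 0.4 (percent)
```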
These are entirely trivial numbers in terms of the "hit" to the chances of White PIs and yet you could easily equalize the success rate or award probability for African-American PIs.
It is even more astounding that this could be done by picking up African-American PI applications that scored better than the White PI applications that would go unfunded to make up the difference.
Tell me how this is not a no-brainer for the NIH?
Jan 13 2014
As you know I am distinctly unimpressed with the NIH's response to the Ginther report which identified a disparity in the success rate of African-American PIs when submitting grant applications to the NIH.
The NIH response (i.e., where they have placed their hard money investment in change) has been to blame pipeline issues. The efforts are directed at getting more African-American trainees into the pipeline and, somehow, training them better. The subtext here is twofold.
First, it argues that the problem is that the existing African-American PIs submitting to the NIH just kinda suck. They are deserving of lower success rates! Clearly. Otherwise, the NIH would not be looking in the direction of getting new ones. Right? Right.
Second, it argues that there is no actual bias in the review of applications. Nothing to see here. No reason to ask about review bias or anything. No reason to ask whether the system needs to be revamped, right now, to lead to better outcome.
A journalist has been poking around a bit. The most interesting bits involve Collins' and Tabak's initial response to Ginther and the current feigned-helplessness tack that is being followed.
Regarding the possibility of bias in its own handling of grant applications, the NIH has taken some initial steps, including giving its top leaders bias-awareness training. But a project promised by the NIH's director, Francis S. Collins, to directly test for bias in the agency's grant-evaluation systems has stalled, with officials stymied by the legal and scientific challenges of crafting such an experiment.
"The design of the studies has proven to be difficult," said Richard K. Nakamura, director of the Center for Scientific Review, the NIH division that handles incoming grant applications.
Hmmm. "difficult", eh? Unlike making scientific advances, hey, that stuff is easy. This, however, just stumps us.
Dr. Collins, in his immediate response to the Ginther study, promised to conduct pilot experiments in which NIH grant-review panels were given identical applications, one using existing protocols and another in which any possible clue to the applicant's race—such as name or academic institution—had been removed.
"The well-described and insidious possibility of unconscious bias must be assessed," Dr. Collins and his deputy, Lawrence A. Tabak, wrote at the time.
Oh yes, I remember this editorial distinctly. It seemed very well-intentioned. Good optics. Did we forget that the head of the NIH is a political appointment with all that that entails? I didn't.
The NIH, however, is still working on the problem, Mr. Nakamura said. It hopes to soon begin taking applications from researchers willing to carry out such a study of possible biases in NIH grant approvals, and the NIH also recently gave Molly Carnes, a professor of medicine, psychiatry, and industrial and systems engineering at the University of Wisconsin at Madison, a grant to conduct her own investigation of the matter, Mr. Nakamura said.
The legal challenges include a requirement that applicants get a full airing of their submission, he said. The scientific challenges include figuring out ways to get an unvarnished assessment from a review panel whose members traditionally expect to know anyone qualified in the field, he said.
What a freaking joke. Applicants have to get a full airing and will have to opt-in, eh? Funny, I don't recall ever being asked to opt-in to any of the non-traditional review mechanisms that the CSR uses. These include phone-only reviews, video-conference reviews and online chat-room reviews. Heck, they don't even so much as disclose that this is what happened to your application! So the idea that it is a "legal" hurdle that is solved by applicants volunteering for their little test is clearly bogus.
Second, the notion that a pilot study would prevent "full airing" is nonsense. I see very few alternatives other than taking the same pool of applications and putting them through regular review as the control condition and then trying to do a bias-decreasing review as the experimental condition. The NIH is perfectly free to use the normal, control review as the official review. See? No difference in the "full airing".
I totally agree it will be scientifically difficult to try to set up PI blind review but hey, since we already have so many geniuses calling for blinded review anyway...this is well worth the effort.
But "blind" review is not the only way to go here. How's about simply mixing up the review panels a bit? Bring in a panel that is heavy in precisely those individuals who have struggled with lower success rates- based on PI characteristics, University characteristics, training characteristics, etc. See if that changes anything. Take a "normal" panel and provide them with extensive instruction on the Ginther data. Etc. Use your imagination people, this is not hard.
Disappointingly, the CHE piece contains not one single bit of investigation into the real question of interest. Why is this any different from any other area of perceived disparity between interests and study section outcome at the NIH? From topic domain to PI characteristics (sex and relative age) to University characteristics (like aggregate NIH funding, geography, Congressional district, University type/rank, etc) the NIH is fully willing to use Program prerogative to redress the imbalance. They do so by funding grants out of order and, sometimes, by setting up funding mechanisms that limit who can compete for the grants.
In the recent case of young/recently transitioned investigators they have trumpeted the disparity loudly, hamfistedly and brazenly "corrected" the study section disparity with special paylines and out of order pickups that amount to an affirmative action quota system [PDF].
All with exceptionally poor descriptions of exactly why they need to do so, save "we're eating out seed corn" and similar platitudes. All without any attempt to address the root problem of why study sections return poorer scores for early stage investigators. All without proving bias, describing the nature of the bias and without clearly demonstrating the feared outcome of any such bias.
"Eating our seed corn" is a nice catch phrase but it is essentially meaningless. Especially when there are always more freshly trained PHD scientist eager and ready to step up. Why would we care if a generation is "lost" to science? The existing greybeards can always be replaced by whatever fresh faces are immediately available, after all. And there was very little crying about the "lost" GenerationX scientists, remember. Actually, none, outside of GenerationX itself.
The point being, the NIH did not wait for overwhelming proof of nefarious bias. They just acted very directly to put a quota system in place. Although, as we've seen in recent data, this has slipped a bit in the past two Fiscal Years, the point remains.
Why, you might ask yourself, are they not doing the same in response to Ginther?
Dec 31 2013
In case my comment never makes it out of moderation at RockTalk....
Interesting to contrast your Big Data and BRAINI approaches with your one for diversity. Try switching those around..."establish a forum...blah, blah...in partnership...blah, blah...to engage" in Big Data. Can't you hear the outraged howling about what a joke of an effort that would be? It is embarrassing that the NIH has chosen to kick the can down the road and hide behind fake-helplessness when it comes to enhancing diversity. In the case of BRAINI, Big Data and yes, discrimination against a particular class of PI applicants (the young), the NIH fixes things with hard money: awards for research projects. Why does it draw back when it comes to fixing the inequality of grant awards identified in Ginther?
When you face up to the reasons why you are in full cry and issuing real, R01 NGA solutions for the dismal plight of ESIs and doing nothing similar for underrepresented PIs then you will understand why the Ginther report found what it did.
ESIs continue, at least six years on, to benefit from payline breaks and pickups. You trumpet this behavior as a wonderful thing. Why are you not doing the same to redress the discrimination against underrepresented PIs? How is it different?
The Ginther bombshell dropped in August of 2011. There has been plenty of time to put in real, effective fixes. The numbers are such that the NIH would have had to fund mere handfuls of new grants to ensure success rate parity. And they could still do all the can-kicking, ineffectual hand waving stuff as well.
And what about you, o transitioning scientists complaining about an "unfair" NIH system stacked against the young? Is your complaint really about fairness? Or is it really about your own personal success?
If it is a principled stand, you should be name dropping Ginther as often as you do the fabled "42 years before first R01" stat.
Jan 28 2013
As noted recently by Bashir, the NIH response to the Ginther report contrasts with their response to certain other issues of grant disparity:
I want to contrast this with NIH actions regarding other issues. In that same blog post I linked there is also discussion of the ongoing early career investigator issues. Here is a selection of some of the actions directed towards that problem.
NIH plans to increase the funding of awards that encourage independence like the K99/R00 and early independence awards, and increase the initial postdoctoral researcher stipend.
In the past NIH has also taken actions in modifying how grants are awarded. The whole Early Stage Investigator designation is part of that. Grant pickups, etc.
I don't want to get all Kanye ("NIH doesn't care about black researchers"), but priorities, be they individual or institutional, really come through not in talk but in actions. Now, I don't have any special knowledge about the source or solution to the racial disparity. But the NIH response here seems more along the lines of adequate than overwhelming.
In writing another post, I ran across this 2002 bit in Science. This part stands out:
It's not because the peer-review system is biased against younger people, Tilghman argues. When her NRC panel looked into this, she says, “we could find no data at all [supporting the idea] that young people are being discriminated against.”
Although I might take issue with what data they chose to examine and the difficulty of proving "discrimination" in a subjective process like grant review, the point at hand is larger. The NIH had a panel which could find no evidence of discrimination and they nevertheless went straight to work picking up New Investigator grants out of the order of review to guarantee an equal outcome!
Interesting, this is.
Jan 16 2013
Some recommendations are offered by Bashir.
Even if these recommendations were enacted tomorrow, and worked exactly as hoped, the gains would be slow and marginal. #1 seem to more address the problem of under representation.
Go play over there.
Feb 07 2018
Currently 20% of researchers perform 75-90% of reviews, which is an unreasonable and unsustainable burden.
The stat is referencing manuscript / journal peer review and not the NIH grant review system but I started thinking about NIH grant review anyway. Part of this is because I recently had to re-explain one of my key beliefs about a major limitation of the NIH grant review system to someone who should know better.
NIH Grant review is an inherently conservative process.
The reason is that the vast majority of reviews of the merit of grant applications are provided by individuals who already have been chosen to serve as Principal Investigators of one or more NIH grant awards. They have had grant proposals selected as meritorious by the prior bunch of reviewers and are now contributing strongly to the decision about the next set of proposals that will be funded.
The system is biased to select for grant applications written in a way that looks promising to people who have either been selected for writing grants in the same old way or who have been beaten into writing grants that look the same old way.
Like tends to beget like in this system. What is seen as meritorious today is likely to be very similar to what has been viewed as meritorious in the past.
This is further amplified by the social dynamics of a person who is newly asked to review grants. Most of us are very sensitive to being inexperienced, very sensitive to wanting to do a good job and feel almost entirely at sea about the process when first asked to review NIH grants. Even if we have managed to stack up 5 or 10 reviews of our proposals from that exact same study section prior to being asked to serve. This means that new reviewers are shaped even more by the culture, expectations and processes of the existing panel, which is staffed with many experienced reviewers.
So what about those experienced reviewers? And what about the number of grant applications that they review during their assigned term of service: 4 years (3 cycles per year, please) or 6 years (2 of 3 cycles per year)? With about 6-10 applications to review per round this could easily mean highly influential (read: one of the three primary assigned reviewers) review of 100 applications. The person has additional general influence in the panel as well, both through direct input on grants under discussion and on the general tenor and tone of the panel.
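As a rough sanity check on that load, here is the back-of-envelope arithmetic under the term structures described above (the 8-apps-per-round figure is an assumed midpoint of the 6-10 range):

```python
def total_reviews(years, rounds_per_year, apps_per_round):
    """Total applications reviewed over an empaneled term of service."""
    return years * rounds_per_year * apps_per_round

# 4-year term, serving all 3 cycles per year
print(total_reviews(4, 3, 8))   # 96 applications
# 6-year term, serving 2 of 3 cycles per year
print(total_reviews(6, 2, 8))   # 96 applications
```

Either term structure lands right around the 100-application figure in the text, before counting any extra SEP or ad hoc service.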
When I was placed on a study section panel for a term of service I thought the SRO told us that empaneled reviewers were not supposed to be asked for extra review duties on SEPs or as ad hoc on other panels by the rest of the SRO pool. My colleagues over the years have disabused me of the idea that this was anything more than aspirational talk from this SRO. So many empaneled reviewers are also contributing to review beyond their home review panel.
My question of the day is whether this is a good idea and whether there are ethical implications for those of us who are asked* to review NIH grants.
We all think we are great evaluators of science proposals, of course. We know best. So of course it is all right, fair and good when we choose to accept a request to review. We are virtuously helping out the system!
At what point are we contributing unduly to the inherent conservativeness of the system? We all have biases. Some are about irrelevant characteristics like the ethnicity** of the PI. Some are considered more acceptable and are about our preferences for certain areas of research, models, approaches, styles, etc. Regardless, these biases are influencing our review. Our review. And one of the best ways to counter bias is competition among differing biases. I.e., let someone else's bias into the mix for a change, eh buddy?
I don't have a real position on this yet. After my term of empaneled service, I accepted or rejected requests to review based on my willingness to do the work and my interest in a topic or mechanism (read: SEPs FTW). I've mostly kept it pretty minimal. However, I recently messed up because I had a cascade of requests last fall that sucked me in: a "normal" panel (ok, ok, I haven't done my duty in a while), followed by a topic SEP (ok, ok, I am one of a limited pool of experts, I'll do it) and then a RequestThatYouDon'tRefuse. So I've been doing more grant review lately than I have usually done in recent years. And I'm thinking about scope of influence on the grants that get funded.
At some point is it even ethical to keep reviewing so damn much***? Should anyone agree to serve successive 4 or 6 year terms as an empaneled reviewer? Should one say yes to every SRO request that comes along? They are going to keep asking so it is up to us to say no. And maybe to recommend the SRO ask some other person who is not on their radar?
*There are factors which enhance the SRO pool picking on the same old reviewers, btw. There's a sort of expectation that if you have review experience you might be okay at it. I don't know how much SROs talk to each other about prospective reviewers and their experience with the same, but there must be some chit chat. "Hey, try Dr. Schmoo, she's a great reviewer" versus "Oh, no, do not ever ask Dr. Schnortwax, he's toxic". There are also the diversity rules that they have to follow: there must be diversity with respect to the geographic distribution, gender, race and ethnicity of the membership. So people who help the SROs' diversity stats might be picked more often than some other people who are straight white males from the most densely packed research areas in the country working on the most common research topics using the most usual models and approaches.
***No idea what this threshold should be, btw. But I think there is one.
Jun 18 2016
That extensive quote from a black PI who had participated in the ECR program is sticking with me.
Insider status isn't binary, of course. It is very fluid within the grant-funded science game. There are various spectra along multiple dimensions.
But make no mistake it is real. And Insider status is advantageous. It can be make-or-break crucial to a career at many stages.
I'm thinking about the benefits of being a full reviewer with occasional/repeated ad hoc status or full membership.
One of those benefits is that other reviewers in SEPs or closely related panels are less likely to mess with you.
It isn't any sort of quid pro quo guarantee. Of course not. But I guarantee that a reviewer who thinks this PI might be reviewing her own proposal in the near future has a bias. A review cant. An alerting response. Whatever.
It is different. And, I would submit, generally to the favor of the applicant that possesses this Mutually Assured Destruction power.
The Ginther finding arose from a thousand cuts, I argue. This is possibly one of them.
Jun 17 2016
If I stroke out today it is all the fault of MorganPhD.
Jeffery Mervis continues with coverage of the NIH review situation as it pertains to the disparity for African-American PIs identified in 2011 (that's five years and fifteen funding rounds ago, folks) by the Ginther report.
The main focus for this week is on the Early Career Reviewer program. As you will recall, this blog has advocated continually and consistently for the participation of more junior PIs on grant review panels.
The ECR program was created explicitly to deal with underrepresented groups. However, there was immediate opposition insisting that the ECR program be open to all junior faculty/applicants, regardless of representation in the NIH game.
One-quarter of researchers in ECR's first cohort were from minority groups, he notes. “But as we've gone along, there are fewer underrepresented minorities coming into the pool.”
Minorities comprise only 13% of the roughly 5100 researchers accepted into the program (6% African-American and 7% Hispanic), a percentage that roughly matches their current representation on study sections.
Ok, but how have the ECR participants fared?
[Nakamura] said ECR alumni have been more than twice as successful as the typical new investigator in winning an R01 grant.
NIIIIIICE. Except they didn't flog the data as hard as one might hope. This is against the entire NI (or ESI?) population.
The pool of successful ECR alumni includes those who revised their application, sometimes more than once, after getting feedback on a declined proposal. That extra step greatly improves the odds of winning a grant. In contrast, the researchers in the comparison group hadn't gone through the resubmission process.
Not sure if this really means "hadn't" or "hadn't necessarily". The latter makes more sense if they are just comparing to aggregate stats. CSR data miners would have had to work harder to get this isolated to those who hadn't revised yet, and I suspect if they had gone to that effort, they could have presented the ESIs who had at least one revision under their belt. But what about the underrepresented group of PIs that are the focus of all this effort?
It's also hard to interpret the fact that 18% of the successful ECRs were underrepresented minorities because NIH did not report the fraction of minorities among ECR alumni applicants. So it is not clear whether African-Americans participating in the program did any better than the cohort as a whole—suggesting that the program might begin to close the racial gap—or better than a comparable group of minority scientists who were not ECR alumni.
SERIOUSLY Richard Nakamura? You just didn't happen to request your data miners do the most important analysis? How is this even possible?
How on earth can you not be keeping track of applicants to ECR, direct requests from SROs, response rate and subsequent grant and reviewing behavior? It is almost as if you want to look like you are doing something but have no interest in it being informative or in generating actionable intelligence.
Moving along, we get a further insight into Richard Nakamura and his position in this situation.
Nakamura worries that asking minority scientists to play a bigger role in NIH's grantsmaking process could distract them from building up their lab, finding stable funding, and earning tenure. Serving on a study section, he says, means that “those individuals will have less time to write applications. So we need to strike the right balance.”
Paternalistic nonsense. The same thing that Scarpa tried to use to justify his purge of Assistant Professors from study sections. My answer is the same. Let them decide. For themselves. Assistant Professors and underrepresented PIs can decide for themselves if they are ready and able to take up a review opportunity when asked. Don't decide, paternalistically, that you know best and will refrain from asking for their own good, Director Nakamura!
Fascinatingly, Mervis secured an opinion that echoes this. So Nakamura will surely be reading it:
Riggs, the only African-American in his department, thinks the program is too brief to help minority scientists truly become part of the mainstream, and may even exacerbate their sense of being marginalized.
“After I sat on the panel, I realized there was a real network that exists, and I wasn't part of that network,” he says. “My comments as a reviewer weren't taken as seriously. And the people who serve on these panels get really nervous about having people … that they don't know, or who they think are not qualified, or who are not part of the establishment.”
If NIH “wants this to be real,” Riggs suggests having early-career researchers “serve as an ECR and then call them back in 2 years and have them serve a full cycle. I would have loved to do that.”
The person in the best position to decide what is good or bad for his or her career is the investigator themself.
This comment also speaks to my objection to the ECR as a baby-intro version of peer review. It isn't necessary. I first participated on study section in my Asst Prof years as a regular ad hoc with a load of about six grants, iirc. Might have been two fewer than the experienced folks had, but it was not a baby-trainee experience in the least. I was treated as a new reviewer, but that was about the extent of it. I thought I was taken seriously and did not feel patronized.
Toni Scarpa to leave CSR