"We're focused on the pipeline" === "We're focused on molding people into white dudes instead of combating our bias" https://t.co/KBYn2MPixr
— Sarah Mei (@sarahmei) June 28, 2015
Search Results for "Ginther"
Jun 29 2015
May 24 2015
I had a thought about Ginther just after hearing a radio piece on the Asian-Americans who are suing Harvard over entrance discrimination.
The charge is that Asian-American students need to have better grades and scores than white students to receive an admissions bid.
The discussion of the Ginther study revolved around the finding that African-American applicant PIs were less likely than PIs of other groups to receive NIH grant funding. That framing arose because other groups, Asian-Americans for example, did about as well as white PIs. Our default stance, I assume, is that being a white PI is the best that it gets. So if another group does as well, this is taken as evidence of a lack of bias.
But what if Asian-American PIs submit higher quality applications as a group?
How would we ever know if there was discrimination against them in NIH grant awards?
Jan 15 2014
Jeremy Berg made a comment
If you look at the data in the Ginther report, the biggest difference for African-American applicants is the percentage of "not discussed" applications. For African-Americans, 691/1149 =60.0% of the applications were not discussed whereas for Whites, 23,437/58,124 =40% were not discussed (see supplementary material to the paper). The actual funding curves (funding probability as a function of priority score) are quite similar (Supplementary Figure S1). If applications are not discussed, program has very little ability to make a case for funding, even if this were to be deemed good policy.
that irritated me because it sounds like yet another version of the feigned-helpless response of the NIH on this topic. It also made me take a look at some numbers and bench race my proposal that the NIH should, right away, simply pick up enough applications from African-American PIs to equalize success rates. Just as they have so clearly done, historically, for Early Stage Investigators and very likely done for women PIs.
Here's the S1 figure from Ginther et al, 2011:
[In the below analysis I am eyeballing the probabilities for illustration's sake. If I'm off by a point or two this is immaterial to the overall thrust of the argument.]
My knee-jerk response to Berg's comment is that there are plenty of African-American PIs' applications available for pickup. As in, far more than would be required to make up the aggregate success rate discrepancy (which was about 10% in award probability). So talking about the triage rate is a distraction (but see below for more on that).
There is a risk here of falling into Privilege-Thinking, i.e., that we cannot possibly countenance any redress of discrimination that, gasp, puts the previously underrepresented group above the well represented groups even by the smallest smidge. But looking at Supplementary Fig 1 from Ginther, and keeping in mind that the African-American PI application number is only 2% of the White applications, we can figure out that a substantial effect on African-American PIs' award probability would cause only an imperceptible change in that for White PI applications. And there's an amazing sweetener... merit.
Looking at the award probability graph from S1 of Ginther, we note that some 15% of the African-American PIs' grants scoring in the 175 bin (old scoring method, youngsters) went unfunded. About 55-56% of all ethnic/racial category grants in the next higher (worse) scoring bin were funded. So if Program picks up more of the better scoring applications from African-American PIs (175 bin) at the expense of the worse scoring applications of White PIs (200 bin), we have actually ENHANCED the MERIT of the total population of funded grants. Right? Win/Win.
So if we were to follow my suggestion, what would be the relative impact? Well thanks to the 2% ratio of African-American to White PI apps, it works like this:
Take the 175 scoring bin, in which about 88% of White PI and 85% of AA PI apps were successful. Take a round number of 1,000 apps in that scoring bin (for didactic purposes, also ignoring the other ethnicities) and you get a 980/20 White/African-American PI ratio of apps. In that 175 bin we'd need 3 more African-American PI apps funded to get to 100%. In the next higher (worse) scoring bin (200 score), about 56% of White PI apps were funded. Taking three from this bin and awarding three more AA PI awards in the next better scoring bin would plunge the White PI award probability from 56% to 55.7%. Whoa, belt up cowboy.
Moving down the curve with the same logic, we find in the 200 score bin that there are about 9 AA PI applications needed to put the 200 score bin at 100%. Looking down to the next worse scoring bin (225) and pulling these 9 apps from White PIs, we end up changing the award probability for these apps from 22% to... wait for it... 20.8%.
And so on.
(And actually, the percentage changes would be smaller in reality because there is typically not a flat distribution across these bins and there are very likely more applications in each worse-scoring bin compared to the next better-scoring bin. I assumed 1,000 in each bin for my example.)
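For anyone who wants to check the arithmetic, here is a minimal sketch of the bin-swap calculation. The 1,000-apps-per-bin figure and the 98/2 White/AA split are the same illustrative assumptions used above, not real Ginther counts:

```python
# Back-of-the-envelope sketch of the bin-swap arithmetic.
# Assumptions (illustrative only, matching the post's eyeballed numbers):
# 1,000 applications per scoring bin, with a 98/2 White/African-American split.

WHITE_APPS = 980  # 98% of a 1,000-app bin
AA_APPS = 20      # 2% of a 1,000-app bin

def swap_effect(aa_funded_rate, white_next_bin_rate):
    """Fund the remaining AA apps in this bin by pulling the same
    number of awards from White apps in the next (worse-scoring) bin.
    Returns (extra AA awards, new White award rate in the worse bin)."""
    aa_unfunded = round(AA_APPS * (1 - aa_funded_rate))
    white_funded = WHITE_APPS * white_next_bin_rate
    new_white_rate = (white_funded - aa_unfunded) / WHITE_APPS
    return aa_unfunded, new_white_rate

# 175 bin: ~85% of AA apps funded; next (200) bin funds ~56% of White apps
picked_up, new_rate = swap_effect(0.85, 0.56)
print(picked_up, round(new_rate, 3))  # 3 extra AA awards; White rate 0.56 -> 0.557
```

Plugging in the next pair of bins works the same way; the hit to the White PI award rate stays well under a percentage point precisely because the AA application pool is so small.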
Another way to look at this issue is to take Berg's triage numbers from above. To move to a 40% triage rate for the African-American PI applications, we need to shift 20% (about 230 applications) into the discussed pile. This represents a whopping 0.4% of the White PI apps being shifted onto the triage pile to keep the number discussed the same.
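The triage numbers can be checked the same way. This quick calculation uses Berg's counts from the supplementary material quoted above (the exact shift comes out to about 231 applications; rounding to 230 changes nothing):

```python
# Checking the triage arithmetic against Berg's counts (Ginther supplement).
aa_apps, aa_triaged = 1149, 691
white_apps, white_triaged = 58124, 23437

print(round(aa_triaged / aa_apps, 3))        # ~0.601: AA triage rate
print(round(white_triaged / white_apps, 3))  # ~0.403: White triage rate

# Apps to move into the discussed pile to bring the AA triage rate down to 40%:
target = 0.40
shift = aa_triaged - round(target * aa_apps)
print(shift)                                 # ~231 applications

# As a fraction of White apps (if the total discussed slots stay constant):
print(round(shift / white_apps, 4))          # ~0.004, i.e. ~0.4%
```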
These are entirely trivial numbers in terms of the "hit" to the chances of White PIs and yet you could easily equalize the success rate or award probability for African-American PIs.
It is even more astounding that this could be done by picking up African-American PI applications that scored better than the White PI applications that would go unfunded to make up the difference.
Tell me how this is not a no-brainer for the NIH?
Jan 13 2014
As you know I am distinctly unimpressed with the NIH's response to the Ginther report which identified a disparity in the success rate of African-American PIs when submitting grant applications to the NIH.
The NIH response (i.e., where they have placed their hard money investment in change) has been to blame pipeline issues. The efforts are directed at getting more African-American trainees into the pipeline and, somehow, training them better. The subtext here is twofold.
First, it argues that the problem is that the existing African-American PIs submitting to the NIH just kinda suck. They are deserving of lower success rates! Clearly. Otherwise, the NIH would not be looking in the direction of getting new ones. Right? Right.
Second, it argues that there is no actual bias in the review of applications. Nothing to see here. No reason to ask about review bias or anything. No reason to ask whether the system needs to be revamped, right now, to lead to better outcome.
A journalist has been poking around a bit. The most interesting bits involve Collins' and Tabak's initial response to Ginther and the current feigned-helplessness tack that is being followed.
Regarding the possibility of bias in its own handling of grant applications, the NIH has taken some initial steps, including giving its top leaders bias-awareness training. But a project promised by the NIH's director, Francis S. Collins, to directly test for bias in the agency's grant-evaluation systems has stalled, with officials stymied by the legal and scientific challenges of crafting such an experiment.
"The design of the studies has proven to be difficult," said Richard K. Nakamura, director of the Center for Scientific Review, the NIH division that handles incoming grant applications.
Hmmm. "difficult", eh? Unlike making scientific advances, hey, that stuff is easy. This, however, just stumps us.
Dr. Collins, in his immediate response to the Ginther study, promised to conduct pilot experiments in which NIH grant-review panels were given identical applications, one using existing protocols and another in which any possible clue to the applicant's race—such as name or academic institution—had been removed.
"The well-described and insidious possibility of unconscious bias must be assessed," Dr. Collins and his deputy, Lawrence A. Tabak, wrote at the time.
Oh yes, I remember this editorial distinctly. It seemed very well-intentioned. Good optics. Did we forget that the head of the NIH is a political appointment with all that that entails? I didn't.
The NIH, however, is still working on the problem, Mr. Nakamura said. It hopes to soon begin taking applications from researchers willing to carry out such a study of possible biases in NIH grant approvals, and the NIH also recently gave Molly Carnes, a professor of medicine, psychiatry, and industrial and systems engineering at the University of Wisconsin at Madison, a grant to conduct her own investigation of the matter, Mr. Nakamura said.
The legal challenges include a requirement that applicants get a full airing of their submission, he said. The scientific challenges include figuring out ways to get an unvarnished assessment from a review panel whose members traditionally expect to know anyone qualified in the field, he said.
What a freaking joke. Applicants have to get a full airing and will have to opt-in, eh? Funny, I don't recall ever being asked to opt-in to any of the non-traditional review mechanisms that the CSR uses. These include phone-only reviews, video-conference reviews and online chat-room reviews. Heck, they don't even so much as disclose that this is what happened to your application! So the idea that it is a "legal" hurdle that is solved by applicants volunteering for their little test is clearly bogus.
Second, the notion that a pilot study would prevent "full airing" is nonsense. I see very few alternatives other than taking the same pool of applications and putting them through regular review as the control condition and then trying to do a bias-decreasing review as the experimental condition. The NIH is perfectly free to use the normal, control review as the official review. See? No difference in the "full airing".
I totally agree it will be scientifically difficult to try to set up PI blind review but hey, since we already have so many geniuses calling for blinded review anyway...this is well worth the effort.
But "blind" review is not the only way to go here. How's about simply mixing up the review panels a bit? Bring in a panel that is heavy in precisely those individuals who have struggled with lower success rates- based on PI characteristics, University characteristics, training characteristics, etc. See if that changes anything. Take a "normal" panel and provide them with extensive instruction on the Ginther data. Etc. Use your imagination people, this is not hard.
Disappointingly, the CHE piece contains not one single bit of investigation into the real question of interest. Why is this any different from any other area of perceived disparity between interests and study section outcome at the NIH? From topic domain to PI characteristics (sex and relative age) to University characteristics (like aggregate NIH funding, geography, Congressional district, University type/rank, etc) the NIH is fully willing to use Program prerogative to redress the imbalance. They do so by funding grants out of order and, sometimes, by setting up funding mechanisms that limit who can compete for the grants.
In the recent case of young/recently transitioned investigators they have trumpeted the disparity loudly, hamfistedly and brazenly "corrected" the study section disparity with special paylines and out of order pickups that amount to an affirmative action quota system [PDF].
All with exceptionally poor descriptions of exactly why they need to do so, save "we're eating our seed corn" and similar platitudes. All without any attempt to address the root problem of why study sections return poorer scores for early stage investigators. All without proving bias, describing the nature of the bias and without clearly demonstrating the feared outcome of any such bias.
"Eating our seed corn" is a nice catch phrase but it is essentially meaningless. Especially when there are always more freshly trained PhD scientists eager and ready to step up. Why would we care if a generation is "lost" to science? The existing greybeards can always be replaced by whatever fresh faces are immediately available, after all. And there was very little crying about the "lost" GenerationX scientists, remember. Actually, none, outside of GenerationX itself.
The point being, the NIH did not wait for overwhelming proof of nefarious bias. They just acted very directly to put a quota system in place. Although, as we've seen in recent data, this has slipped a bit in the past two Fiscal Years, the point remains.
Why, you might ask yourself, are they not doing the same in response to Ginther?
Dec 31 2013
In case my comment never makes it out of moderation at RockTalk....
Interesting to contrast your Big Data and BRAINI approaches with your one for diversity. Try switching those around: "establish a forum... blah, blah... in partnership... blah, blah... to engage" in Big Data. Can't you hear the outraged howling about what a joke of an effort that would be? It is embarrassing that the NIH has chosen to kick the can down the road and hide behind fake-helplessness when it comes to enhancing diversity. In the case of BRAINI, Big Data and yes, discrimination against a particular class of PI applicants (the young) the NIH fixes things with hard money: awards for research projects. Why does it draw back when it comes to fixing the inequality of grant awards identified in Ginther?
When you face up to the reasons why you are in full cry and issuing real, R01 NGA solutions for the dismal plight of ESIs and doing nothing similar for underrepresented PIs then you will understand why the Ginther report found what it did.
ESIs continue, at least six years on, to benefit from payline breaks and pickups. You trumpet this behavior as a wonderful thing. Why are you not doing the same to redress the discrimination against underrepresented PIs? How is it different?
The Ginther bombshell dropped in August of 2011. There has been plenty of time to put in real, effective fixes. The numbers are such that the NIH would have had to fund mere handfuls of new grants to ensure success rate parity. And they could still do all the can-kicking, ineffectual hand waving stuff as well.
And what about you, o transitioning scientists complaining about an "unfair" NIH system stacked against the young? Is your complaint really about fairness? Or is it really about your own personal success?
If it is a principled stand, you should be name dropping Ginther as often as you do the fabled "42 years before first R01" stat.
Jan 28 2013
As noted recently by Bashir, the NIH response to the Ginther report contrasts with their response to certain other issues of grant disparity:
I want to contrast this with NIH actions regarding other issues. In that same blog post I linked there is also discussion of the ongoing early career investigator issues. Here is a selection of some of the actions directed towards that problem.
NIH plans to increase the funding of awards that encourage independence like the K99/R00 and early independence awards, and increase the initial postdoctoral researcher stipend.
In the past NIH has also taken actions in modifying how grants are awarded. The whole Early Stage Investigator designation is part of that. Grant pickups, etc.
I don't want to get all Kanye ("NIH doesn't care about black researchers"), but priorities, be they individual or institutional, really come through not in talk but in actions. Now, I don't have any special knowledge about the source or solution to the racial disparity. But the NIH response here seems more along the lines of adequate than overwhelming.
In writing another post, I ran across this 2002 bit in Science. This part stands out:
It's not because the peer-review system is biased against younger people, Tilghman argues. When her NRC panel looked into this, she says, “we could find no data at all [supporting the idea] that young people are being discriminated against.”
Although I might take issue with what data they chose to examine and the difficulty of proving "discrimination" in a subjective process like grant review, the point at hand is larger. The NIH had a panel which could find no evidence of discrimination and they nevertheless went straight to work picking up New Investigator grants out of the order of review to guarantee an equal outcome!
Interesting, this is.
Jan 16 2013
are offered by Bashir.
Even if these recommendations were enacted tomorrow, and worked exactly as hoped, the gains would be slow and marginal. #1 seem to more address the problem of under representation.
Go play over there.
May 07 2018
The Director of the NIH and the Deputy Director in charge of the office of extramural research have posted a blog post about The Issue that Keeps Us Awake at Night. It is the plight of the young investigator, going from what they have written.
The Working Group is also wrestling with the issue that keeps us awake at night – considering how to make well-informed strategic investment decisions to nurture and further diversify the biomedical research workforce in an environment filled with high-stakes opportunity costs. If we are going to support more promising early career investigators, and if we are going to nurture meritorious, productive mid-career investigators by stabilizing their funding streams, monies will have to come from somewhere. That will likely mean some belt-tightening in other quarters, which is rarely welcomed by those whose belts are being taken in by a notch or two.
They plan to address this by relying on data and reports that are currently being generated. I suspect this will not be enough to address their goal.
I recently posted a link to the NIH summary of their history of trying to address the smooth transition of newly minted PIs into NIH-grant funded laboratories, without much comment. Most of my Readers are probably aware by now that handwringing from the NIH about the fate of new investigators has been an occasional feature since at least the Johnson Administration. The historical website details the most well known attempts to fix the problem. From the R23 to the R29 FIRST to the New Investigator check box, to the "sudden realization"* they needed to invent a true Noob New Investigator (ESI) category, to the latest designation of the aforementioned ESIs as Early Established Investigators for continued breaks and affirmative action. It should be obvious from the ongoing reinvention of the wheel that the NIH periodically recognizes that the most recent fix isn't working (and may have unintended detrimental consequences).
One of the reasons these attempts never truly work and have to be adjusted or scrapped and replaced by the next fun new attempt was identified by Zerhouni (a prior NIH Director) in about 2007. This was right after the "sudden realization" and the invention of the ESI. Zerhouni was quoted in a Science news bit as saying that study sections were responding to the ESI special payline boost by handing out ever worsening scores to the ESI applications.
Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni.
Now, I would argue that viewing this trend of worsening scores as "punishing" is at best only partially correct. We can broaden this to incorporate a simple appreciation that study sections adapt their biases, preferences and evolved cultural ideas about grant review to the extant rules. One way to view worsening ESI scores may have to do with the pronounced tendency reviewers have to think in terms of fund it / don't fund it, despite the fact that SROs regularly exhort them not to do this. When I was on study section regularly, the scores tended to pile up around the perceived payline. I've seen the data for one section across multiple rounds. Reviewers were pretty sensitive to the scuttlebutt about what sort of score was going to be a fundable one. So it would be no surprise whatsoever to me if there was a bias driven by this tendency, once it was announced that ESI applications would get a special (higher) payline for funding.
This tendency might also be driven in part by a "Get in line, youngun, don't get too big for your britches" phenomenon. I've written about this tendency a time or two. I came up as a postdoc towards the end of the R29 / FIRST award era and got a very explicit understanding that some established PIs thought that newbies had to get the R29 award as their first award. Presumably there was a worsening bias against giving out an R01 to a newly minted assistant professor as their first award**, because hey, the R29 was literally the FIRST award, amirite?
Then we come to hazing, which is the even nastier relative of "Don't get too big for your britches". Oh, nobody will admit that it is hazing, but there is definitely a subcurrent of this in the review behavior of some people who think that noob PIs have to prove their worth by battling the system. If they sustain the effort to keep coming back with improved versions, then hey, join the club kiddo! (Here's an ice pack for the bruising.) If the PI can't sustain the effort to submit a bunch of revisions and new attempts, hey, she doesn't really have what it takes, right? Ugh.
Scientific gate-keeping. This tends to cover a multitude of sins of various severity but there are definitely reviewers that want newcomers to their field to prove that they belong. Is this person really an alcohol researcher? Or is she just going to take our*** money and run away to do whatever basic science amazeballs sounded super innovative to the panel?
Career gate-keeping. We've gone many rounds on this one within the science blog- and twittospheres. Who "deserves" a grant? Well, reviewers have opinions and biases and despite their best intentions and wounded protestations...these attitudes affect review. In no particular order we can run down the favorite targets of the "Do it to Julia, not me, JULIA!" sentiment. Soft money job categories. High overhead Universities. Well funded labs. Translational research taking all the money away from good honest basic researchers***. Elite coastal Universities. Big Universities. R1s. The post-normative-retirement crowd. Riff-raff plodders.
Layered over the top of this is favoritism. It interacts with all of the above, of course. If some category of PI is to be discriminated against, there is very likely someone getting the benefit. The category of which people approve. Our club. Our kind. People who we like who must be allowed to keep their funding first, before we let some newbie get any sniff of a grant.
This, btw, is a place where the focus must land squarely on Program Officers as well. The POs have all the same biases mentioned above, of course. And their versions of the biases have meaningful impact. But when it comes to the thought that "we must save our long term investigators," they have a very special role to play in this debacle. If they are not on board with the ESI worries that keep Collins and Lauer awake at night, well, they are ideally situated to sabotage the effort. Consciously or not.
So, Director Collins and Deputy Director Lauer, you have to fix study section and you have to fix Program if you expect to have any sort of lasting change.
I have only a few suggestions and none of this is a silver bullet.
I remain convinced that the only tried and true method to minimize the effects of biases (covert and overt) is the competition of opposing biases. I've remarked frequently that study sections would be improved and fairer if less-experienced investigators had more power. I think the purge of Assistant Professors effected by the last head of the CSR (Scarpa) was a mistake. I note that CSR is charged with balancing study sections on geography, sex, ethnicity, university type and even scientific subdomains...while explicitly discriminating against younger investigators. Is it any wonder if there is a problem getting the newcomers funded?
I suggest you also pay attention to fairness. I know you won't, because administrators invariably respond to a situation of perceived past injustice with "ok, that was the past and we can't do anything about it, moving forward please!". But this is going to limit your ability to shift the needle. People may not agree on what represents fair treatment but they sure as heck are motivated by fairness. Their perception of whether a new initiative is fair or unfair will tend to shape their behavior when reviewing. This can get in the way of NIH's new agenda if reviewers perceive themselves as being mistreated by it.
Many of the above mentioned reviewer quirks are hardened by acculturation. PIs who are asked to serve on study section have been through the study section wringer as newbies. They are susceptible to the idea that it is fair if the next generation has it just about as hard as they did and that it is unfair if newbies these days are given a cake walk. Particularly, if said established investigators feel like they are still struggling. Ahem. It may not seem logical but it is simple psychology. I anticipate that the "Early Established Investigator" category is going to suffer the same fate as the ESI category. Scores will worsen, compared to pre-EEI days. Some of this will be the previously mentioned tracking of scores to the perceived payline. But some of this will be people**** who missed the ESI assistance who feel that it is unfair that the generation behind them gets yet another handout to go along with the K99/R00 and ESI plums. The intent to stabilize the careers of established investigators is a good one. But limiting this to "early" established investigators, i.e., those who already enjoyed the ESI era, is a serious mistake.
I think Lauer is either aware, or verging on awareness, of something that I've mentioned repeatedly on this blog. I.e., that a lot of the pressure on the grant system- increasing numbers of applications, PIs seemingly applying greedily for grants when already well funded, the revision queuing traffic pattern hold- comes from a vicious cycle of the attempt to maintain stable funding. When, as a VeryEstablished colleague put it to me surprisingly recently, "I just put in a grant when I need another one and it gets funded" is the expected value, PIs can be efficient with their grant behavior. If they need to put in eight proposals to have a decent chance of one landing, they do that. And if they need to start submitting apps 2 years before they "need" one, the randomness is going to mean they seem overfunded now and again. This applies to everyone all across the NIH system. Thinking that it is only those on their second round of funding that have this stability problem is a huge mistake for Lauer and Collins to be making. And if you stabilize some at the expense of others, this will not be viewed as fair. It will not be viewed as shared pain.
If you can't get more people on board with a mission of shared sacrifice, or unshared sacrifice for that matter, then I believe NIH will continue to wring its hands about the fate of new investigators for another forty years. There are too many applicants for too few funds. It amps up the desperation and amps up the biases for and against. It decreases the resistance of peer reviewers to do anything to Julia that they expect might give a tiny boost to the applications of them and theirs. You cannot say "do better" and expect reviewers to change, when the power of the grant game contingencies is so overwhelming for most of us. You cannot expect program officers who still to this day appear entirely clueless about the way things really work in extramural grant-funded careers to suddenly do better because you are losing sleep. You need to delve into these psychologies and biases and cultures and actually address them.
I'll leave you with an exhortation to walk the earth, like Caine. I've had the opportunity to watch some administrative frustration, inability and nervousness verging on panic in the past couple of years that has brought me to a realization. Management needs to talk to the humblest of their workforce instead of the upper crust. In the case of the NIH, you need to stop convening preening symposia from the usual suspects, taking the calls of your GlamHound buddies and responding only to reps of learn-ed societies. Walk the earth. Talk to real applicants. Get CSR to identify some of your most frustrated applicants and see what is making them fail. Find out which of the apparently well-funded applicants have to work their tails off to maintain funding. Compare and contrast to prior eras. Ask everyone what it would take to Fix the NIH.
Of course this will make things harder for you in the short term. Everyone perceives the RealProblem as that guy, over there. And the solutions that will FixTheNIH are whatever makes their own situation easier.
But I think you need to hear this. You need to hear the desperation and the desire most of us have simply to do our jobs. You need to hear just how deeply broken the NIH award system is for everyone, not just the ESI and EEI category.
PS. How's it going solving the problem identified by Ginther? We haven't seen any data lately but at last check everything was as bad as ever so...
PPS. Are you just not approving comments on your blog? Or is this a third rail issue nobody wants to comment on?
*I make fun of the "sudden realization" because it took me about 2 hours of my very first study section meeting ever to realize that "New Investigator" checkbox applications from genuine newbies did very poorly and all of these were being scooped up by very well established and accomplished investigators who simply hadn't been NIH funded. Perhaps they were from foreign institutions, now hired in the US. Or perhaps they lived on NSF or CDC or DOD awards. The idea that it took the NIH something like 8-10 years to realize this is difficult to stomach.
**The R29 was crippled in terms of budget, btw. and had other interesting features.
****Yep, that would be my demographic.
Apr 30 2018
Someone on the twitts posted an objection:
All applicants for faculty positions at UCSD now required to submit a Contribution to Diversity Statement (aka Ideological Conformity Statements/Pledge of Allegiance to Left-Liberal Orthodoxy Statements) @chsommers @asheschowhttps://t.co/km67bbj4zn
— Mark J. Perry (@Mark_J_Perry) April 29, 2018
to UCSD's policy of requiring applicants for faculty positions to supply a Statement of Contribution to Diversity with their application.
Mark J Perry linked to his own blog piece posted at the American Enterprise Institute* with the following observation:
All applicants for faculty positions at UCSD now required to submit a Contribution to Diversity Statement (aka Ideological Conformity Statements/Pledge of Allegiance to Left-Liberal Orthodoxy Statements)
Then some other twitter person chimed in with opinion on how this policy was unfair because it was so difficult for him to help his
~~postdocs~~ students with it.
It’s a deeply terrible idea to require this. If a student of mine was applying for faculty job there wouldn’t have the faintest idea how to advise them on this. Burden shouldn’t be on applicants https://t.co/fX89QMY3Mw
— Colin Camerer (@CFCamerer) April 29, 2018
Huh? A simple google search lands us on UCSD's page on this topic.
The Contributions to Diversity Statement should describe your past efforts, as well as future plans to advance diversity, equity and inclusion. It should demonstrate an understanding of the barriers facing women and underrepresented minorities and of UC San Diego’s mission to meet the educational needs of our diverse student population.
The page has links to a full set of guidelines [PDF] as well as specific examples in Biology, Engineering and Physical Sciences (hmm, I wonder if these are the disciplines they find need the most help?). I took a look at the guidelines and examples. It's pretty smooth sailing. Sorry, but any PI who complains that they cannot help their postdocs figure out how to write the required statement is being disingenuous. What they really mean is that they disagree with having to prepare such a statement at all.
Like this guy Bieniasz, points for honesty:
The UCSD statement instructions (Part A) read like a test of opinions/ideology. Not appropriate for a faculty application
— Paul Bieniasz (@PaulBieniasz) April 30, 2018
I am particularly perplexed by this assertion that "The UCSD statement instructions (Part A) read like a test of opinions/ideology. Not appropriate for a faculty application".
Ok, so is it a test of opinion/ideology? Let's go to the guidelines provided by UCSD.
Describe your understanding of the barriers that exist for historically under-represented groups in higher education and/or your field. This may be evidenced by personal experience and educational background. For purposes of evaluating contributions to diversity, under-represented groups (URGs) includes under-represented ethnic or racial minorities (URM), women, LGBTQ, first-generation college, people with disabilities, and people from underprivileged backgrounds.
Pretty simple. Are you able to understand facts that have been well established in academia? This only asks you to describe your understanding. That's it. If you are not aware of any of these barriers *cough*Ginther*cough*cough*, you are deficient as a candidate for a position as a University professor.
So the first part of this is merely asking whether the candidate is aware of things about academia that are incredibly well documented. Facts. These are sort of important for Professors, and any University is well within its rights to probe factual knowledge. This part does not ask anything about causes or solutions.
Now the other parts do ask you about your past activities and future plans to contribute to diversity and equity. Significantly, it starts with this friendly acceptance: "Some faculty candidates may not have substantial past activities. In such cases, we recommend focusing on future plans in your statement." See? It isn't a rule-out type of thing; it allows candidates to recognize their deficits right now and to make a statement about what they might do in the future.
Let's stop right there. This is not different in any way from the other major components of a professorial hire application package. For most of my audience, the "evidence of teaching experience and philosophy" is probably the most understandable example. Many postdocs with excellent science chops have pretty minimal teaching experience. Is it somehow unfair to ask them about their experience and philosophy? To give credit to those with experience and to ask those without to have at least thought about what they might do as a future professor?
Is it "liberal orthodoxy" if a person who insists that teaching is a waste of time and gets in the way of their real purpose (research) gets pushed downward on the priority list for the job?
What about service? Is it rude to ask a candidate for evidence of service to their Institutions and academic societies?
Is it unfair to prioritize candidates with a more complete record of accomplishment than those without? Of course it is fair.
What about scientific discipline, subfield, research orientations and theoretical underpinnings? Totally okay to ask candidates about these things.
Are those somehow "loyalty pledges"? or a requirement to "conform to orthodoxy"?
If they are, then we've been doing that in the academy a fair bit with nary a peep from these right wing think tank types.
*"Mark J. Perry is concurrently a scholar at AEI and a professor of economics and finance at the University of Michigan's Flint campus." This is a political opinion-making "think-tank" so take that into consideration.
Feb 07 2018
Currently 20% of researchers perform 75-90% of reviews, which is an unreasonable and unsustainable burden.
The stat is referencing manuscript / journal peer review and not the NIH grant review system but I started thinking about NIH grant review anyway. Part of this is because I recently had to re-explain one of my key beliefs about a major limitation of the NIH grant review system to someone who should know better.
NIH Grant review is an inherently conservative process.
The reason is that the vast majority of reviews of the merit of grant applications are provided by individuals who have already been chosen to serve as Principal Investigators of one or more NIH grant awards. They have had grant proposals selected as meritorious by the prior bunch of reviewers and are now contributing strongly to the decision about the next set of proposals that will be funded.
The system is biased to select for grant applications written in a way that looks promising to people who have either been selected for writing grants in the same old way or who have been beaten into writing grants that look the same old way.
Like tends to beget like in this system. What is seen as meritorious today is likely to be very similar to what has been viewed as meritorious in the past.
This is further amplified by the social dynamics of a person who is newly asked to review grants. Most of us are very sensitive to being inexperienced, very sensitive to wanting to do a good job and feel almost entirely at sea about the process when first asked to review NIH grants. Even if we have managed to stack up 5 or 10 reviews of our proposals from that exact same study section prior to being asked to serve. This means that new reviewers are shaped even more by the culture, expectations and processes of the existing panel, which is staffed with many experienced reviewers.
So what about those experienced reviewers? And what about the number of grant applications that they review during their assigned term of service: four years (all 3 cycles per year, please) or six years (2 of the 3 cycles per year)? With about 6-10 applications to review per round, this could easily mean highly influential (read: one of the three primary assigned reviewers) review of 100 applications. The person has additional general influence in the panel as well, both through direct input on grants under discussion and on the general tenor and tone of the panel.
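The back-of-the-envelope arithmetic behind that 100-application figure is easy to sketch. The term lengths and cycles come from the post; taking 8 applications per round as a midpoint of the stated 6-10 range is my assumption:

```python
# Back-of-the-envelope load for an empaneled NIH study section reviewer.
# Term structures from the post: a 4-year term covering all 3 cycles/year,
# or a 6-year term covering 2 of the 3 cycles/year. The 8-apps-per-round
# figure is an assumed midpoint of the post's 6-10 range.

def term_load(years, cycles_per_year, apps_per_round):
    """Total applications handled as an assigned reviewer over a term."""
    return years * cycles_per_year * apps_per_round

four_year_term = term_load(years=4, cycles_per_year=3, apps_per_round=8)
six_year_term = term_load(years=6, cycles_per_year=2, apps_per_round=8)

print(four_year_term, six_year_term)  # both land at 96, i.e. ~100 applications
```

Either term structure puts one reviewer's thumbprint on roughly a hundred applications even at the low end of the per-round range.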
When I was placed on a study section panel for a term of service, I thought the SRO told us that empaneled reviewers were not supposed to be asked by the rest of the SRO pool for extra review duties on SEPs or as ad hocs on other panels. My colleagues over the years have disabused me of the idea that this was anything more than aspirational talk from this SRO. So many empaneled reviewers are also contributing to review beyond their home review panel.
My question of the day is whether this is a good idea and whether there are ethical implications for those of us who are asked* to review NIH grants.
We all think we are great evaluators of science proposals, of course. We know best. So of course it is all right, fair and good when we choose to accept a request to review. We are virtuously helping out the system!
At what point are we contributing unduly to the inherent conservativeness of the system? We all have biases. Some are about irrelevant characteristics like the ethnicity** of the PI. Some are considered more acceptable and are about our preferences for certain areas of research, models, approaches, styles, etc. Regardless, these biases are influencing our review. Our review. And one of the best ways to counter bias is the clash of competing biases. I.e., let someone else's bias into the mix for a change, eh buddy?
I don't have a real position on this yet. After my term of empaneled service, I accepted or rejected requests to review based on my willingness to do the work and my interest in a topic or mechanism (read: SEPs FTW). I've mostly kept it pretty minimal. However, I recently messed up because I had a cascade of requests last fall that sucked me in: a "normal" panel (ok, ok, I haven't done my duty in a while), followed by a topic SEP (ok, ok, I am one of a limited pool of experts, I'll do it) and then a RequestThatYouDon'tRefuse. So I've been doing more grant review lately than I have usually done in recent years. And I'm thinking about scope of influence on the grants that get funded.
At some point is it even ethical to keep reviewing so damn much***? Should anyone agree to serve successive 4 or 6 year terms as an empaneled reviewer? Should one say yes to every SRO request that comes along? They are going to keep asking so it is up to us to say no. And maybe to recommend the SRO ask some other person who is not on their radar?
*There are factors which bias the SRO pool toward picking the same old reviewers, btw. There's a sort of expectation that if you have review experience you might be okay at it. I don't know how much SROs talk to each other about prospective reviewers and their experience with the same, but there must be some chit chat. "Hey, try Dr. Schmoo, she's a great reviewer" versus "Oh, no, do not ever ask Dr. Schnortwax, he's toxic". There are also the diversity rules that they have to follow: there must be diversity with respect to the geographic distribution, gender, race and ethnicity of the membership. So people who help the SROs' diversity stats might be picked more often than straight white males from the most densely packed research areas in the country working on the most common research topics using the most usual models and approaches.
***No idea what this threshold should be, btw. But I think there is one.