Simple truth of the recent Ebola hysteria and the ensuing media coverage of scientists working on hemorrhagic viruses: approximately 85% of bioscience is now wishing ill on a whole lot of people so as to draw attention to their scientific domain.
Archive for the 'NIH' category
Does "appropriately ambitious" on an ESI grant mean "ambitious enough" (go big) or "easy there, tiger" (don't overreach)?
— Michael Hendricks (@MHendr1cks) August 22, 2014
It is always good to remember that sometimes comments in the written critique are not directed at the applicant.
Technically, of course, these comments are directed at Program Staff in an advisory capacity. They are not meant to help the applicant in any way whatsoever; assistance in revising is a side effect.
Still, a comment that opposes a Stock Criticism is particularly likely to be there for the consumption of either Program or the other reviewers.
It is meant to preempt the Stock Criticism when the person making the comment likes the grant.
One key to determining the right study section to request is to look on RePORTER for funded grants reviewed in your study sections of interest.
Sometimes this is much more informative than the boilerplate description of the study section listed at CSR.
A news bit in Nature overviews Richard Nakamura's plans to investigate the disparity in grant funding that was identified in the Ginther report in 2011. See here, here, here for blog comment on Ginther et al. Nakamura is, of course, the current director (pdf) of the Center for Scientific Review, the entity at NIH that conducts the peer review of most grant applications that are submitted. It is promising-ish. Nakamura's plans are summarized:
One basic issue that the NIH will address is whether grant reviewers are thinking about an applicant’s race at all, even unconsciously. A team will strip names, racial identification and other identifying information from some proposals before reviewers see them, and look at what happens to grant scores.
Hope they check on the degree to which the blinding works, of course. As you know, Dear Reader, I am always concerned that blinding of academic work cannot simply be assumed to have functioned, i.e., to have actually prevented the reviewer from identifying the author or lab group.
The NIH will also study reviewers’ work in finer detail, by analysing successful applications for R01 grants, the NIH’s largest funding programme for individual investigators. The goal is to see whether researchers can spot trends in the language used by reviewers to describe proposals put forward by applicants of different races. There is precedent for detectable differences: in a paper to be published in Academic Medicine, a team led by Molly Carnes, a physician at the University of Wisconsin-Madison, used automated text analysis to show that reviewers’ critiques of R01 grant applications by women tended to include more words denoting praise, as though the writer is surprised at the quality of the work.
Very intriguing contribution to the analysis. Nice.
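The Carnes et al. method isn't spelled out in the excerpt, but the basic idea of automated text analysis for praise language can be illustrated with a toy word-frequency comparison. Everything below (the praise lexicon and the sample critiques) is invented for illustration and is not taken from the actual study:

```python
import re

# Invented praise lexicon for illustration; the actual Carnes et al.
# word lists are not given in this post.
PRAISE_WORDS = {"excellent", "outstanding", "impressive", "remarkable", "exceptional"}

def praise_rate(critique):
    """Fraction of words in a critique drawn from the praise lexicon."""
    words = re.findall(r"[a-z']+", critique.lower())
    return sum(w in PRAISE_WORDS for w in words) / len(words) if words else 0.0

# Toy critiques, purely illustrative
critique_a = "The preliminary data are excellent and the approach is impressive."
critique_b = "The preliminary data are adequate and the approach is feasible."

print(praise_rate(critique_a))  # 0.2
print(praise_rate(critique_b))  # 0.0
```

The real analysis would aggregate rates like these across many critiques, grouped by applicant demographics, and test whether the distributions differ.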
The NIH will also analyse text in samples of reviewers’ unedited critiques. The Center for Scientific Review typically edits the wording and grammar of these reviews before grant proposals are returned to applicants, but even the subtlest details of such raw comments might hold clues about bias. Nakamura says that reviewers will not be told whether their comments will be analysed, because that in itself would bias the sample. “We want them to be sloppy,” he says.
Hmmm. I guess this is just human factors checking on the automated analysis. Together they are stronger.
The NIH’s Study Sections, in which review groups discuss the top 50% of grant applications, might also harbour bias: the 2011 Science paper found that submissions authored by African Americans are less likely to be discussed in the meetings. But when they are, a negative comment arising from even one person’s unconscious bias could have a major impact in such a group setting, says John Dovidio, a psychologist at Yale University in New Haven, Connecticut, and a member of the NIH’s Diversity Working Group. “That one person can poison the environment,” he says.
This is not presented in a context that suggests the NIH plans to investigate this directly. Not sure how this could be done without putting a severe finger on the balance. I mean sure, most meetings have call-in lines open to Program staff and Nakamura could just record and transcribe meetings but.....reviewers won't like it and if you warn them you scare off the fishies. So to speak.
Nakamura expects that the NIH’s effort to identify and root out prejudice, which he says could cost up to $5 million over three years, might prove controversial. “People resent the implication they might be biased,” he says — an idea borne out by some responses to his 29 May blogpost on the initiative. One commenter wrote, “It is absolutely insulting to be accused of review bigotry. Please tell me why I should continue to give up my time to perform peer review?”
But Nakamura believes that the NIH — and reviewers — need to keep open minds.
He, and media covering this, need to focus on the opportunity to communicate that institutional racism does not hinge on whether individual actors are overtly biased. The piece leads with the comment that Nakamura got his butt over to the Implicit Association website and identified his own biases to himself. This is the sort of introspection that needs to happen. Heck, I'd love to see a trial where study section reviewers were told to go over there and complete a few tests prior to receiving their grant assignments*.
The Nature editorial is, for the most part, on the side of goodness and light on this.
The idea that scientists who volunteer time and energy to review NIH grants could be biased against qualified minority researchers is a tough pill to swallow. The NIH is to be commended for not sweeping this possibility under the rug: it has turned to the scientific method to investigate the suggestion.
It is a topic that the NIH will need to broach delicately. Few academics consciously hold any such inclinations, and fewer still would deliberately allow them to affect their grant evaluations. Some are likely to bristle at what might be seen as an accusation of racism, and the NIH plans to conduct at least some of its studies of grant reviews without the reviewers’ knowledge or consent.
But better for the NIH to offend a few people than to make snap judgements and institute blunt policies to address the problem. Fixes such as increasing scholarships and training for minority groups would no doubt be a good thing, but they could be an unhelpful use of money if they do not address the root cause of the disparity.
yes, yes, excellent.....
And policies such as grant-allocation quotas could come at the expense of other researchers.
No. Bad Nature.
Right back to victim blaming. Right back to ignoring what it means to have a BIAS identified. Right back to ignoring what the nature of privilege means.
Those "other researchers" at present enjoy a disparate benefit at the expense of African-American PIs. That's what Ginther means. Period. The onus shifted, upon identification of the disparity, to proving that non-African-American PIs actually deserve their awards.
Ginther, btw, went a long way toward rejecting the more obvious alternative explanations, i.e., the reasons the disparity might not in fact reflect an unfair bias. Read it, including the supplementary materials, before you start commenting with stupid. Also, review this.
But there is also this. The low numbers of African-American scientists submitting applications to the NIH for funding mean that any possible hit to the success rate of non-African-American PIs would be well nigh undetectable. A minuscule effect size relative to all other sources of variance in the funding process.
Another way to look at this issue is to take Berg's triage numbers from above. To move to a 40% triage rate for the African-American PI applications, we need to shift 20 percentage points' worth (230 applications) into the discussed pile. This represents a whopping 0.4% of the White PI apps being shifted onto the triage pile to keep the number discussed the same.
These are entirely trivial numbers in terms of the "hit" to the chances of White PIs and yet you could easily equalize the success rate or award probability for African-American PIs.
It is even more astounding that this could be done by picking up African-American PI applications that scored better than the White PI applications that would go unfunded to make up the difference.
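The arithmetic in the preceding paragraphs is easy to check. Note that the total White-PI application count below is an assumption back-derived from the stated 0.4% figure; it is not a number taken from Berg's post:

```python
# Numbers as quoted above. The total White-PI application count is an
# assumption implied by 230 / 0.004, not a figure from Berg.
aa_apps_shifted = 230      # 20 percentage points of African-American PI apps
white_apps_total = 57_500  # assumed total White PI applications

# Keeping the total number of discussed applications constant means the same
# 230 applications move from the White-PI discussed pile into triage.
white_fraction_shifted = aa_apps_shifted / white_apps_total
print(f"{white_fraction_shifted:.1%}")  # 0.4%
```

The point survives any reasonable choice for the denominator: a few hundred applications shifted out of tens of thousands is noise relative to everything else in the funding process.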
And of course "grant-allocation quotas" are precisely what the special paylines and other assists for ESI investigators consist of. Affirmative Action for the young and untried.
Did we get this sort of handwringing, this call for a long-duration "study" of the "true causes" of the disparity?
The NIH just started picking up ESI grants to balance the odds of funding, even when study sections responded to news of this affirmative action by further punishing ESI scores!
So yeah, my call is for the NIH to balance the funding rates first, and then do all their fancy studies to root out the "real cause" later.
Also for editorial teams like the one at Nature to stop repeating this whinging about those who already enjoy disparate privilege who might lose (some of) it.
*yeah, it might backfire. that would itself be interesting.
A question to the blog raised the perennial concern that comes up every time I preach on about submitting a lot of proposals: how does one have enough ideas for that? My usual answer is a somewhat perplexed inability to understand how other scientists do not have more ideas in a given six-month interval than they could possibly complete in the next 20 years.
I reflected slightly more than usual today and thought of something.
There is one tendency of new grant writers that can be addressed here.
My experience is that early grant writers have a tendency to write a 10 year program of research into their initial R01s. It is perfectly understandable and I've done it myself. Probably still fall into this now and again. A Stock Critique of "too ambitious" is one clue that you may need to think about whether you are writing a 10 year research program rather than a 5 year, limited dollar figure, research project that fits within a broader programmatic plan.
One of the key developments as a grant writer, IMNSHO, is to figure out how to write a streamlined, minimalist grant that really is focused on a coherent project.
When you are doing this properly, it leaves an amazing amount of additional room to write additional, highly-focused proposals on, roughly speaking, the same topic.
Important questions from Paul Knoepfler:
In today’s transactional dominated world, scientists are spending an increasing proportion of their time basically fundraising. Writing grants. Honing grantsmanship. Doing experiments specifically for grant preliminary data rather than driven by transformative ideas. Working the philanthropy side of things.
By contrast, transformative activities would include these kinds of things: reading, thinking, teaching, mentoring, model building, listening to others, doing risky pilot experiments, etc.
So are you as transformative a scientist as you think or has transactional science become a dominant vein in your daily professional life? How is this playing out more generally in science?
Can you have the best of both worlds to be transformative and transactional?
I think the answer to the last question is that sure, one can be both transformative and transactional...and even still have fun in the lab. It is possible.
Is it better to spend less time raising funds? Better to spend less time working for preliminary data and more time working to get the paper closed out?
Nobody is in this merely to raise support for their lab.
But to ask this question is to be in denial.
The question has a bit of the upraised nose sniff to it. A bit of a slap at those who are in a situation in which raising laboratory funding looms large right now. It is a pat on the back for those who happen to be flush with cash and can go back to thinking about fun science for a little while.
My problem is that rewarding the people who don't have to work very hard for their support with still more easy support just hardens the silo around a lucky few and makes it even harder for the rest of those poor chumps.
We (and here I mean Francis Collins and his comments on HHMI-like support for a select few) continue to think that success is the province of the brilliant deserving few. This gets in the way of recognizing that it is the outcome of giving any of a number of deserving someones the chance to succeed. It therefore has the potential to give us even less bang for our funding buck, since a select few are unlikely, over the long haul, to be as creative as a crowd would be.
I ran across a curious finding in a very Glamourous publication. Being that it was in a CNS journal, the behavior sucked. The data failed to back up the central claim about that behavior*. Which was kind of central to the actual scientific advance of the entire work.
So I contemplated an initial, very limited check on the behavior. A replication of the converging sort.
It's going to cost me about $15K to do it.
If it turns out negative, then where am I? Where am I going to publish a one figure tut-tut negative that flies in the face of a result published in CNS?
If it turns out positive, this is almost worse. It's a "yeah we already knew that from this CNS paper, dumbass" rejection waiting to happen.
Either way, if I expect to be able to publish in even a dump journal, I'm going to need to throw some more money at the topic. I'd say at least $50K.
Spent from grants that are not really related to this topic in any direct way.
If the NIH is serious about the alleged replication problem then it needs to be serious about the costs and risks involved.
*a typical problem with CNS pubs that involve behavioral studies.
A specific issue that has recently created interesting conversations in the blogosphere is whether female K99/R00 awardees were less likely to receive a subsequent R01 award compared to male K99/R00 awardees. We at NIH have also found this particular outcome among K99/R00 PIs and have noted that those differences again stem from differential rates of application. Of the 2007 cohort of K99 PIs, 86 percent of the men had applied for R01s by 2013, but only 69 percent of the women had applied.
She's referring here to a post over at DataHound ("K99-R00 Evaluation: A Striking Gender Disparity") which observed:
Of the 201 men with R00 awards, 114 (57%) have gone on to receive at least 1 R01 award to date. In contrast, of the 127 women with R00 awards, only 53 (42%) have received an R01 award. This difference is jarring and is statistically significant (P value=0.009).
To investigate this further, I looked at the two cohorts separately. For the FY2007 cohort, 70 of the 108 men (65%) with R00 awards have received R01 grants whereas only 31 of the 62 women (50%) have (P value = 0.07). For the FY2008 cohort, 44 of the 93 men (47%) with R00 awards have received R01s whereas only 22 of the 65 women (34%) have (P value = 0.10). The lack of statistical significance is due to the smaller sample sizes for the cohorts separately rather than any difference in the trends for the separate cohorts, which are quite similar.
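DataHound presumably used an exact test; the quoted p-values can be sanity-checked with a pooled two-proportion z-test. This is a normal approximation, so the results will differ slightly from an exact test in the later decimals:

```python
import math

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p-value for a pooled two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))

# Combined cohorts: 114/201 men vs 53/127 women with at least one R01
print(round(two_prop_p(114, 201, 53, 127), 3))  # ~0.008, near the reported 0.009

# Separate cohorts: same direction, larger p-values from the smaller samples
print(two_prop_p(70, 108, 31, 62))  # FY2007 (reported ~0.07)
print(two_prop_p(44, 93, 22, 65))   # FY2008 (reported ~0.10)
```

The cohort-by-cohort numbers bear out DataHound's reading: the effect size is similar in both cohorts, and only the smaller per-cohort samples push the p-values above the conventional threshold.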
And Rockey isn't even giving us the data on the vigor with which an R00 holder is seeking R01 funding. That may or may not make the explanation even stronger.
Seems to me that any mid or senior level investigators who have new R00-holding female assistant professors in their department might want to make a special effort to encourage them to submit R01 apps early and often.
Work at it.
What fraction of the stuff proposed in funded grants actually gets done after feasibility and field movement come to play?