Archive for the 'NIH' category

Startup Funds That Expire On Grant Award

Jul 11 2018 Published by under Academics, Ask DrugMonkey, NIH Careerism

From the email bag:

My question is: Should institutions pull back start-up funds from new PIs if R01s or equivalents are obtained before funds are burned? Should there be an expiration date for these funds?

Should? Well no, in the best of all possible worlds of course we would wish PIs to retain all possible sources of support to launch their program.

I can, however, see the institutional rationale that startup is for just that, starting. And once in the system by getting a grant award, the thinking goes, a PI should be self-sustaining. Like a primed pump.

And those funds would be better spent on starting up the next lab's pump.

The expiration date version is related, and I assume it is viewed as an inducement for the PI to go big or go home. To try. Hard. Instead of eking it out forever to support a lab that is technically in operation, but not operating vigorously enough to land additional extramural funding.

Practically speaking the message from this is to always check the details for a startup package. And if it expires on grant award, or after three years, this makes it important to convert as much of that startup into useful Preliminary Data as possible. Let it prime many pumps.

Thoughts, folks? This person was wondering if this is common. How do your departments handle startup funds?

10 responses so far

Trophy collaborations

Jul 05 2018 Published by under Conduct of Science, NIH, NIH Careerism

Jason Rasgon noted a phenomenon where one is asked to collaborate on a grant proposal but is jettisoned after funding of the award.

I'm sure there are cases where both parties amicably terminate the collaboration but the interesting case is where the PI or PD sheds another investigator without their assent.

Is this common? I can't remember hearing many cases of this. It has happened to me in a fairly minor way once but then again I have not done a whole lot of subs on other people's grants.

17 responses so far

Your Grant in Review: Scientific Premise

Jul 03 2018 Published by under NIH Careerism, NIH funding

Scientific premise has become the latest headache of uncertainty in NIH grant crafting and review. You can tell because the NIH keeps having to issue clarifications about what it is, and is not. The latest is from Office of Extramural Research honcho Mike Lauer at his blog:

Clarifying what is meant by scientific premise
Scientific premise refers to the rigor of the prior research being cited as key support for the research question(s). For instance, a proposal might note prior studies had inadequate sample sizes. To help both applicants and reviewers describe and assess the rigor of the prior research cited as key support for the proposal, we plan to revise application instructions and review criteria to clarify the language.

Under Significance, the applicant will be asked to describe the strengths and weaknesses in the rigor of the prior research (both published and unpublished) that serves as the key support for the proposed project. Under Approach, the applicant will be asked to describe plans to address weaknesses in the rigor of the prior research that serves as the key support for the proposed project. These revisions are planned for research and mentored career development award applications that come in for the January 25, 2019 due date and beyond. Be on the lookout for guide notices.

My first thought was...great. Fan-friggin-tastic.

You are going to be asked to be more pointed about how the prior research all sucks. No more just gesturing at too few studies or variance between related findings, or offering the pablum that the topic "needs more research." Oh no. You are going to have to call papers out for inadequate sample size, poor design, bad interpretation, using the wrong parameters or reagents or, pertinent to a recent twitter discussion, running their behavioral studies in the inactive part of the rodent daily cycle.

Now I don't know about all of y'all, but the study sections that review my grants have a tendency to be populated with authors of papers that I cite. Or by their academic progeny or mentors. Or perhaps their tight science homies that they organize symposia and conferences with. Or at the very least their subfield collective peeps that all use the same flawed methods/approaches.

The SABV requirement has, quite frankly, been bad ENOUGH on this score. I really don't need this extra NIH requirement to be even more pointed about the limitations of prior literature that we propose to set about addressing with more studies.

2 responses so far

On PO/PI interactions to steer the grant to the PI's laboratory

Jun 18 2018 Published by under Alcohol, NIH, NIH funding

There has been a working group of the Advisory Committee to the Director (of NIH, aka Francis Collins) which has been examining the Moderate Alcohol and Cardiovascular Health Trial in the wake of a hullabaloo that broke into public earlier this year. Background on this from Jocelyn Kaiser at Science, from the NYT, and the WaPo. (I took up the sleazy tactics of the alleged profession of journalism on this issue here.)

The working group's report is available now [pdf].

Page 7 of that report:

There were sustained interactions (from at least 2013) between the eventual Principal Investigator (PI) of the MACH trial and three members of NIAAA leadership prior to, and during development of, FOAs for planning and main grants to fund the MACH trial

These interactions appear to have provided the eventual PI with a competitive advantage not available to other applicants, and effectively steered funding to this investigator

Page 11:

NIH Institutes, Centers, and Offices (ICOs) should ensure that program staff do not inappropriately provide non-public information, or engage in deliberations that either give the appearance of, or provide, an advantage to any single, or subset of, investigator(s)

The NIH should examine additional measures to assiduously avoid providing, or giving the appearance of providing, an advantage to any single, or subset of, investigator(s) (for example, in guiding the scientific substance of preparing grant applications or responding to reviewer comments)

The webcast of the meeting of the ACD on Day 2 covers the relevant territory but is not yet available in archived format. I was hoping to find the part where Collins apparently expressed himself on this topic, as described here.

In the wake of the decision, Collins said NIH officials would examine other industry-NIH ties to make sure proper procedures have been followed, and seek out even “subtle examples of cozy relationships” that might undermine research integrity.

When I saw all of this I could only wonder if Francis Collins is familiar with the RFA process at the NIH.

If you read RFAs and take the trouble to see what gets funded out of them you come to the firm belief that there are a LOT of "sustained interactions" between the PO(s) that are pushing the RFA and the PI that is highly desired to be the lucky awardee. The text of the RFAs in and of themselves often "giv(e) the appearance of providing, an advantage to any single, or subset of, investigator(s)". And they sure as heck provide certain PIs with "a competitive advantage not available to other applicants".

This is the way RFAs work. I am convinced. It is going to take a huge mountain of evidence to the contrary to counter this impression, which can be reinforced by looking at some of the RFAs in your closest fields of interest and seeing who gets funded and for what. If Collins cares to include the failed grant applications from those PIs that led up to the RFA being generated (in some cases), I bet he finds that this also supports the impression.

I really wonder sometimes.

I wonder if NIH officialdom is really this clueless about how their system works?

...or do they just have zero compunction about dissembling when they know full well that these cozy little interactions between PO and favored PI working to define Funding Opportunity Announcements are fairly common?

__
Disclaimer: As always, Dear Reader, I have related experiences. I've competed unsuccessfully on more than one occasion for a targeted FOA where the award went to the very obvious suspect lab. I've also competed successfully for funding on a topic for which I originally sought funding under those targeted FOAs- that takes the sting out. A little. I also suspect I have at least once received grant funding that could fairly be said to be the result of "sustained interactions" between me and Program staff that provided me "a competitive advantage" although I don't know the extent to which this was not available to other PIs.

10 responses so far

NIH Ginther Fail: Do the ersatz reviews recapitulate the original reviews?

A bit in Science authored by Jocelyn Kaiser recently covered the preprint posted by Forscher and colleagues which describes a study of bias in NIH grant review. I was struck by a response Kaiser obtained from one of the authors on the question of range restriction.

Some have also questioned Devine’s decision to use only funded proposals, saying it fails to explore whether reviewers might show bias when judging lower quality proposals. But she and Forscher point out that half of the 48 proposals were initial submissions that were relatively weak in quality and only received funding after revisions, including four that were of too low quality to be scored.

They really don't seem to understand NIH grant review where about half of all proposals are "too low quality to be scored". Their inclusion of only 8% ND applications simply doesn't cut it. Thinking about this, however, motivated me to go back to the preprint, follow some links to associated data and download the excel file with the original grant scores listed.

I do still think they are missing a key point about restriction of range. It isn't, much as they would like to think, only about the score. The score on a given round is a value with considerable error, as the group itself described in a prior publication in which the same grant reviewed in different ersatz study sections ended up with a different score. If there is a central tendency for true grant score, which we might approach with dozens of reviews of the same application, then sometimes any given score is going to be too good, and sometimes too bad, as an estimate of that central tendency. Which means that on a second review, the scores for the former are going to tend to get worse and the scores for the latter are going to tend to get better. The authors only selected the ones that tended to get better for inclusion (i.e., the ones that reached funding on revision).
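To make that selection argument concrete, here is a minimal simulation sketch. None of the numbers (merit distribution, review noise, payline) are real NIH parameters; they are illustrative assumptions, and lower scores are better, as on a priority-score scale.

```python
# Minimal sketch of the selection / regression-to-the-mean point above.
# All parameters are illustrative assumptions, not real NIH values.
# Scores are on a priority-score-like scale where lower = better.
import random

random.seed(1)
PAYLINE = 30.0  # hypothetical funding cutoff

def review(true_score):
    """One noisy review of an application with a fixed underlying merit."""
    return true_score + random.gauss(0, 8)

applications = [random.gauss(40, 10) for _ in range(20000)]  # true merit per application

funded_on_revision, never_funded = [], []
for true_score in applications:
    if review(true_score) <= PAYLINE:
        continue  # funded on first submission; not part of the comparison
    # resubmission, ignoring any genuine improvement from revision
    (funded_on_revision if review(true_score) <= PAYLINE else never_funded).append(true_score)

avg = lambda xs: sum(xs) / len(xs)
print(f"mean true score, funded on revision: {avg(funded_on_revision):.1f}")
print(f"mean true score, never funded:       {avg(never_funded):.1f}")
# Both groups looked identical (unfunded) after the first review, but the group
# that reaches funding on revision has systematically better underlying merit.
```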

Another way of getting at this is to imagine two grants which get the same score in a given review round. One is kinda meh, with mostly reasonable approaches and methods from a pretty good PI with a decent reputation. The other grant is really exciting, but with some ill-considered methodological flaws and a missing bit of preliminary data. Each one comes back in revision with the former merely shined up a bit and the latter with awesome new preliminary data and the methods fixed. The meh one goes backward (enraging the PI who "did everything the panel requested") and the exciting one is now in the fundable range.

The authors have made the mistake of thinking that grants that are discussed, but get the same score well outside the range of funding, are the same in terms of true quality. I would argue that the fact that the "low quality" ones they used were revisable into the fundable range makes them different from the similar scoring applications that did not eventually win funding.

In thinking about this, I came to realize another key bit of positive control data that the authors could provide to enhance our confidence in their study. I scanned through the preprint again and was unable to find any mention of them comparing the original scores of the proposals with the values that came out of their study. Was there a tight correlation? Was it equivalently tight across all of their PI name manipulations? To what extent did the new scores confirm the original funded, low quality and ND outcomes?

This would be key to at least partially counter my points about the range of applications that were included in this study. If the test reviewer subjects found the best original scored grants to be top quality, and the worst to be the worst, independent of PI name then this might help to reassure us that the true quality range within the discussed half was reasonably represented. If, however, the test subjects often reviewed the original top grants lower and the lower grants higher, this would reinforce my contention that the range of the central tendencies for the quality of the grant applications was narrow.
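Here is a sketch of the sort of check being requested, assuming the data were available as one record per reviewed application with its PI-name condition, its original NIH score, and the score it received in the experiment. The record layout and values below are hypothetical placeholders.

```python
# Hypothetical sketch of the requested positive control: within each PI-name
# condition, how well do the experiment's scores track the original NIH scores?
# The records below are placeholders, not the Forscher et al data.
from statistics import correlation  # Python 3.10+

records = [
    # (application_id, pi_name_condition, original_score, experiment_score)
    ("A01", "White male",   25, 31),
    ("A02", "White male",   32, 30),
    ("A03", "White male",   45, 47),
    ("B01", "Black female", 40, 38),
    ("B02", "Black female", 28, 35),
    ("B03", "Black female", 51, 50),
    # ... one row per application x condition in the real dataset
]

by_condition = {}
for _, condition, orig, new in records:
    by_condition.setdefault(condition, ([], []))
    by_condition[condition][0].append(orig)
    by_condition[condition][1].append(new)

for condition, (orig, new) in by_condition.items():
    r = correlation(orig, new)
    print(f"{condition}: r(original, experiment) = {r:.2f} over {len(orig)} applications")
```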

So how about it, Forscher et al? How about showing us the scores from your experiment for each application by PI designation along with the original scores?
__
Patrick Forscher, William Cox, Markus Brauer and Patricia Devine. No race or gender bias in a randomized experiment of NIH R01 grant reviews. PsyArXiv, created May 25, 2018; last edited May 25, 2018.

3 responses so far

NIH Ginther Fail: This is not anything like real grant review

May 31 2018 Published by under Fixing the NIH, NIH, Underrepresented Groups

I recently discussed some of the problems with a new pre-print by Forscher and colleagues describing a study which purports to evaluate bias in the peer review of NIH grants.

One thing that I figured out today is that the team that is funded under the grant which supported the Forscher et al study also produced a prior paper that I already discussed. That prior discussion focused on the use of only funded grants to evaluate peer review behavior, and the corresponding problems of a restricted range. The conclusion of this prior paper was that reviewers didn't agree with each other in the evaluation of the same grant. This, in retrospect, also seems to be a design that was intended to fail. In that instance it was designed to fail to find correspondence between reviewers, just as the Forscher study seems constructed to fail to find evidence of bias.

I am working up a real distaste for the "Transformative" research project (R01 GM111002; 9/2013-6/2018) funded to PIs M. Carnes and P. Devine that is titled EXPLORING THE SCIENCE OF SCIENTIFIC REVIEW. This project is funded to the tune of $465,804 in direct costs in the final year and reached as high as $614,398 direct in year 3. We can, I think, fairly demand a high standard for the resulting science. I do not think this team is meeting a high standard.

One of the papers (Pier et al 2017) produced by this project discusses the role of study section discussion in revising/calibrating initial scoring.

Results suggest that although reviewers within a single panel agree more following collaborative discussion, different panels agree less after discussion, and Score Calibration Talk plays a pivotal role in scoring variability during peer review.

So they know. They know that scores change through discussion and they know that a given set of applications can go in somewhat different directions based on who is reviewing. They know that scores can change depending on what other ersatz panel members are included and perhaps depending on how the total number of grants are distributed to reviewers in those panels. The study described in the Forscher pre-print did not convene panels:

Reviewers were told we would schedule a conference call to discuss the proposals with other reviewers. No conference call would actually occur; we informed the prospective reviewers of this call to better match the actual NIH review process.

Brauer is an overlapping co-author. The senior author on the Forscher study is Co-PI, along with the senior author of the Pier et al. papers, on the grant that funds this work. The Pier et al 2017 Res Eval paper shows that they know full well that study section discussion is necessary to "better match the actual NIH review process". Their paper shows that study section discussion does so in part by getting better agreement on the merits of a particular proposal across the individuals doing the reviewing (within a given panel). By extension, not including any study section type discussion is guaranteed to result in a more variable assessment. To throw noise into the data. Which has a tendency to make it more likely that a study will arrive at a null result, as the Forscher et al study did.
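A toy model of that last point, with entirely made-up numbers and no pretense of being the Pier et al methodology: treat "discussion" as reviewers shrinking their scores toward their panel's mean. Skipping it, as the mail-in design does, leaves the rawer, noisier individual scores.

```python
# Toy model of why skipping panel discussion adds noise to each data point.
# All parameters are invented for illustration; this is not the Pier/Forscher method.
import random
from statistics import mean, pstdev

random.seed(2)
TRUE_SCORE = 30.0   # underlying merit of one application
PANEL_SIZE = 3      # reviewers assigned to it
SHRINK = 0.7        # how strongly score calibration talk pulls toward the panel mean

def panel_scores(discuss):
    raw = [TRUE_SCORE + random.gauss(0, 10) for _ in range(PANEL_SIZE)]
    if not discuss:
        return raw
    m = mean(raw)
    return [s + SHRINK * (m - s) for s in raw]  # post-discussion calibration

no_talk   = [s for _ in range(5000) for s in panel_scores(discuss=False)]
with_talk = [s for _ in range(5000) for s in panel_scores(discuss=True)]

print(f"spread of individual scores without discussion: {pstdev(no_talk):.1f}")
print(f"spread of individual scores with discussion:    {pstdev(with_talk):.1f}")
# More spread per application means more noise per observation, which pushes a
# between-condition comparison (such as PI name) toward a null result.
```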

These investigators also know that the grant load for NIH reviewers is not typically three applications, as was used in the study described in the Forscher pre-print. From Pier et al 2017 again:

We further learned that although a reviewer may be assigned 9–10 applications for a standing study section, ad hoc panels or SEPs can receive assignments as low as 5–6 applications; thus, the SRO assigned each reviewer to evaluate six applications based on their scientific expertise, as we believed a reviewer load on the low end of what is typical would increase the likelihood of study participation.

I believe that the reviewer load is critically important if you are trying to mimic the way scores are decided by the NIH review process. The reason is that while several NIH documents and reviewer guides pay lip service to the idea that the review of each grant proposal is objective, the simple truth is that review is comparative.

Grant applications are scored on a 1-9 scale with descriptors ranging from Exceptional (1) to Very Good (4) to Poor (9). On an objective basis, I and many other experienced NIH grant reviewers argue, the distribution of NIH grant applications (all of them) is not flat. There is a very large peak around the Excellent to Very Good (i.e., 3-4) range, in my humble estimation. And if you are familiar with review you will know that there is a pronounced tendency of reviewers, unchecked, to stack their reviews around this range. They do it within reviewer and they do it as a panel. This is why the SRO (and Chair, occasionally) spends so much time before the meeting exhorting the panel members to spread their scores. To flatten the objective distribution of merit into a more linear set of scores. To, in essence, let a competitive ranking procedure sneak into this supposedly objective and non-comparative process.

Many experienced reviewers understand why this is being asked of them, endorse it as necessary (at the least) and can do a fair job of score spreading*.

The fewer grants a reviewer has on the immediate assignment pile, the less distance there needs to be across this pile. If you have only three grants and score them 2, 3 and 4, well hey, scores spread. If, however, you have a pile of 6 grants and score them 2, 3, 3, 3, 4, 4 (which is very likely the objective distribution) then you are quite obviously not spreading your scores enough. So what to do? Well, for some reason actual NIH grant reviewers are really loath to throw down a 1. So 2 is the top mark. Gotta spread the rest. Ok, how about 2, 3, 3...er 4 I mean. Then 4, 4...shit. 4, 5 and oh 6 seems really mean so another 5. Ok. 2, 3, 4, 4, 5, 5. phew. Scores spread, particularly around the key window that is going to make the SRO go ballistic.

Wait, what's that? Why are reviewers working so hard around the 2-4 zone and care less about 5+? Well, surprise surprise that is the place** where it gets serious between probably fund, maybe fund and no way, no how fund. And reviewers are pretty sensitive to that**, even if they do not know precisely what score will mean funded / not funded for any specific application.

That little spreading exercise was for a six grant load. Now imagine throwing three more applications into this mix for the more typical reviewer load.
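For the mechanics of it, here is a sketch of that spreading exercise: re-score a clumped pile by rank inside the window the reviewer is willing to use. The window and the pile come from the example above; none of this is an official NIH procedure.

```python
# Sketch of the score-spreading exercise walked through above: take a pile whose
# honest scores clump around 3-4 and nudge them apart by rank, staying inside
# the band the reviewer is willing to use. Illustrative only.
def spread(scores, lo=2, hi=5):
    """Re-score a pile by rank, distributing across [lo, hi]; ties get split arbitrarily."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    out = [None] * len(scores)
    for rank, i in enumerate(order):
        out[i] = lo + round(rank * (hi - lo) / (len(scores) - 1))
    return out

print(spread([2, 3, 3, 3, 4, 4]))  # -> [2, 3, 3, 4, 4, 5], close to the 2, 3, 4, 4, 5, 5 above
```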

For today, it is not important to discuss how a reviewer decides one grant comes before the other or that perhaps two grants really do deserve the same score. The point is that grants are assessed against each other. In the individual reviewer's stack and to some extent across the entire study section. And it matters how many applications the reviewer has to review. This affects that reviewer's pre-discussion calibration of scores.

Read phase, after the initial scores are nominated and before the study section meets, is another place where re-calibration of scores happens. (I'm not sure if they included that part in the Pier et al studies, it isn't explicitly mentioned so presumably not?)

If the Forscher study only gave reviewers three grants to review, and did not do the usual exhortation to spread scores, this is a serious flaw. Another serious, and I would say fatal, flaw in the design. The tendency of real reviewers is to score more compactly. This is, presumably, enhanced by the selection of grants that were funded (either in the version used or in revision), which we might think would at least cut off the tail of really bad proposals. The ranges will be from 2-4*** instead of 2-5 or 6. Of course this will obscure differences between grants, making it much much more likely that no effect of sex or ethnicity (the subject of the Forscher et al study) of the PI would emerge.

__
Elizabeth L. Pier, Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, Mitchell J. Nathan, Cecilia E. Ford and Molly Carnes. Low agreement among reviewers evaluating the same NIH grant applications. PNAS, 2018; published ahead of print March 5, 2018. https://doi.org/10.1073/pnas.1714379115

Elizabeth L. Pier, Joshua Raclaw, Anna Kaatz, Markus Brauer, Molly Carnes, Mitchell J. Nathan and Cecilia E. Ford. ‘Your comments are meaner than your score’: score calibration talk influences intra- and inter-panel variability during scientific grant peer review. Res Eval. 2017 Jan; 26(1): 1-14. Published online 2017 Feb 14. doi: 10.1093/reseval/rvw025

Patrick Forscher, William Cox, Markus Brauer and Patricia Devine. No race or gender bias in a randomized experiment of NIH R01 grant reviews. PsyArXiv, created May 25, 2018; last edited May 25, 2018. https://psyarxiv.com/r2xvb/

*I have related before that when YHN was empaneled on a study section he practiced a radical version of score spreading. Initial scores for his pile were tagged to the extreme ends of the permissible scores (this was under the old system) and even intervals within that range were used to place the grants in his pile.

**as are SROs. I cannot imagine a SRO ever getting on your case to spread scores for a pile that comes in at 2, 3, 4, 5, 7, 7, 7, 7, 7.

***Study sections vary a lot in their precise calibration of where the hot zone is and how far apart scores are spread. This is why the more important funding criterion is the percentile, which attempts to adjust for such study section differences. This is the long way of saying I'm not encouraging comments niggling over these specific examples. The point should stand regardless of your pet study section's calibration points.

10 responses so far

NIH Ginther Fail: A transformative research project

May 29 2018 Published by under Fixing the NIH, NIH, Underrepresented Groups

In August of 2011 the Ginther et al. paper published in Science let us know that African-American PIs were disadvantaged in the competition for NIH awards. There was an overall success rate disparity, as well as a related finding that the African-American PIs who did get funded had to revise their proposals more frequently to get there.

Both of these have significant consequences for what science gets done and how careers unfold.

I have been very unhappy with the NIH response to this finding.

I have recently become aware of a "Transformative" research project (R01 GM111002; 9/2013-6/2018) funded to PIs M. Carnes and P. Devine that is titled EXPLORING THE SCIENCE OF SCIENTIFIC REVIEW. From the description/abstract:

Unexplained disparities in R01 funding outcomes by race and gender have raised concern about bias in NIH peer review. This Transformative R01 will examine if and how implicit (i.e., unintentional) bias might occur in R01 peer review... Specific Aim #2. Determine whether investigator race, gender, or institution causally influences the review of identical proposals. We will conduct a randomized, controlled study in which we manipulate characteristics of a grant principal investigator (PI) to assess their influence on grant review outcomes...The potential impact is threefold; this research will 1) discover whether certain forms of cognitive bias are or are not consequential in R01 peer review... the results of our research could set the stage for transformation in peer review throughout NIH.

It could not be any clearer that this project is a direct NIH response to the Ginther result. So it is fully and completely appropriate to view any resulting studies in this context. (Just to get this out of the way.)

I became aware of this study through a Twitter mention of a pre-print that has been posted on PsyArXiv. The version I have read is:

No race or gender bias in a randomized experiment of NIH R01 grant reviews. Patrick Forscher, William Cox, Markus Brauer and Patricia Devine. Created on: May 25, 2018 | Last edited: May 25, 2018

The senior author is one of the Multi-PIs on the aforementioned funded research project and the pre-print makes this even clearer with a funding statement:

Funding: This research was supported by 5R01GM111002-02, awarded to the last author.

So while yes, the NIH does not dictate the conduct of research under awards that it makes, this effort can be fairly considered part of the NIH response to Ginther. As you can see from comparing the abstract of the funded grant to the pre-print study there is every reason to assume the nature of the study as conducted was actually spelled out in some detail in the grant proposal. Which the NIH selected for funding, apparently with some extra consideration*.

There are many, many, many things wrong with the study as depicted in the pre-print. It is going to take me more than one blog post to get through them all. So consider none of these to be complete. I may also repeat myself on certain aspects.

First up today is the part of the experimental design that was intended to create the impression in the minds of the reviewers that a given application had a PI of certain key characteristics, namely on the spectra of sex (male versus female) and ethnicity (African-American versus Irish-American). This, I will note, is a tried and true design feature for some very useful prior findings. Change the author names to initials and you can reduce apparent sex-based bias in the review of papers. Change the author names to African-American sounding ones and you can change the opinion of the quality of legal briefs. Change sex, apparent ethnicity of the name on job resumes and you can change the proportion called for further interviewing. Etc. You know the literature. I am not objecting to the approach, it is a good one, but I am objecting to its application to NIH grant review and the way they applied it.

The problem with application of this to NIH Grant review is that the Investigator(s) is such a key component of review. It is one of five allegedly co-equal review criteria and the grant proposals include a specific document (Biosketch) which is very detailed about a specific individual and their contributions to science. This differs tremendously from the job of evaluating a legal brief. It varies tremendously from reviewing a large stack of resumes submitted in response to a fairly generic job. It even differs from the job of reviewing a manuscript submitted for potential publication. NIH grant review specifically demands an assessment of the PI in question.

What this means is that it is really difficult to fake the PI and have success in your design. Success absolutely requires that the reviewers who are the subjects in the study both fail to detect the deception and genuinely develop a belief that the PI has the characteristics intended by the manipulation (i.e., man versus woman and black versus white). The authors recognized this, as we see from page 4 of the pre-print:

To avoid arousing suspicion as to the purpose of the study, no reviewer was asked to evaluate more than one proposal written by a non-White-male PI.

They understand that suspicion as to the purpose of the study is deadly to the outcome.

So how did they attempt to manipulate the reviewer's percept of the PI?

Selecting names that connote identities. We manipulated PI identity by assigning proposals names from which race and sex can be inferred 11,12. We chose the names by consulting tables compiled by Bertrand and Mullainathan 11. Bertrand and Mullainathan compiled the male and female first names that were most commonly associated with Black and White babies born in Massachusetts between 1974 and 1979. A person born in the 1970s would now be in their 40s, which we reasoned was a plausible age for a current Principal Investigator. Bertrand and Mullainathan also asked 30 people to categorize the names as “White”, “African American”, “Other”, or “Cannot tell”. We selected first names from their project that were both associated with and perceived as the race in question (i.e., >60 odds of being associated with the race in question; categorized as the race in question more than 90% of the time). We selected six White male first names (Matthew, Greg, Jay, Brett, Todd, Brad) and three first names for each of the White female (Anne, Laurie, Kristin), Black male (Darnell, Jamal, Tyrone), and Black female (Latoya, Tanisha, Latonya) categories. We also chose nine White last names (Walsh, Baker, Murray, Murphy, O’Brian, McCarthy, Kelly, Ryan, Sullivan) and three Black last names (Jackson, Robinson, Washington) from Bertrand and Mullainathan’s lists. Our grant proposals spanned 12 specific areas of science; each of the 12 scientific topic areas shared a common set of White male, White female, Black male, and Black female names. First names and last names were paired together pseudo-randomly, with the constraints that (1) any given combination of first and last names never occurred more than twice across the 12 scientific topic areas used for the study, and (2) the combination did not duplicate the name of a famous person (i.e., “Latoya Jackson” never appeared as a PI name).

So basically the equivalent of blackface. They selected some highly stereotypical "black" first names and some "white" surnames which are almost all Irish (hence my comment above about Irish-American ethnicity instead of Caucasian-American; this also needs some exploring).

Sorry, but for me this heightens the concern that reviewers would deduce what the authors were up to. Right? Each reviewer had only three grants (which is a problem for another post) and at least one of them practically screams in neon lights "THIS PI IS BLACK! DID WE MENTION BLACK? LIKE REALLY REALLY BLACK!". As we all know, nowhere near 33% of applications to the NIH come from African-American investigators. Any experienced reviewer would be at risk of noticing something is a bit off. The authors say nay.

A skeptic of our findings might put forward two criticisms: .. As for the second criticism, we put in place careful procedures to screen out reviewers who may have detected our manipulation, and our results were highly robust even to the most conservative of reviewer exclusion criteria.

As far as I can tell their "careful procedures" included only:

We eliminated from analysis 34 of these reviewers who either mentioned that they learned that one of the named personnel was fictitious or who mentioned that they looked up a paper from a PI biosketch, and who were therefore likely to learn that PI names were fictitious.

"who mentioned".

There was some debriefing which included:

reviewers completed a short survey including a yes-or-no question about whether they had used outside resources. If they reported “yes”, they were prompted to elaborate about what resources they used in a free response box. Contrary to their instructions, 139 reviewers mentioned that they used PubMed or read articles relevant to their assigned proposals. We eliminated the 34 reviewers who either mentioned that they learned of our deception or looked up a paper in the PI’s biosketch and therefore were very likely to learn of our deception. It is ambiguous whether the remaining 105 reviewers also learned of our deception.

and

34 participants turned in reviews without contacting us to say that they noticed the deception, and yet indicated in review submissions that some of the grant personnel were fictitious.

So despite instructions discouraging participants from using outside materials, significant numbers of them did so anyway. And reviewers turned in reviews without saying they were on to the deception when they clearly were. And the authors did not, apparently, debrief in a way that could definitively say whether all, most or few reviewers were on to their true purpose. Nor does there appear to be any mention of asking reviewers afterwards whether they knew about Ginther, specifically, or about disparate grant award outcomes in general terms. That would seem to be important.

Why? Because if you tell most normal decent people that they are to review applications to see if they are biased against black PIs they are going to fight as hard as they can to show that they are not a bigot. The Ginther finding was met with huge and consistent protestation on the part of experienced reviewers that it must be wrong because they themselves were not consciously biased against black PIs and they had never noticed any overt bias during their many rounds of study section. The authors clearly know this. And yet they did not show that the study participants were not on to them. While using those rather interesting names to generate the impression of ethnicity.

The authors make several comments throughout the pre-print about how this is a valid model of NIH grant review. They take a lot of pride in their design choices in many places. I was very struck by:

names that were most commonly associated with Black and White babies born in Massachusetts between 1974 and 1979. A person born in the 1970s would now be in their 40s, which we reasoned was a plausible age for a current Principal Investigator.

because my first thought when reading this design was "gee, most of the African-Americans that I know who have been NIH funded PIs are named things like Cynthia and Carl and Maury and Mike and Jean and.....dude something is wrong here.". Buuuut, maybe this is just me and I do know of one "Yasmin" and one "Chanda" so maybe this is a perceptual bias on my part. Okay, over to RePORTER to search out the first names. I'll check all time and for now ignore F- and K-mechs because Ginther focused on research awards, iirc. Darnell (4, none with the last names the authors used); LaTonya (1, ditto); LaToya (2, one with middle / maiden? name of Jones, we'll allow that and oh, she's non-contact MultiPI); Tyrone (6; man one of these had so many awards I just had to google and..well, not sure but....) and Tanisha (1, again, not a president surname).

This brings me to "Jamal". I'm sorry but in science when you see a Jamal you do not think of a black man. And sure enough RePORTER finds a number of PIs named Jamal but their surnames are things like Baig, Farooqui, Ibdah and Islam. Not US Presidents. Some debriefing here to ensure that reviewers presumed "Jamal" was black would seem to be critical but, in any case, it furthers the suspicion that these first names do not map onto typical NIH funded African-Americans. This brings us to the further observation that first names may convey not merely ethnicity but something about subcategories within this subpopulation of the US. It could be that these names cause percepts bound up in geography, age cohort, socioeconomic status and a host of other things. How are they controlling for that? The authors make no mention that I saw.

The authors take pains to brag on their clever deep thinking in using an age range that would correspond to PIs in their 40s (wait, actually 35-40, if the claim that this work was funded in the -02 year of the project is accurate, when the average age at first major NIH award is 42?) to select the names, and then they didn't even bother to see if these names appeared in the NIH database of funded awards?

The takeaway for today is that the study validity rests on the reviewers not knowing the true purpose. Yet the authors showed that reviewers did not follow their instructions for avoiding outside research, and that reviewers did not necessarily volunteer that they'd detected the name deception*** even when some of them clearly had. Combine this with the nature of how the study created the impression of PI ethnicity via these particular first names and I think this can be considered a fatal flaw in the study.
__

Donna K. Ginther, Walter T. Schaffer, Joshua Schnell, Beth Masimore, Faye Liu, Laurel L. Haak and Raynard Kington. Race, Ethnicity, and NIH Research Awards. Science 19 Aug 2011: Vol. 333, Issue 6045, pp. 1015-1019. DOI: 10.1126/science.1196783

*Notice the late September original funding date combined with the June 30 end date for subsequent years? This almost certainly means it was an end of year pickup** of something that did not score well enough for regular funding. I would love to see the summary statement.

**Given that this is a "Transformative" award, it is not impossible that they save these up for the end of the year to decide. So I could be off base here.

*** As a bit of a sidebar there was a twitter person who claimed to have been a reviewer in this study and found a Biosketch from a supposedly female PI referring to a sick wife. Maybe the authors intended this but it sure smells like sloppy construction of their materials. What other tells were left? And if they *did* intend to bring in LGBTQ assumptions...well this just seems like throwing random variables into the mix to add noise.

DISCLAIMER: As per usual I encourage you to read my posts on NIH grant matters with the recognition that I am an interested party. The nature of NIH grant review is of specific professional interest to me and to people who are personally and professionally close to me.

23 responses so far

The Purchasing Power of the NIH Grant Continues to Erode

It has been some time since I made a figure depicting the erosion of the purchasing power of the NIH grant so this post is simply an excuse to update the figure.

In brief, the NIH modular budget system used for a lot of R01 awards limits the request to $250,000 in direct costs per year. A PI can ask for more but they have to use a more detailed budgeting process, and there are a bunch of reasons I'm not going to go into here that make the "full-modular" a good starting point for discussion of the purchasing power of the typical NIH award.

The full modular limit was put in place at the inception of this system (i.e., for applications submitted after 6/1/1999) and has not been changed since. I've used FY2001 as my starting point for the $250,000 and then adjusted it in two ways according to the year-by-year BRDPI* inflation numbers. The red bars indicate the reduction in purchasing power of a static $250,000 direct cost amount. The black bars indicate the amount the full-modular limit would have to be escalated year over year to retain the same purchasing power that $250,000 conferred in 2001.
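A sketch of the two adjustments, for the record. The inflation rate below is a placeholder, not the actual BRDPI series; swap in the year-by-year values from the NIH Office of Budget spreadsheet referenced in the footnote.

```python
# Sketch of the two BRDPI adjustments described above. The 3% rate is a
# placeholder, NOT the real BRDPI series; substitute the year-by-year values
# from the NIH Office of Budget spreadsheet.
MODULAR_CAP = 250_000  # full-modular direct cost limit, unchanged since 1999

brdpi = {year: 0.03 for year in range(2002, 2019)}  # FY2002-FY2018, illustrative

real_value = MODULAR_CAP      # purchasing power of a static $250K (the red bars)
equivalent_cap = MODULAR_CAP  # cap needed to match FY2001 purchasing power (the black bars)

for year in sorted(brdpi):
    rate = brdpi[year]
    real_value /= (1 + rate)
    equivalent_cap *= (1 + rate)

print(f"FY2018: a static $250,000 buys what ${real_value:,.0f} bought in FY2001")
print(f"FY2018: matching FY2001 purchasing power would take a ${equivalent_cap:,.0f} cap")
```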


[Figure: BRDPI-adjusted purchasing power of the static $250,000 full-modular limit, FY2001-FY2018]

The executive summary is that the NIH would have to increase the modular limit to $400,000** per year in direct costs for FY2018 (not the $450,000 originally posted) in order for PIs to have the same purchasing power that came with a full-modular grant award in 2001.
__
*The BRDPI inflation numbers that I used can be downloaded from the NIH Office of Budget. The 2017 and 2018 numbers are projected.

**I blew it. The BRDPI spreadsheet actually projects inflation out to 2023 and I pulled the number from 2021 projection. The correct FY2018 equivalent is $413,020.

7 responses so far

Repost- Your Grant in Review: Competing Continuation, aka Renewal, Apps

May 11 2018 Published by under Fixing the NIH, NIH, NIH Careerism

Two recent posts discuss the topic of stabilizing NIH funding within a PI's career, triggered by a blog post from Mike Lauer and Francis Collins. In the latter, the two NIH honchos claim to be losing sleep over the uncertainty of funding in the NIH extramural granting system, specifically as it applies to those PIs who received funding as an ESI and are now trying to secure the next round of funding.

One key part of this, in my view, is how they (the NIH) and we (extramural researchers, particularly those reviewing applications for the NIH) think about the proper review of Renewal (formerly known as competing continuation) applications. I'm reposting some thoughts I had on this topic for your consideration.

This post originally appeared Jan 28, 2016.
___
In the NIH extramural grant funding world the maximum duration for a project is 5 years. It is possible at the end of a 5 year interval of support to apply to continue that project for another interval. The application for the next interval is competitively reviewed alongside of new project proposals in the relevant study sections, in general.

Comradde PhysioProffe addressed the continuation application at his Ftb joint. NIAID has a FAQ page.

The NIH Success Rate data shows that RPG success rates were 16.8% in 2013 and 18.1% in 2014. Comparable rates for competing continuation RPG applications were 35% in 2013 and 39% in 2014. So you can see why this is important.

I visited these themes before in a prior post. I think I covered most of the issues but in a slightly different way.

Today I want to try to get you folks to talk about prescriptives. How should a competing continuation / renewal NIH grant application be reviewed?

Now in my experience, the continuation application hinges on past-productivity in a way that a new application does not. Reviewers are explicitly considering the work that has been conducted under the support of the prior award. The application is supposed to include a list of publications that have resulted from the prior award. The application is supposed to detail a Progress Report that overviews what has been accomplished. So today I will be focusing on review mostly as it pertains to productivity. For reference, Berg's old post on the number of papers per grant dollar is here and shows an average output of 6 papers (IQR about 4-11) per $250K full modular award*.

Quoted bits are from my prior post.

Did you knock our socks off? This could be amazing ELEVENTY type findings, GlamourPub record (whether “expected” for your lab or not), unbelievably revolutionary advances, etc. If you have a record of this, nobody is going to think twice about what your Aims may have been. Probably won’t even give a hoot whether your work is a close match to the funding IC, for that matter.

We should probably separate these for discussion because after all, how often is a panel going to recognize a Nobel Prize type of publication has been supported by the award in the past 5 years? So maybe we should consider Glamour publications and amazing advances as two different scenarios. Are these going to push any renewal application over the hurdle for you even if the remaining items below are lacking? Does GlamMag substitute for direct attention to the experiments that were proposed or the Aims that guided the plan? In the extreme case, should we care if the work bears very little on the mission of the IC that has funded it?

Were you productive? Even if you didn’t WOW the world, if you’ve pumped out a respectable number of papers that have some discernible impact on a scientific field, you are in good shape. The more, the merrier. If you look “fabulously productive” and have contributed all kinds of interesting new science on the strength of your award(s), this is going to go down like gangbusters with the review panels. At this level of accomplishment you’d probably be safest at least doing stuff that is vaguely in line with the IC that has funded your work.

Assuming that Glam may not be in the control of most PIs but that pedestrian, workaday scientific output is, should this be a major credit for the continuation application? We don't necessarily have to turn this into a LPU sausage-slicing discussion. Let's assume a quality of paper commensurate with the kind of work that most PIs with competitive applications in that particular study section publish. Meets the subfield standard. How important should raw productivity be?

Were you productive in addressing your overall goals? This is an important distinction from the Specific Aims. It is not necessary, in my view, that you hew closely to Aims first dreamed up 7 years prior to the conclusion of the actual study. But if you have moderate, or disappointing, productivity it is probably next most-helpful that you have published work related to the overall theme of the project. What was the big idea? What was mentioned in the first three sentences of your Specific Aims page? If you have published work related to this broad picture, that’s good.

This one is tricky. The reviewers do not have the prior grant application in front of them. They have the prior Summary Statement and the Abstract as published on RePORTER. It is a decent bet the prior Aims can be determined but broader themes may or may not come across. So for the most part if the applicant expects the reviewers to see that productivity has aligned with overarching programmatic goals, she has to tell them what those were. Presumably in the Progress Report part of the continuation application. How would you approach this as a reviewer? If the project wasn't overwhelmingly productive, didn't obviously address all of the Aims but at least generated some solid work along the general themes. Are you going to be satisfied? Or are you going to downgrade the failure to address each Aim? What if the project had to can an entire Aim or two? Would it matter? Is getting "stuck" in a single Aim a death knell when it comes time to review the next interval of support? As a related question, what if the same exact Aim returns with the argument "We didn't get to this in the past five years but it is still a good idea"? Neutral? Negative? AYFK?

Did you address your original Specific Aims? ...this can be a big obsession of certain reviewers. Not saying it isn’t a good idea to have papers that you can connect clearly to your prior Aims. ... A grant is not a contract. It is quite natural in the course of actual science that you will change your approaches and priorities for experiments. Maybe you’ve been beaten to the punch. Maybe your ongoing studies tell you that your original predictions were bad and you need to go in a whole new direction. Maybe the field as a whole has moved on. ... You might want to squeeze a drop out of a dry well to meet the “addressed Aims” criterion but maybe that money, effort and time would be better spent on a new direction which will lead to three pubs instead of one?

My original formulation of this isn't quite right for today's discussion. The last part is actually more relevant to the preceding point. For today, expand this to a continuation application that shows that the prior work essentially covers exactly what the application proposed. With data either published or included as ready-to-submit Preliminary Data in the renewal. Maybe this was accomplished with only a few papers in pedestrian journals (Lord knows just about every one of my manuscript reviews these days gets at least one critique that calls for anywhere from 2 to 5 Specific Aims worth of data) so we're not talking about Glam or fabulous productivity. But should addressing all of the Aims and most if not all of the proposed experiments be enough? Is this a credit to a competing continuation application?

It will be unsurprising to you that by this point of my career, I've had competing continuation applications to which just about all of these scenarios apply, save Glam. We've had projects where we absolutely nailed everything we proposed to do. We've had projects get distracted/sidelined off onto a subsection of the proposal that nevertheless generated about the same number and quality of publications that would have otherwise resulted. We've had low productivity intervals of support that addressed all the Aims and ones that merely covered a subset of key themes. We've had projects with reasonably high productivity that have....wandered....from the specifics of the awarded proposal due to things that are happening in the subfield (including getting scooped). We've never been completely blanked on a project with zero related publications to my recollection, but we've had some very low productivity ones (albeit with excellent excuses).

I doubt we've ever had a perfect storm of sky-high productivity, all Aims addressed and the overarching themes satisfied. Certainly I have the review comments to suggest this**.

I have also been present during review panel discussions of continuation applications where reviewers have argued bitterly over the various productivity attributes of a prior interval of support. The "hugely productive" arguments are frequently over an application from a PI who has more than one award and tends to acknowledge more than one of them on each paper. This can also involve debates about so called "real scientific progress" versus papers published. This can be the Aims, the overall theme or just about the sneer of "they don't really do any interesting science".

I have for sure heard from people who are obsessed during review with whether each proposed experiment has been conducted (this was back in the days when summary statements could be fairly exhaustive and revealed what was in the prior application to a broader extent). More generally from reviewers who want to match publications up to the scope of the general scientific terrain described by the prior application.

I've also seen arguments about suggested controls or key additional experiments which were mentioned in the summary statement of the prior review, never addressed in the resulting publications and may still be a criticism of the renewal application.

Final question: Since the reviewers of the competing continuation see the prior summary statement, they see the score and percentile. Does this affect you as a reviewer? Should it? Especially if in your view this particular application should never have been funded at that score and is a likely a Programmatic pickup? Do you start steaming under the collar about special ESI paylines or bluehair/graybeard insider PO backslapping?

DISCLAIMER: As per usual, I may have competing continuation applications under current or near-future review by NIH study sections. I am an interested party in how they are reviewed.
__
*This probably speaks to my point about how multi-award PIs attribute more than one grant on each paper. My experience has not been that people in my field view 5 papers published per interval of support (and remember the renewal application is submitted with the final year of funded support yet to go, if the project is to continue uninterrupted) as expected value. It is certainly not viewed as the kind of fabulous productivity that of course would justify continuing the project. It is more in line with the bare minimum***. Berg's data are per-grant-dollar of course and are not exactly the same as per-grant. But it is a close estimate. This blog post estimates "between 0.6 and 5 published papers per $100k in funding." which is one to 12 per year of a full-modular NIH R01. Big range and that high number seems nigh on impossible to me without other funding (like free trainee labor or data parasitism).
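For the arithmetic behind that conversion, a rough sketch; whether the quoted per-$100k figure refers to direct or total costs is an assumption here.

```python
# Rough conversion of "0.6 to 5 papers per $100k" into papers per year of a
# full-modular R01, assuming the figure applies to direct costs.
full_modular_direct = 250_000            # direct costs per year
low, high = 0.6, 5.0                     # papers per $100k, per the quoted post
papers_per_year = lambda rate: rate * full_modular_direct / 100_000
print(f"{papers_per_year(low):.1f} to {papers_per_year(high):.1f} papers per year")  # 1.5 to 12.5
```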

**and also a pronounced lack of success renewing projects to go with it.

***I do not personally agree. At the point of submitting a competing continuation in year 4 a brand new research program (whether b/c noob PI or very new lab direction) may have really only been rocking for 2 years. And large integrated projects like a big human subjects effort may not even have enrolled all the subjects yet. Breeding, longitudinal development studies, etc - there are many models that can all take a long time to get to the point of publishing data. These considerations play....let us say variably, with reviewers. IME.

No responses yet

Stability of funding versus the project-based funding model of the NIH

May 09 2018 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

In response to a prior post, Morgan Price wonders about the apparent contrast of NIH's recent goal to stabilize research funding and the supposed "project-based" model.

I don't see how stability based funding is consistent with project-based funding and "funding the best science". It would be a radical change...?

NIH grants are supposed to be selected and awarded on the basis of the specific project that is proposed. That is why there is such extensive detailing of a very specific area of science, well specified Specific (not General!) Aims and a listing of specific experiments.

They are not awarded on the basis of a general program of research that seems to be promising for continued funding.

Note that there are indeed mechanisms of funding that operate on the program level to a much greater extent, HHMI being one of the more famous of these. In a program-based award, the emphasis is on what the investigating team (and generally this means specifically the PI) has accomplished and published in recent years. There may be some hints about what the person plans to work on next but generally the emphasis is on past performance, rather than the specific nature of the future plan.

In the recent handwringing from the NIH about the investigators they have launched with special consideration for their newcomer status (e.g., Early Stage Investigator applications can be funded at lower priority scores / percentile ranks than an established investigator would need), the concern is where the money will come from:

if we are going to nurture meritorious, productive mid-career investigators by stabilizing their funding streams, monies will have to come from somewhere.

"Stabilizing", Morgan Price assumes is the same thing as a radical change. It is not.

Here's the trick:

The NIH funding system has always been a hybrid which pays lip service to "project based funding" as a model while blithely using substantial, but variable, input from the "program based" logic. First off, the "Investigator" criterion of proposal review is one of 5 supposedly co-equal major criteria. The Biosketch (which details the past accomplishments and skills of the PI) is prominent in the application. This Biosketch lists both papers and prior research grant support*, which inevitably leads to some degree of assessment of how productive the PI was with her prior awards. This then is used to judge the merit of the proposal that is under current review - sounds just a bit like HHMI, doesn't it?

The competing continuation application (called a Renewal application now) is another NIH beast that reveals the hybrid nature of the selection system. You are allowed to ask for no more than 5 years of support for a given project, but you can then ask for successive five year extensions via competitive application review. This type of proposal has a "Progress Report" and a list of papers resulting from the project required within the application. This, quite obviously, focuses the review in large part on the past accomplishment. Now, sure, the application also has to have a detailed proposal for the next interval. Specific Aims. Experiments listed. But it also has all of the prior accomplishments pushed into the center of the review.

So what is the problem? Why are Collins and Lauer proposing to make the NIH grant selection even more based on the research program? Well, times have changed. The figure here is a bit dated by now but I like to keep refreshing your view of it because NIH has this nasty tendency to truncate their graphs to only the past decade or so. The NIH does this to obscure just how good investigators had things in the 80s. That was when established investigators enjoyed success rates north of 40%. For all applications, not just for competing renewals. Many of the people who started their careers in those wonderful days are still very much with us, by the way. This graph shows that within a few years of the end of the doubling, the success rates for established investigators had dropped to about where the new investigators were in the 1980s. Success rates have only continued to get worse but thanks to policies enacted by Zerhouni, the established and new investigator success rates have been almost identical since 2007.
Interestingly, one of the things Zerhouni had to do was to insist that Program change their exception pay behavior. (This graph was recreated from a GAO report [PDF], page down to Page 56, PDF page 60.) It is relevant because it points to yet another way that the NIH system used to prioritize program qualities over the project qualities. POs historically were much more interested in "saving" previously funded, now unfunded, labs than they were in saving not-yet-funded labs.

Now we get to Morgan Price's point about "the best science". Should the NIH system be purely project-based? Can we get the best science one 5 year plan at a time?

I say no. Five years is not enough time to spool up a project of any heft into a well honed and highly productive gig. Successful intervals of 5 year grants depend on what has come before to a very large extent. Often times, adding the next 5 years of funding via Renewal leads to an even more productive time because it leverages what has come before. Stepping back a little bit, gaps in funding can be deadly for a project. A project that has been killed off just as it is getting good is not only not the "best" science it is hindered science. A lack of stability across the NIH system has the effect of making all of its work even more expensive because something headed off in Lab 1 (due to gaps in funding) can only be started up in Lab 2 at a handicap. Sure Lab 2 can leverage published results of Lab 1 but not the unpublished stuff and not all of the various forms of expertise locked up in the Lab 1 staff's heads.

Of course if too much of the NIH allocation goes to sinecure program-based funding to continue long-running research programs, this leads to another kind of inefficiency. The inefficiencies of opportunity cost, stagnation, inflexibility and dead-woodery.

So there is a balance. Which no doubt fails to satisfy most everyone's preferences.

Collins and Lauer propose to do a bit of re-balancing of the program-based versus project-based relationship, particularly when it comes to younger investigators. This is not radical change. It might even be viewed in part as a selective restoration of past realities of grant funded science careers.

__
*In theory the PI's grants are listed on the Biosketch merely to show the PI is capable of leading a project something like the one under review. Correspondingly, it would in theory be okay to just list the most successful ones and leave out the grant awards with under-impressive outcomes. After all, do you have to put in every paper? no. Do you have to put every bit of bad data that you thought might be preliminary data into the app? no. So why do you have to** list all of your grants? This is the program-based aspects of the system at work.

**dude, you have to. this is one of those culture of review things. You will be looked up on RePORTER and woe be to you if you try to hide some project, successful or not, that has active funding within the past three years.

14 responses so far
