The NIH allows non-US Universities and other institutions to apply for NIH grants. I don't pay much attention to this issue so I don't know how many.
What I am curious about is whether the PIs who review have noticed anything about these applications. How are they received in study sections you have attended? Is there a high bar for the unique environment or capabilities?
I believe I was on study section during a transition: foreign applications went from being treated essentially like domestic apps to facing intense skepticism. This was around the mid-naughties, approximately a decade ago.
I'm curious what you folks are seeing.
A question on my prior post wanted to know if my assertions were official and written or not.
DM, how do you know that this is the case? I mean, I don't doubt that this is the case, but is it explicitly articulated somewhere?
This was in response to the following statements from me.
They are not charged with trying to help the PI improve his or her grantspersonship*. They are not charged with helping the PI get this particular grant funded on revision. They are not charged with being kind or nice to the PI. They are not charged with saving someone's career.
They are not charged with deciding what grants to fund!
The fact that we are not supposed to so much as mention the "f-word", i.e., "funding", has been communicated verbally by every single SRO I have ever reviewed under. They tend to do this at the opening of the meeting and sometimes in the pre-meeting introductory phone call. Many SROs of my acquaintance also spit this out like a reflex during the course of the meeting if they ever hear a reviewer mention it.
The rest of my statements are best evaluated as I wrote them. I.e., by looking at the NIH review guidance material to see what the reviewers are instructed to do. There is a complete absence of any statements suggesting the job is to help out the applicant. There is a complete absence of any statement suggesting the job is to decide what to fund. The task is described, assertively:
Make recommendations concerning the scientific and technical merit of applications under review, in the form of final written comments and numerical scores.
As far as more positive assertions on the "fixing applications" front go, the most direct thing I can find at present is in the instruction on the "Additional Comments to Applicant" section of the critique template (take a look at that template if you've never reviewed). This document says:
As an NIH reviewer, your written critique should focus on evaluating the scientific and technical merit of an application and not on helping the applicant rewrite the application. But what if you desire to provide some information or tips to the applicant? The Additional Comments to Applicant box is designed just for that purpose.
My emphasis added. In case this isn't clear enough, the following can be taken in the context of the other guidance document comments about reviewing the scientific and technical merit.
Your comments in this box should not be about the scientific or technical merit of an application; do not factor into the final impact score; are not binding; and do not represent a consensus by the review panel. But this type of information may be useful to an applicant.
Clear. Right? The rest of the review is not about being helpful. Comments designed to be helpful to the applicant are not to contribute to the scientific and technical merit review.
Now the comment also asked this:
What fraction of reviewers do you think understand it like you say?
I haven't the foggiest idea. Obviously I think that there is no way anyone who is paying the slightest bit of attention could fail to grasp these simple assertions. And I think that probably, if challenged, the vast majority of reviewers would at least ruefully admit that they understand that helping the applicant is not the job.
But we are mostly professors and academics who have a pronounced native or professionally acquired desire to help people out. As I've said repeatedly on this blog, the vast majority of grant applications have at least something to like about them. And if academic scientists get a little tinge of "gee that sounds interesting", their next instinct is usually "how would I make this better". It's default behavior, in my opinion.
So of course SROs are fighting an uphill battle to keep reviewers focused on what the task is supposed to be.
DataHound requested information on submissions and awards for the baby MIRA program from NIGMS. His first post noted what he considered to be a surprising number of applications rejected prior to review. The second post identifies what appears to be a disparity in success for applicants who identify as Asian* compared with those who identify as white.
The differences between the White and Asian results are striking. The difference between the success rates (33.8% versus 18.4%) is statistically significant with a p value of 0.006. The difference between the all-applications success rates (29.4% versus 13.2%) is also statistically significant with a p value of 0.0008. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is statistically significant with p = 0.007.
There was also a potential sign of a disparity for applicants that identify as female versus male.
Male: Success rate = 28.9%, Probability of administrative rejection = 21.0%, All applications success rate = 22.8%
Female: Success rate = 23.2%, Probability of administrative rejection = 21.1%, All applications success rate = 18.3%
Although these results are not statistically significant, the first two parameters trend in favor of males over females. If these percentages persisted in larger sample sizes, they could become significant.
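For what it's worth, the "not statistically significant" call is easy to sanity-check with a two-proportion z-test. A minimal sketch, using only the Python standard library and the counts DataHound reports (19 of 82 reviewed applications from women funded; 63 of 218 from men). Note the normal approximation here is my assumption — DataHound may well have used Fisher's exact test instead:

```python
from math import sqrt, erfc

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (pooled, normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal survival function
    pval = erfc(abs(z) / sqrt(2))
    return p1, p2, pval

# Counts taken from DataHound's figures quoted later in this post:
# women: 19 funded of 82 reviewed; men: 63 funded of 218 reviewed
f_rate, m_rate, pval = two_prop_ztest(19, 82, 63, 218)
print(f"female {f_rate:.1%}, male {m_rate:.1%}, p = {pval:.2f}")
```

With these counts the two-sided p comes out around 0.3 — consistent with the "trend, but not significant" reading.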
Same old, same old. Right? No matter what aspect of the NIH grant award we are talking about, men and white people always do better than women and non-white people.
The man-bites-dog part of the tale involves what NIGMS published on their blog about this.
Basson, Preuss and Lorsch report on the Feedback Loop blog entry dated 9/30/2016 that:
One step in this effort is to make sure that existing skews in the system are not exacerbated during the MIRA selection process. To assess this, we compared the gender, race/ethnicity and age of those MIRA applicants who received an award with those of the applicants who did not receive an award
We did not observe any significant differences in the gender or race/ethnicity distributions of the MIRA grantees as compared to the MIRA applicants who did not receive an award. Both groups were roughly 25% female and included ≤10% of underrepresented racial/ethnic groups. These proportions were also not significantly different from those of the new and early stage R01 grantees. Thus although the MIRA selection process did not yet enhance these aspects of the diversity of the awardee pool relative to the other groups of grantees, it also did not exacerbate the existing skewed distribution.
Hard to reconcile with DataHound's report which comes from data requested under FOIA, so I presume it is accurate. Oh, and despite small numbers of "Others"* DataHound also noted:
The differences between the White and Other category results are less pronounced but also favored White applicants. The difference between the success rates (33.8% versus 21.1%) is not statistically significant although it is close with a p value of 0.066. The difference between the all-applications success rates (29.4% versus 16.2%) is statistically significant with a p value of 0.004. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is not statistically significant with p = 0.14 although the trend favors White applicants.
Not sure how NIGMS will choose to weasel out of being caught in a functional falsehood. Perhaps "did not observe" means "we took a cursory look and decided it was close enough for government work". Perhaps they are relying on the fact that the gender effects were not statistically significant, as DataHound noted. Women PIs were 19 out of 82 (23.2%) of the funded and 63/218 (28.9%) of the reviewed-but-rejected apps. This is not the way DataHound calculated success rate, I believe, but because by chance there were 63 female apps reviewed-but-rejected and 63 male apps awarded funding, the math works out the same.
There appears to be no excuse whatever for the NIGMS team missing the disparity for Asian PIs.
The probability of administrative rejection really requires some investigation on the part of NIGMS, because this would appear to be a huge miscommunication, even if we do not know where to place the blame for the breakdown. If I were NIGMS honchodom, I'd be moving mountains to make sure that POs were communicating the goals of the various FOAs fairly and equivalently to every PI who contacted them.
*A small number of applications for this program (403 were submitted, per DataHound's first post) means that there were insufficient numbers of applicants from other racial/ethnic categories to get much in the way of specific numbers. The NIH has rules (or possibly these are general FOIA rules) about reporting on cells that contain too few PIs...something about being able to identify them too directly.
WOW. This comment from dsks absolutely nails it to the wall.
The NIH is supposed to be taking on a major component of the risk in scientific research by playing the role of investor; instead, it seems to operate more as a consumer, treating projects like products to be purchased only when complete and deemed sufficiently impactful. In addition to implicitly encouraging investigators to flout rules like that above, this shifts most of the risk onto the shoulders of the investigator, who must use her existing funds to spin the roulette wheel and hope that the projects her lab is engaged in will be both successful and yield interesting answers. If she strikes it lucky, there's a chance of recouping the cost from the NIH. However, if the project is unsuccessful, or successful but produces one of the many not-so-pizzazz-wow answers, the PI's investment is lost, and at a potentially considerable cost to her career if she's a new investigator.
Of course one might lessen the charge slightly by observing that it is really the University that is somehow investing in the exploratory work that may eventually become of interest to the buyer. Whether the University then shifts the risk onto the lowly PI is a huge concern, but not inevitable. They could continue to provide seed money, salary, etc to a professor who does not manage to write a funded grant application.
Nevertheless, this is absolutely the right way to look at the ever growing obligation for highly specific Preliminary Data to support any successful grant application. Also the way to look at a study section culture that is motivated in large part by perceived "riskiness" (which underlies a large part of the failure to reward untried investigators from unknown Universities compared with established PIs from coastal elite institutions).
NIH isn't investing in risky science. It is purchasing science once it looks like most of the real risk has been avoided.
I have never seen this so clearly, so thanks to dsks for expressing it.
— Drug Monkey (@drugmonkeyblog) September 20, 2016
The R21 Mechanism is called the Exploratory/Developmental mechanism. Says so right in the title.
NIH Exploratory/Developmental Research Grant Program (Parent R21)
In the real world of NIH grant review, however, the "Developmental" part is entirely ignored in most cases. If you want a more accurate title, it should be:
NIH High Risk / High Reward Research Grant Program (Parent R21)
This is what reviewers favor, in my experience sitting on panels and occasionally submitting an R21 app. Mine are usually more along the lines of developing a new line of research that I think is important rather than being truly "high risk/high reward".
And, as we all know, the R01 application (5 years, full modular at $250K per annum direct costs if you please) absolutely requires a ton of highly specific Preliminary Data.
So how are you supposed to Develop an idea into this highly specific Preliminary Data? Well, there's the R21, right? Says right in the title that it is Developmental.
But....it doesn't work in practice.
So the R01 is an alternative. After all it is the most flexible mechanism. You could submit an R01 for $25K direct costs for one year. You'd be nuts, but you could. Actually you could submit an R03 or R21 for one $25K module too, but with the R01 you would then have the option to put in a competitive renewal to continue the project along.
The only thing stopping this from being a thing is the study section culture that won't accept it. Me, I see a lot of advantages to using shorter (and likely smaller) R01 proposals to develop a new line of work. It is less risky than a 5 year R01, for those that focus on risk/$. It has an obvious path of continuation as a genuinely Developmental attempt. It is more flexible in scope and timing- perhaps what you really need is $100K per year for 3 years (like the old R21) for your particular type of research or job type. It doesn't come laden with quite the same "high risk, high reward" approach to R21 review that biases for flash over solid workmanlike substance.
The only way I see this working is to try it. Repeatedly. Settle in for the long haul. Craft your Specific Aims opening to explain why you are taking this approach. Take the Future Directions blurb and make it really sparkle. Think about using milestones and decision points to convince the reviewers you will cut this off at the end if it isn't turning out to be that productive. Show why your particular science, job category, institute or resources match up to this idea.
Or you could always just shout aimlessly into the ether of social media.
New thing I learned is that you can check on your continuous submission* status via the Personal Profile tab on Commons. It lists this by each Fiscal Year and gives the range of dates.
It even lists all of your study section participations. In case you don't keep track of that but have a need to use it.
I have been made aware of an apparent variation from the rules recently (6 study sections in an 18 mo interval). Anyone else ever heard of such a thing?
I've used continuous submission only a handful of times, to my recollection. TBH, I've gone through long intervals of eligibility without realizing I was eligible, because the eligibility period has a long forward tail relative to the 18-month window of the six services that qualify you.
How about you, Readers? Are you a big user of this privilege? Does it help you out or not so much? Do you never remember you are actually eligible?
*As a reminder, continuous submission isn't really continual. You have to get them in by Aug 15, Dec 15 and Apr 15 for the respective Cycles.
NOT-OD-16-131 indicates the projected salary changes for postdoctoral fellows supported under NRSA awards.
As anticipated, the first two years were elevated to meet the third year of the prior scale (plus a bit) with a much flatter line across the first three years of postdoctoral experience.
What think you, o postdocs and PIs? Is this a fair* response to the Obama overtime rules?
Will we see** institutions (or PIs) just extend that shallow slope out for Years 3-7+?
h/t Odyssey and correction of my initial misread from @neuroecology
*As a reminder, $47,484 in 2016 dollars equals $39,715 in 2006 dollars, $30,909 in 1996 dollars and $21,590 in 1986 dollars. Also, the NRSA Yr 0 for postdocs was $20,292 for FY1997 and $36,996 for FY2006.
**I bet yes***.
***Will this be the same old jerks that already flatlined postdoc salaries? or will PIs who used to apply yearly bumps now be in a position where they just flatline since year 1 has increased so much?
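The constant-dollar comparisons in the first footnote are just CPI ratios. A minimal sketch, using approximate annual-average CPI-U index values (the index values below are my assumptions — the footnote's exact figures depend on which month's index the inflation calculator used, so the outputs will differ slightly):

```python
# Approximate annual-average CPI-U index values (assumed for illustration;
# BLS publishes the exact monthly and annual figures)
CPI = {1986: 109.6, 1996: 156.9, 2006: 201.6, 2016: 240.0}

def in_year_dollars(amount, from_year, to_year):
    """Convert a dollar amount between years by the ratio of CPI indices."""
    return amount * CPI[to_year] / CPI[from_year]

# The new NRSA year-0 stipend ($47,484), expressed in earlier-year dollars
for yr in (2006, 1996, 1986):
    print(yr, round(in_year_dollars(47_484, 2016, yr)))
```

The results land within a few hundred dollars of the footnote's $39,715 / $30,909 / $21,590, which is as close as annual-average indices will get you.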
I pointed out some time ago that the full modular R01 grant from the NIH doesn't actually pay for itself.
In the sense that there is a certain expectation of productivity, progress, etc on the part of study sections and Program that requires more contribution than can be afforded (especially when you put it in terms of 40 hr work weeks) within the budget. Trainees on individual fellowships or training grants, undergrads working for free or work study discount, cross pollination with other grants in the lab (which often leads to whinging like your comment), pilot awards for small bits, faculty hard money time...all of these sources of extra effort are frequently poured into a one-R01 project. I think they are, in essence, necessary.
I had some additional thoughts on this recently.
It's getting worse.
Look, it has always been the case that reviewers want to see more in a grant proposal. More controls, usually. Extra groups to really nail down the full breadth of...whatever it is that you are studying. This really cool other line of converging evidence... anything is possible.
All I can reflect is my own experience in getting my proposals reviewed and in reviewing proposals that are somewhat in the same subfields.
What I see is a continuing spiral of both PI offerings and of reviewer demands.
It's inevitable, really. If you see a proposal chock full of nuts that maybe doesn't quite get over the line of funding because of whatever reason, how can you give a fundable score to a very awesome and tight proposal that is more limited?
Conversely, in the effort to put your best foot forward you, as applicant, are increasingly motivated to throw every possible tool at your disposal into the proposal, hoping to wow the reviewers into submission.
I have reviewed multiple proposals recently that cannot be done. Literally. They cannot be accomplished for the price of the budget proposed. Nobody blinks an eye about this. They might talk about "feasibility" in the sense of scientific outcomes or preliminary data or, occasionally, some perceived deficit of the investigators/environment. But I have not heard a reviewer say "nice but there is no way this can be accomplished for $250K direct". Years ago people used to crab about "overambitious" proposals but I can't say I've heard that in forever. In this day and age of tight NIH paylines, the promises of doing it all in one R01 full-modular 5 year interval are escalating.
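To make "cannot be accomplished for $250K direct" concrete, here is a back-of-the-envelope tally. Every line item and rate below is a hypothetical assumption for illustration, not an actual NIH or institutional figure; the point is only how fast personnel eats a full-modular budget:

```python
# Hypothetical direct-cost line items for one year of a full-modular R01
# (all figures are illustrative assumptions, not actual rates)
personnel = {
    "postdoc salary": 48_000,
    "postdoc fringe (~30%)": 14_400,
    "technician salary": 40_000,
    "technician fringe (~30%)": 12_000,
    "PI effort": 20_000,
}
other = {
    "supplies and reagents": 40_000,
    "animal per diems": 25_000,
    "publication and travel": 8_000,
}

MODULAR_CAP = 250_000  # full-modular direct costs per year

total = sum(personnel.values()) + sum(other.values())
print(f"total direct costs: ${total:,} of ${MODULAR_CAP:,} cap")
print(f"headroom left for added experiments: ${MODULAR_CAP - total:,}")
```

Even this lean hypothetical lab leaves well under $50K of headroom. Add the second postdoc, the extra groups and the converging line of evidence that reviewers request, and the arithmetic simply fails.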
These grants set a tone, btw. I'm here to tell you that I've seen subfield related proposals that do seem feasible, money-wise, get nailed because they are too limited in scope. In some cases there is enough study-section continuity involved for me to be certain that this is due to reviewer contamination from the aforementioned chock-full-o-nuts impossible proposals. Yes, some of this is due to SABV but not all of it. It ranges from "why you no include more co-investigators?" (a subtle spread-the-wealth knock on big labs? maybe) to "You really need to add X, Y and Z to be convincing" (mkay but... $250K dude) to "waaah, I just want to see more" (even though they don't really have a reason to list).
Maybe this is just me being stuck in the rut I was trained in. In my formative years, grant review seemed to expect you would propose a set of studies that you could actually accomplish within the time frame and budget proposed. I seem to remember study section members curbing each other with "Dude, the PI can't fit all that stuff into one proposal, back off." I used to see revisions get improved scores when the PI stripped a bloated proposal down to a minimalist streamlined version.
Maybe we are just experiencing a meaningless sea change in grant review to where we propose the sky and nobody cares on competing renewal if we managed to accomplish all of that stuff.
superkash started it:
— thomas kash (@superkash) May 27, 2016
and amplified it:
— thomas kash (@superkash) May 27, 2016
there was a bit of chatter and then eventually AListScientist asserted:
— A+ Scientist (@AListScientist) May 27, 2016
I, as well as several other colleagues who review grants, have noticed a seemingly sharp uptick in the number of applications coming in from PIs who are more "transitioning" than "transitioned". PIs whose job titles might be something other than "Assistant Professor" and ones who are still in or around the same laboratory or research group in which they have done a big chunk of postdoctoral work. In extreme cases the PI might still be titled "Postdoc" or have trained in the same place essentially since graduate school!
Readers of this blog might conclude that this trend, which I've been noticing for at least the past 3-4 rounds, delights me. And to the extent that it represents a recognition of the problems junior scientists face in making the career transition to independence, this does appear to be a positive step. To the extent that it opens up artificial barriers blocking the next generation of scientists- great.
The slightly more cynical view expressed by colleagues and, admittedly, myself is that this trend has been motivated by IC Program behavior both in capping per-PI award levels and in promoting grant success for New Investigators. In other words that the well-established PIs with very large research groups are thinking that grants for which they would otherwise be the PI will now be more successful with some convenient patsy long-term postdoc at the helm. The science, however, is just the same old stuff of the established group and PI.
I surmise that the tweeting of @superkash was related to this conundrum. I would suggest to newcomers to the NIH system that these issues are still alive and well and contribute in various ways to grant review outcome. We see very clearly in various grant/career related discussion on twitter, this blog and commentary to various NIH outlets that peer scientists have strong ideas on categories of PI that deserve or don't deserve funding. For example in the recent version on CSR's Peer Review website, comments suggest we should keel the yuge labs, keel the ESIs, keel the riffraff noobs and save the politically disconnected. The point being that peer reviewers come with biases for and against the PI(s) (and to lesser extent the other investigators).
The fact that the Investigator criterion is one of the five biggies (and there is no official suggestion that it is of any lesser importance than Approach, Significance or Innovation) permits (and one might say requires) the reviewers to exercise these biases. It also shows that AListScientist's apparent belief that Investigators are not to be evaluated because the applicant University has certified them is incorrect.
The official CSR Guidance on review of the Investigator criterion is posed as a series of questions:
Are the PD/PIs, collaborators, and other researchers well suited to the project? If Early Stage Investigators or New Investigators, or in the early stages of independent careers, do they have appropriate experience and training? If established, have they demonstrated an ongoing record of accomplishments that have advanced their field(s)? If the project is collaborative or multi-PD/PI, do the investigators have complementary and integrated expertise; are their leadership approach, governance and organizational structure appropriate for the project?
Right there you can see where the independence of the PI might be of interest to the reviewer.
"have they demonstrated an ongoing record of accomplishments"
We want to know what they personally have accomplished. Or caused to have accomplished, if you want to natter about PIs not really doing hands-on science. The point is, can this PI make the proposed studies happen? Is there evidence that she has done so before? Or is there merely evidence that he has existed as glorified hands in the PI's lab up to this point in time?
"are their leadership approach, governance and organizational structure appropriate for the project?"
Can they lead? Can they boot all the tails hard enough to get this project accomplished? I say that this is an entirely appropriate consideration.
I hope you do as well and I would be interested to hear a counter argument.
I suspect that most of the pushback on this comes from the position of thinking about the Research Assistant Professor who IS good enough. Who HAS operated more or less independently and led projects in the SuperLab.
The question for grant review is, how are we to know? From the record and application in front of us.
I am unable to leave this part off: If you are a RAP or heading to be one as a mid to late stage postdoc, the exhortation to you is to lay down evidence of your independence as best you are able. Ask Associate Professor peers that you know about what possible steps you can take to enhance the optics of you-as-PI on this.
The Aims shall be Three, and Three shall be the number of Aims.
Four shalt there not be, nor Two except as they precede the Third Aim.
Five is right out.