— dr. melina uncapher (@neuromelina) August 26, 2016
If your NIH grant proposal reads like this, it is not going to do well.
A question and complaint from commenter musclestumbler on a prior thread introduces the issue.
So much oxygen is sucked up by the R01s, the med schools, etc. that it tends to screw over reviews for the other mechanisms. I look at these rosters, then look at the comments on my proposals, and it's obvious that the idea of doing work without a stable of postdocs and a pool of exploitable Ph.D. students is completely alien and foreign to them.
I personally go after R15 and R03 mechanisms because that's all that can be reasonably obtained at my university. ... Postdocs are few and far between. So we run labs with undergrads and Masters students. Given the workload expectations that we have in the classroom as well as the laboratory, the R15 and R03 mechanisms support research at my school. Competing for an R01 is simply not in the cards for the productivity level that we can reasonably pursue...
This isn't simply fatalism, this is actual advice given by multiple program officers and at workshops. These mechanisms are in place to facilitate and foster our research. Unfortunately, these are considered and reviewed by the same panels that review R01s. We are not asking that they create an SEP for these mechanisms - a "little kids table" if you will - but that the panels have people with these similar institutions on them. I consider it a point of pride that my R15 is considered by the same reviewers that see the R01s, and successfully funded as well.
The point is that, the overwhelming perception and unfortunate reality is that many, many, many of the panelists have zero concept of the type of workload model under which I am employed. And the SROs have a demonstrably poor track record of encouraging institutional diversity. Sure, my panel is diverse- they have people from a medical school, an Ivy League school, and an endowed research institution on the West Coast. They have Country, and Western!
I noted the CSR webpage on study section selection says:
Unique characteristics of study sections must be factored into selection of members. The breadth of science, the multidisciplinary or interdisciplinary nature of the applications, and the types of applications or grant mechanisms being reviewed play a large role in the selection of appropriate members.
It seems very much the case to me that if R15s are habitually being reviewed in sections without participation of any reviewers from R15-eligible institutions, this is a violation of the spirit of this clause.
I suggested that this person should bring this up with their favorite SROs and see what they have to say. I note that now there is a form for requesting "appropriate expertise" when you submit your NIH grant; it may also be useful to use this form to say something about R15-eligible reviewers.
But ultimately we come to the "mercy of the court" aspect of this issue. It is my belief that while yes, the study section is under very serious constraints these days, it is still a human endeavor that occasionally lets real humans make rational decisions. Sometimes, reviewers may go for something that is outside of the norm. Outside of the stereotype of what "has" to be in a proposal of this type. Sometimes, reviewers may be convinced by the peculiarities of a given situation to, gasp, give you a break. So I suggested the following for this person, who had just indicated that his/her R15s do perfectly well in a study section that they think would laugh off their R01 application.
I think this person should try a trimmed down R01 in this situation. Remember the R01 is the most flexible in terms of scope- there is no reason you cannot match it to the budget size of any of the other awards. The upside is that it is for up to five years, better than AREA/R15 (3 y) or R03 (2 y). It is competitively renewable, which may offer advantages. It is an R01, which, as we are discussing in that other thread, may be the key to getting treated like a big kid when it comes to study section empanelment.
The comments from musclestumbler make it sound as if the panels can actually understand the institutional situation, just so long as they are focused on it by the mechanism (R15). The R15 is $100K direct for three years, no? So why not propose an R01 for $100K direct for five years? Or if you, Dear Reader, are operating at an R03 level, ask for $50K direct or $75K direct. And I would suggest that you don't just leave this hidden in the budget; sprinkle wording throughout, everywhere that refers to this being a go-slow but very inexpensive (compared to full mod) project.
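The scope arithmetic here can be sketched out quickly (the per-year figures are the approximate ones from this discussion, not official NIH budget caps):

```python
# Back-of-the-envelope totals for the mechanisms discussed above.
# Per-year direct costs are the approximate figures from the post,
# not official NIH caps.
mechanisms = {
    "R15 (AREA)":    {"dire_per_year": 0, "direct_per_year": 100_000, "years": 3},
    "trimmed R01":   {"direct_per_year": 100_000, "years": 5},
    "R03-scale R01": {"direct_per_year": 50_000,  "years": 5},
}
mechanisms["R15 (AREA)"] = {"direct_per_year": 100_000, "years": 3}

for name, m in mechanisms.items():
    total = m["direct_per_year"] * m["years"]
    print(f"{name}: ${total:,} total direct over {m['years']} years")
```

The point being that a $100K/year R01 asks reviewers for the same annual budget as the R15, but buys two extra years and a shot at competitive renewal.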
Be very clear about your time commitment (summers only? fine, just make it clear) and the use of undergrads (predict the timeline and research pace) in much the same way you do for an R15 but make the argument for a longer term, renewable R01. Explain why you need it for the project, why it is justified and why a funded version will be productive, albeit at a reduced pace. See if any reviewers buy it. I would.
Sometimes you have to experiment a little with the NIH system. You'd be surprised how many times it works in ways that are not exactly the stereotypical and formal way things are supposed to work.
New thing I learned is that you can check on your continuous submission* status via the Personal Profile tab on Commons. It lists this by each Fiscal Year and gives the range of dates.
It even lists all of your study section participations. In case you don't keep track of that but have a need to use it.
I have been made aware of an apparent variation from the rules recently (6 study sections in an 18 mo interval). Anyone else ever heard of such a thing?
I've used continuous submission only a handful of times, to my recollection. TBH, I've gone for long stretches not realizing I was eligible, because this policy has a long forward tail relative to when you qualify with 6 services / 18 mo.
How about you, Readers? Are you a big user of this privilege? Does it help you out or not so much? Do you never remember you are actually eligible?
*As a reminder, continuous submission isn't really continuous. You have to get them in by Aug 15, Dec 15 and Apr 15 for the respective Cycles.
NOT-OD-16-131 indicates the projected salary changes for postdoctoral fellows supported under NRSA awards.
As anticipated, the first two years were elevated to meet the third year of the prior scale (plus a bit) with a much flatter line across the first three years of postdoctoral experience.
What think you, o postdocs and PIs? Is this a fair* response to the Obama overtime rules?
Will we see** institutions (or PIs) where they just extend that shallow slope out for Years 3-7+?
h/t Odyssey and correction of my initial misread from @neuroecology
*As a reminder, $47,484 in 2016 dollars equals $39,715 in 2006 dollars, $30,909 in 1996 dollars and $21,590 in 1986 dollars. Also, the NRSA Yr 0 for postdocs was $20,292 for FY1997 and $36,996 for FY2006.
**I bet yes***.
***Will this be the same old jerks that already flatlined postdoc salaries? or will PIs who used to apply yearly bumps now be in a position where they just flatline since year 1 has increased so much?
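A quick sketch of what the footnoted dollar equivalents imply about purchasing-power erosion (the equivalents are the ones quoted in the footnote above; the ratios are derived from them, not pulled from an official CPI series):

```python
# Implied nominal inflation factors from the dollar equivalents in the
# footnote above (derived from the quoted figures, not an official CPI series).
equivalents = {2016: 47_484, 2006: 39_715, 1996: 30_909, 1986: 21_590}

base_year, base = 2016, equivalents[2016]
for year in sorted(equivalents):
    factor = base / equivalents[year]
    print(f"${base:,} in {base_year} ~ ${equivalents[year]:,} in {year} "
          f"({factor:.2f}x nominal)")
```

In other words, the new Yr 0 stipend has to be more than double its 1986-dollar equivalent just to stand still.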
Inspiring meeting with students at AMPATH, Eldoret, Kenya pic.twitter.com/ybdlKGfKvA
— Francis S. Collins (@NIHDirector) August 4, 2016
...a picture he took with the 0.2%.
The Manhattan branch of the US Attorney's Office charged:
The United States’ Complaint-In-Intervention (the “Complaint”) alleges that from July 1, 2003, through June 30, 2015, COLUMBIA impermissibly applied its “on-campus” indirect cost rate – instead of the much lower “off-campus” indirect cost rate – when seeking federal reimbursement for 423 NIH grants where the research was primarily performed at off-campus facilities owned and operated by the State of New York and New York City. The Complaint further alleges that COLUMBIA failed to disclose to NIH that it did not own or operate these facilities and that COLUMBIA did not pay for use of the space for most of the relevant period.
...and Columbia University admitted:
COLUMBIA has admitted that it applied the on-campus indirect cost rate to the 423 NIH grants even though the research was primarily performed in space not owned or operated by Columbia, and that it submitted to NIH certified reports that used the on-campus indirect cost rate to calculate the indirect cost amounts claimed by the university.
Ah, those tricky accountants.
Oh, cool. Paging down in the complaint we get some specifics:
From July 1, 2003, through June 30, 2015, COLUMBIA’s On-Campus F&A Rate was approximately 61 percent, its Off-Campus F&A Rate was 26 percent, and its Modified Off-Campus F&A Rate was 29.4 percent. The Modified Off-Campus F&A Rate was to be applied to research conducted off-campus but within a certain proximity of the COLUMBIA campus.
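To see what that rate gap means in dollars, here is a hypothetical single grant-year (the F&A rates are from the complaint; the $200K direct-cost base is an illustrative modular-grant figure, not from the filing):

```python
# Hypothetical indirect-cost gap for one grant-year.
# F&A rates are from the complaint; the direct-cost base is illustrative.
direct_costs = 200_000
on_campus_rate = 0.61   # rate Columbia applied
off_campus_rate = 0.26  # rate that applied to the off-campus space

on_campus_idc = direct_costs * on_campus_rate
off_campus_idc = direct_costs * off_campus_rate
overcharge = on_campus_idc - off_campus_idc

print(f"On-campus F&A:  ${on_campus_idc:,.0f}")
print(f"Off-campus F&A: ${off_campus_idc:,.0f}")
print(f"Gap:            ${overcharge:,.0f} per grant-year")
```

Multiply a gap of that order across 423 grants and a dozen years and the eventual settlement figure starts to look modest.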
Y'know. When I first read this my first thought was that I know some Columbia folks that work off campus at.... oh shit. It's them. It's the drug abuse folks.
COLUMBIA has a collaborative relationship with the New York State Psychiatric Institute (“NYSPI”), a clinical research facility administered by the New York State Office of Mental Health. COLUMBIA faculty perform research in two off-campus buildings owned by the State of New York and operated by NYSPI (the “NYSPI Buildings”). COLUMBIA faculty also perform research in another off-campus building owned and operated by the City of New York (the “City Building”).
For most of the relevant period, COLUMBIA did not pay the State of New York for use of the NYSPI Buildings, and therefore did not incur indirect “facilities-related” costs with respect to the medical research performed in these buildings. Similarly, COLUMBIA did not pay the City of New York for use of the City Building.
Presumably my friends who are the PIs on these grants had no idea. I have no idea what my institution actually charges the NIH as overhead on my grants; all I look at is my direct cost expenditures and balances. But still, sorry to see that it was their research grants that were involved. If nothing else it means that NIDA [Update: I found some details and 22/423 total grants were direct listings from NIDA, although there are what look like subaward identifiers that may or may not involve other NIDA grants] was the entity being ripped off. The settlement was for $9.5 million. It doesn't say how much of this is direct recovery for the fraud and how much is court costs or punishment.
oh, wait. Damn. This looks bad.
COLUMBIA did not state on the applications for the NIH Grants that the research would be primarily performed off-campus, as required. Instead, Columbia frequently included the main address for the College of Physicians & Surgeons in the section of the application that was supposed to list the primary performance location. Even where the NYSPI Buildings or the City Building were listed in that section of the grant application, or mentioned elsewhere in the application, COLUMBIA failed to disclose that these buildings were not owned and operated by the university.
Starting in fiscal year 2009, in lieu of paying rent for use of one of the NYSPI Buildings, the Department of Neuroscience paid NYSPI a portion of the inflated indirect cost recoveries it received from NIH for research projects performed in that building.
This smells a lot more like highly intentional fraud and less like a mistake that someone should have caught in the pre-award review of the grant, if you ask me. Especially when CU was clearly negotiating the rental arrangements with NYSPI. Someone pretty high up in the office of grants and contracts had to be running this whole charade intentionally and with planning. There are a handful of other regulatory issues that I don't want to get into which very likely pointed a spotlight on the "performance location" too. This had to be intentional.
Turns out that this was a whistleblower case.
In connection with the filing of the lawsuit and settlement, the Government joined a private whistleblower lawsuit that had previously been filed under seal pursuant to the False Claims Act.
Good for that brave person for bringing this to light.
Final thought: I bet you that Columbia University is not the only NIH-funded university out there that pulls shenanigans like this. Now, you would think that there would be some sort of broad and universal alert sent to the Signing Officials of every university that has an on- and off-campus rate, telling them to get their act together or any future investigation that busts them will automatically have the fines tripled. But going by at least one narrow similar area that I've followed over the past couple of decades (the anti-lobbying / grant writing thing), apparently this does not happen. So keep your eyes peeled for the next decade. I bet there will be more of these, and that in each case it will again be figured out only via whistleblower.
I pointed out some time ago that the full modular R01 grant from the NIH doesn't actually pay for itself.
In the sense that there is a certain expectation of productivity, progress, etc on the part of study sections and Program that requires more contribution than can be afforded (especially when you put it in terms of 40 hr work weeks) within the budget. Trainees on individual fellowships or training grants, undergrads working for free or work study discount, cross pollination with other grants in the lab (which often leads to whinging like your comment), pilot awards for small bits, faculty hard money time...all of these sources of extra effort are frequently poured into a one-R01 project. I think they are, in essence, necessary.
I had some additional thoughts on this recently.
It's getting worse.
Look, it has always been the case that reviewers want to see more in a grant proposal. More controls, usually. Extra groups to really nail down the full breadth of...whatever it is that you are studying. This really cool other line of converging evidence... anything is possible.
All I can reflect is my own experience in getting my proposals reviewed and in reviewing proposals that are somewhat in the same subfields.
What I see is a continuing spiral of both PI offerings and of reviewer demands.
It's inevitable, really. If you see a proposal chock full of nuts that maybe doesn't quite get over the line of funding because of whatever reason, how can you give a fundable score to a very awesome and tight proposal that is more limited?
Conversely, in the effort to put your best foot forward you, as applicant, are increasingly motivated to throw every possible tool at your disposal into the proposal, hoping to wow the reviewers into submission.
I have reviewed multiple proposals recently that cannot be done. Literally. They cannot be accomplished for the price of the budget proposed. Nobody blinks an eye about this. They might talk about "feasibility" in the sense of scientific outcomes or preliminary data or, occasionally, some perceived deficit of the investigators/environment. But I have not heard a reviewer say "nice but there is no way this can be accomplished for $250K direct". Years ago people used to crab about "overambitious" proposals but I can't say I've heard that in forever. In this day and age of tight NIH paylines, the promises of doing it all in one R01 full-modular 5 year interval are escalating.
These grants set a tone, btw. I'm here to tell you that I've seen subfield related proposals that do seem feasible, money-wise, get nailed because they are too limited in scope. In some cases there is enough study-section continuity involved for me to be certain that this is due to reviewer contamination from the aforementioned chock-full-o-nuts impossible proposals. Yes, some of this is due to SABV but not all of it. It ranges from "why you no include more co-investigators?" (a subtle spread-the-wealth knock on big labs? maybe) to "You really need to add X, Y and Z to be convincing" (mkay but... $250K dude) to "waaah, I just want to see more" (even though they don't really have a reason to list).
Maybe this is just me being stuck in the rut I was trained in. In my formative years, grant review seemed to expect you would propose a set of studies that you could actually accomplish within the time frame and budget proposed. I seem to remember study section members curbing each other with "Dude, the PI can't fit all that stuff into one proposal, back off.". I used to see revisions get improved scores when the PI stripped a bloated proposal down to a minimalist streamlined version.
Maybe we are just experiencing a meaningless sea change in grant review to where we propose the sky and nobody cares on competing renewal if we managed to accomplish all of that stuff.
The NIGMS continues its ongoing argument for funding more labs with ever decreasing amounts of grant funding in a new Feedback Loop post.
This one focuses, yet again, on "productivity" as assessed by publication counts and (this time) citations of those publications. It is, as always, significantly flawed by ignoring the effects of Glamour publications. I've done that before and it is starting to bore me. In short, you cannot compare apples to oranges because of the immense difference in the cost of generating your average Nature paper versus a Brain Research paper. And citations don't help because getting into a Glam journal does not mean your paper will get any particular number of citations. Furthermore, there is very little chance that papers that cost 10 or 20 times more will generate ten or twenty times the citations, on average, given the skew in citation distributions and the fact that Glam journals are only hitting means in the 30-40 range. Finally, their "efficiency" measures completely ignore the tremendous inefficiencies of interrupted funding, which is a reality under the current system and also not necessarily fixed with their spread-the-wealth schemes.
The real issue of the day is the opinion of the fans of NIGMS's "conclusion*", which reads:
Overall, however, the data suggest that supporting a greater number of investigators at moderate funding levels is a better investment strategy than concentrating high amounts of funding in a smaller number of researchers.
The Sally Rockey blog entry on "mythbusting" is relevant here. As of FY2009 about 72% of NIH funded investigators had one RPG. Another 20% had two and maybe 5% had three.
The NIGMS data analyses are big on fitting productivity lines to about the single R01 level of direct costs (~$200K per year) and showing how the productivity/cost drops off as the grant funding increases. Take a good look at the most recent analysis. Linear productivity up to $300K direct costs with the 75%ile sustained all the way to $500K. The famous original 2010 analysis by Jeremy Berg at NIGMS is pretty similar in the sense that you don't get much change in the line fit to mean publications until you get to the $600-$700K direct costs range.
There is a critical point in lining up these two bits of information: the NIGMS policy intent is not supported by their analysis, and it can't be. The one or two RPG levels from Rockey's post should be interpreted in full modular R01 terms ($250K direct, usually cut to $200K or $225K direct, and in NIGMS' case to 4 years by default), with a little bit of float upwards for the rare cases. Consequently, it is obvious that most NIH awardees operate in the ~$200-250K part of NIGMS' dataset. Another 20% operate in the $400-$500K direct range. In other words, well within the linear part of the productivity/cost curve.
Mean publications as represented by the 2010 Berg analysis are increasing linearly well up to the three to four grant level of $750K direct costs.
In either case, the "inefficient" grant levels are being obtained by a vanishingly small number of investigators.
Fine, screw them, right?
Sure....but this does nothing to address either the stated goal of NIGMS in hedging their bets across many labs or the goal of the unfunded, i.e., to increase their chances substantially.
A recent Mike Lauer blog post showed that about a third of those PIs who seek RPG funding over a rolling 5 year interval achieve funding. Obviously, if you took all the multi-grant PIs and cut them down to one tomorrow, you'd be able to bump funded investigators up by 15-20%, assuming the FY2009 numbers are still relatively good**. It isn't precise because if you limit the big guys to one award, those awards are going to drift up to $499K direct at a minimum, and a lot more will have special permission to crest the $500K threshold.
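A naive back-of-the-envelope on that redistribution, using the approximate FY2009 shares from the Rockey post (this is an upper bound that ignores the budget drift toward $499K just mentioned):

```python
# Naive upper bound on awards freed by a one-grant-per-PI cap.
# Shares are the approximate FY2009 percentages from the Rockey post.
one_grant, two_grant, three_grant = 72, 20, 5  # % of funded PIs

total_awards = one_grant * 1 + two_grant * 2 + three_grant * 3
freed_awards = two_grant * 1 + three_grant * 2  # awards beyond each PI's first

print(f"Awards held per ~100 funded PIs: {total_awards}")
print(f"Awards freed by a one-grant cap: {freed_awards}")
```

On paper that frees roughly 30 awards per 100 funded PIs; in practice, single-award budgets drifting toward the cap eat much of that, which is why 15-20% is the more realistic bump.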
There will be a temporary sigh of relief and some folks will get funded at 26%ile. Sure. And then there will be even more PIs in the game seeking funding and it will continue to be a dogfight to retain that single grant award. And the next round of newbies will face the same steep odds of entry. Maybe even steeper.
So the ONLY way for NIGMS' plan to work is to cut per-PI awards way, way down into the front part of their productivity curves. Well below the point of inflection ($300-500K or even $750K, depending on measure) where papers-per-grant-dollar drops off the linear trend. Even the lowest estimate of $300K direct is more than one full-modular grant. It will take a limit substantially below this level*** to improve perceptions of funding ease or to significantly increase the number of funded labs.
Which makes their argument based on those trends a lie, if they truly intend it to support their "better investment strategy". Changing the number of investigators they support in any fundamental way means limiting per-PI awards to the current full modular limit (with typical reductions) at the least, and very likely substantially below this level to produce anything like a phase change.
That's fine if they want to just assert "we think everyone should only have X amount of direct costs" but it is not so fine if they argue that they have some objective, productivity-based data analysis to support their plans. Because it does not.
*This is actually their long standing assertion that all of these seemingly objective analyses are designed to support.
**should be ballpark, given the way Program has been preserving unfunded labs at the expense of extra awards to funded labs these days.
***I think many people arguing in favor of the NIGMS type of "small grants for all" strategy operate from the position that they personally deserve funding. Furthermore that some grant award of full modular level or slightly below is sufficient for them. Any dishonest throwaway nod to other types of research that are more expensive (as NIGMS did "We recognize that some science is inherently more expensive, for example because of the costs associated with human and animal subjects.") is not really meant or considered. This is somewhat narrow and self-involved. Try assuming that all of the two-granters in Rockey's distribution really need that amount of funding (remember the erosion of purchasing power?) and that puts it at more like 92% of awardees that enjoy basic funding at present. Therefore the squeeze should be proportional. Maybe the bench jockeys should be limited to $100K or even $50K in this scenario? Doesn't seem so attractive if you consider taking the same proportional hit, does it?
That extensive quote from a black PI who had participated in the ECR program is sticking with me.
Insider status isn't binary, of course. It is very fluid within the grant-funded science game. There are various spectra along multiple dimensions.
But make no mistake it is real. And Insider status is advantageous. It can be make-or-break crucial to a career at many stages.
I'm thinking about the benefits of being a full reviewer with occasional/repeated ad hoc status or full membership.
One of those benefits is that other reviewers in SEPs or closely related panels are less likely to mess with you.
It isn't any sort of quid pro quo guarantee. Of course not. But I guarantee that a reviewer who thinks this PI might be reviewing her own proposal in the near future has a bias. A review cant. An alerting response. Whatever.
It is different. And, I would submit, generally to the favor of the applicant that possesses this Mutually Assured Destruction power.
The Ginther finding arose from a thousand cuts, I argue. This is possibly one of them.
If I stroke out today it is all the fault of MorganPhD.
Jeffery Mervis continues with coverage of the NIH review situation as it pertains to the disparity for African-American PIs identified in 2011 (that's five years and fifteen funding rounds ago, folks) by the Ginther report.
The main focus for this week is on the Early Career Reviewer program. As you will recall, this blog has advocated continually and consistently for the participation of more junior PIs on grant review panels.
The ECR program was created explicitly to deal with underrepresented groups. However, what happened is that there was immediate opposition which insisted that the ECR program had to be open to all junior faculty/applicants, regardless of representation in the NIH game.
One-quarter of researchers in ECR's first cohort were from minority groups, he notes. “But as we've gone along, there are fewer underrepresented minorities coming into the pool.”
Minorities comprise only 13% of the roughly 5100 researchers accepted into the program (6% African-American and 7% Hispanic), a percentage that roughly matches their current representation on study sections.
Ok, but how have the ECR participants fared?
[Nakamura] said ECR alumni have been more than twice as successful as the typical new investigator in winning an R01 grant.
NIIIIIICE. Except they didn't flog the data as hard as one might hope. This is against the entire NI (or ESI?) population.
The pool of successful ECR alumni includes those who revised their application, sometimes more than once, after getting feedback on a declined proposal. That extra step greatly improves the odds of winning a grant. In contrast, the researchers in the comparison group hadn't gone through the resubmission process.
Not sure if this really means "hadn't" or "hadn't necessarily". The latter makes more sense if they are just comparing to aggregate stats. CSR data miners would have had to work harder to get this isolated to those who hadn't revised yet, and I suspect if they had gone to that effort, they could have presented the ESIs who had at least one revision under their belt. But what about the underrepresented group of PIs that are the focus of all this effort?
It's also hard to interpret the fact that 18% of the successful ECRs were underrepresented minorities because NIH did not report the fraction of minorities among ECR alumni applicants. So it is not clear whether African-Americans participating in the program did any better than the cohort as a whole—suggesting that the program might begin to close the racial gap—or better than a comparable group of minority scientists who were not ECR alumni.
SERIOUSLY Richard Nakamura? You just didn't happen to request your data miners do the most important analysis? How is this even possible?
How on earth can you not be keeping track of applicants to ECR, direct requests from SROs, response rate and subsequent grant and reviewing behavior? It is almost as if you want to look like you are doing something but have no interest in it being informative or in generating actionable intelligence.
Moving along, we get a further insight into Richard Nakamura and his position in this situation.
Nakamura worries that asking minority scientists to play a bigger role in NIH's grantsmaking process could distract them from building up their lab, finding stable funding, and earning tenure. Serving on a study section, he says, means that “those individuals will have less time to write applications. So we need to strike the right balance.”
Paternalistic nonsense. The same thing that Scarpa tried to use to justify his purge of Assistant Professors from study sections. My answer is the same. Let them decide. For themselves. Assistant Professors and underrepresented PIs can decide for themselves if they are ready and able to take up a review opportunity when asked. Don't decide, paternalistically, that you know best and will refrain from asking for their own good, Director Nakamura!
Fascinatingly, Mervis secured an opinion that echoes this. So Nakamura will surely be reading it:
Riggs, the only African-American in his department, thinks the program is too brief to help minority scientists truly become part of the mainstream, and may even exacerbate their sense of being marginalized.
“After I sat on the panel, I realized there was a real network that exists, and I wasn't part of that network,” he says. “My comments as a reviewer weren't taken as seriously. And the people who serve on these panels get really nervous about having people … that they don't know, or who they think are not qualified, or who are not part of the establishment.”
If NIH “wants this to be real,” Riggs suggests having early-career researchers “serve as an ECR and then call them back in 2 years and have them serve a full cycle. I would have loved to do that.”
The person in the best position to decide what is good or bad for his or her career is the investigator themself.
This comment also speaks to my objection to the ECR as a baby-intro version of peer review. It isn't necessary. I first participated on study section in my Asst Prof years as a regular ad hoc with a load of about six grants, iirc. Might have been two fewer than the experienced folks had, but it was not a baby-trainee experience in the least. I was treated as a new reviewer, but that was about the extent of it. I thought I was taken seriously and did not feel patronized.
Toni Scarpa to leave CSR