Reminder: The purpose of NIH grant review is not to fix the application

(by drugmonkey) Oct 07 2016

A question on my prior post asked whether my assertions were official and written down somewhere.

DM, how do you know that this is the case? I mean, I don't doubt that this is the case, but is it explicitly articulated somewhere?

This was in response to the following statements from me.

They are not charged with trying to help the PI improve his or her grantspersonship*. They are not charged with helping the PI get this particular grant funded on revision. They are not charged with being kind or nice to the PI. They are not charged with saving someone's career.

They are not charged with deciding what grants to fund!

The fact that we are not supposed to so much as mention the "f-word", i.e., "funding", has been communicated verbally by every single SRO I have ever reviewed under. They tend to do this at the opening of the meeting and sometimes in the pre-meeting introductory phone call. Many SROs of my acquaintance also spit this out like a reflex during the course of the meeting if they ever hear a reviewer mention it.

The rest of my statements are best evaluated as I wrote them, i.e., by looking at the NIH review guidance material to see what the reviewers are instructed to do. There is a complete absence of any statement suggesting the job is to help out the applicant. There is a complete absence of any statement suggesting the job is to decide what to fund. The task is described assertively:

Make recommendations concerning the scientific and technical merit of applications under review, in the form of final written comments and numerical scores.

As far as more positive assertions on the "fixing applications" front go, the most direct thing I can find at present is in the instructions for the "Additional Comments to Applicant" section of the critique template (take a look at that template if you've never reviewed). This document says:

As an NIH reviewer, your written critique should focus on evaluating the scientific and technical merit of an application and not on helping the applicant rewrite the application. But what if you desire to provide some information or tips to the applicant? The Additional Comments to Applicant box is designed just for that purpose.

My emphasis added. In case this isn't clear enough, the following can be taken in the context of the other guidance document comments about reviewing the scientific and technical merit.

Your comments in this box should not be about the scientific or technical merit of an application; do not factor into the final impact score; are not binding; and do not represent a consensus by the review panel. But this type of information may be useful to an applicant.

Clear. Right? The rest of the review is not about being helpful. Comments designed to be helpful to the applicant are not to contribute to the scientific and technical merit review.

Now the comment also asked this:

What fraction of reviewers do you think understand it like you say?

I haven't the foggiest idea. Obviously I think that there is no way anyone who is paying the slightest bit of attention could fail to grasp these simple assertions. And I think that probably, if challenged, the vast majority of reviewers would at least ruefully admit that they understand that helping the applicant is not the job.

But we are mostly professors and academics who have a pronounced native or professionally acquired desire to help people out. As I've said repeatedly on this blog, the vast majority of grant applications have at least something to like about them. And if academic scientists get a little tinge of "gee that sounds interesting", their next instinct is usually "how would I make this better". It's default behavior, in my opinion.

So of course SROs are fighting an uphill battle to keep reviewers focused on what the task is supposed to be.

10 responses so far

Reminder: The purpose of NIH grant review is not to help out the applicant with kindness

(by drugmonkey) Oct 06 2016

The reviewers of NIH grant applications are charged with helping the Program staff of the relevant Institute or Center of the NIH decide on relative merits of applications as they, the Program staff, consider which ones to select for funding.


They are not charged with trying to help the PI improve his or her grantspersonship*. They are not charged with helping the PI get this particular grant funded on revision. They are not charged with being kind or nice to the PI. They are not charged with saving someone's career.

They are not charged with deciding what grants to fund!

If they can also be kind, help the PI improve her grant for next time, help her improve her grantsmithing in general and/or in passing save someone's career, hey great. Bonus. Perfectly acceptable outcome of the process.

But if the desire to accomplish any of these things compromises the assessment of merit** that serves the needs of the Program staff, that reviewer is screwing up.

*Maybe start a blog if this is your compulsion? I've heard that works for some people who have such urges.

**"merit" in this context is not necessarily what any given reviewer happens to think it is a priori, either. For example, there could be a highly targeted funding opportunity with stated goals that a given reviewer doesn't really agree with. IMV, that reviewer is screwing up if she substitutes her goals for the goals expressed by the I or C in the funding opportunity announcement.

14 responses so far

NIH always jukes the stats in their favor

(by drugmonkey) Oct 04 2016

DataHound requested information on submissions and awards for the baby MIRA program from NIGMS. His first post noted what he considered to be a surprising number of applications rejected prior to review. The second post identifies what appears to be a disparity in success for applicants who identify as Asian* compared with those who identify white.

The differences between the White and Asian results are striking. The difference between the success rates (33.8% versus 18.4%) is statistically significant with a p value of 0.006. The difference between the all-applications success rates (29.4% versus 13.2%) is also statistically significant with a p value of 0.0008. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is statistically significant with p = 0.007.

There was also a potential sign of a disparity for applicants that identify as female versus male.

Male: Success rate = 28.9%, Probability of administrative rejection = 21.0%, All applications success rate = 22.8%

Female: Success rate = 23.2%, Probability of administrative rejection = 21.1%, All applications success rate = 18.3%

Although these results are not statistically significant, the first two parameters trend in favor of males over females. If these percentages persisted in larger sample sizes, they could become significant.
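DataHound's caveat about sample size is easy to check for yourself: hold the observed rates fixed and scale up the group sizes until the gap crosses the conventional threshold. A minimal sketch in Python follows; the equal group sizes are hypothetical, and only the 28.9% and 23.2% success rates come from the quoted post.

# Sketch: the male/female success rates from the post, applied to
# hypothetical equal-sized groups, to show how the same gap becomes
# statistically significant as sample size grows.
from scipy.stats import fisher_exact

male_rate, female_rate = 0.289, 0.232

for n in (150, 300, 1000, 3000):
    table = [
        [round(male_rate * n), n - round(male_rate * n)],      # men: funded, not funded
        [round(female_rate * n), n - round(female_rate * n)],  # women: funded, not funded
    ]
    _, p = fisher_exact(table)
    print(f"n = {n} per group: Fisher exact p = {p:.3f}")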

Same old, same old. Right? No matter what aspect of the NIH grant award we are talking about, men and white people always do better than women and non-white people.

The man-bites-dog part of the tale involves what NIGMS published on their blog about this.

Basson, Preuss and Lorsch report in a Feedback Loop blog entry dated 9/30/2016 that:

One step in this effort is to make sure that existing skews in the system are not exacerbated during the MIRA selection process. To assess this, we compared the gender, race/ethnicity and age of those MIRA applicants who received an award with those of the applicants who did not receive an award.
We did not observe any significant differences in the gender or race/ethnicity distributions of the MIRA grantees as compared to the MIRA applicants who did not receive an award. Both groups were roughly 25% female and included ≤10% of underrepresented racial/ethnic groups. These proportions were also not significantly different from those of the new and early stage R01 grantees. Thus although the MIRA selection process did not yet enhance these aspects of the diversity of the awardee pool relative to the other groups of grantees, it also did not exacerbate the existing skewed distribution.

Hard to reconcile with DataHound's report, which comes from data requested under FOIA, so I presume it is accurate. Oh, and despite small numbers of "Others"*, DataHound also noted:

The differences between the White and Other category results are less pronounced but also favored White applicants. The difference between the success rates (33.8% versus 21.1%) is not statistically significant, although it is close, with a p value of 0.066. The difference between the all-applications success rates (29.4% versus 16.2%) is statistically significant with a p value of 0.004. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is not statistically significant, with p = 0.14, although the trend favors White applicants.

Not sure how NIGMS will choose to weasel out of being caught in a functional falsehood. Perhaps "did not observe" means "we took a cursory look and decided it was close enough for government work". Perhaps they are relying on the fact that the gender effects were not statistically significant, as DataHound noted. Women PIs accounted for 19 of the 82 funded apps (23.2%) and 63 of the 218 reviewed-but-rejected apps (28.9%). This is not the way DataHound calculated success rate, I believe, but because by chance there were 63 female apps reviewed-but-rejected and 63 male apps awarded funding, the math works out the same.
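For what it's worth, those counts are sufficient to rerun the gender comparison yourself. A minimal sketch, assuming a Fisher's exact test on the funded versus reviewed-but-rejected table (DataHound's exact test isn't named in the excerpts, so treat the choice of test as my assumption):

# Counts from the post: 82 funded (19 women, 63 men) and
# 218 reviewed-but-rejected (63 women, 155 men).
from scipy.stats import fisher_exact

table = [[63, 155],  # men: funded, reviewed-but-rejected
         [19, 63]]   # women: funded, reviewed-but-rejected
_, p = fisher_exact(table)
print(f"male success rate   = {63 / (63 + 155):.1%}")  # 28.9%
print(f"female success rate = {19 / (19 + 63):.1%}")   # 23.2%
print(f"Fisher exact p = {p:.2f}")  # not statistically significant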

There appears to be no excuse whatever for the NIGMS team missing the disparity for Asian PIs.

The probability of administrative rejection really requires some investigation on the part of NIGMS, because this would appear to be a huge miscommunication, even if we do not know where to place the blame for the breakdown. If I were NIGMS honchodom, I'd be moving mountains to make sure that POs were communicating the goals of the various FOAs fairly and equivalently to every PI who contacted them.

Related Reading.
*A small number of applications for this program (403 were submitted, per DataHound's first post) means that there were insufficient numbers of applicants from other racial/ethnic categories to get much in the way of specific numbers. The NIH has rules (or possibly these are general FOIA rules) about reporting on cells that contain too few PIs...something about being able to identify them too directly.

19 responses so far

Completely uncontroversial PI comment on paper writing

(by drugmonkey) Sep 29 2016


46 responses so far

Responding to Targeted NIH Grant Funding Opportunity Announcements

(by drugmonkey) Sep 29 2016

The NIH FOAs come in many flavors of specificity. Some, usually Program Announcements, are very broad and appear to permit a wide range of applications to fit within them. My favorite example of this is NIDA's "Neuroscience Research on Drug Abuse" PA.

They also come in highly specific varieties, generally as RFAs.

The targeted FOAs are my topic for the day because they can be frustrating in the extreme. No matter how finely described for the type, these FOAs are inevitably too broad to let each and every interested PI know exactly how to craft her application. Or, more importantly, whether to bother. There is always a scientific contact, a Program Officer, listed, so the first thing to do is email or call this person. This can also be frustrating. Sometimes one gets great advice, sometimes it is perplexing.

As always, I can only offer up the way I look at these things.

As an applicant PI facing an FOA that seems vaguely of interest to me, I have several variables at play. First, despite the fact that Program may have written the FOA in a particular way, this doesn't mean that they really know what they want. The FOA language may be a committee result, or it may simply be that nobody thought a highly specific type of proposal was necessary to satisfy whatever goals and motivations existed.

Second, even if they do know what they want in Programville, peer review is always the primary driver. If you can't escape triage, it is highly unlikely that Program will fund your application, even if it fits their intent to a T. So as the applicant PI, I have to consider how peers are likely to interpret the FOA and how they are likely to apply it to my application. It is not impossible that the advice and perspective given to the prospective PI by the contact PO flies rather severely in the face of that PI's best estimate of what is likely to occur during peer review. This leaves a conundrum.

How best to navigate peer review while also serving up a proposal that is attractive to Program, in case they are looking to reach down out of the order of review for a proposal that matches what they want?

Finally, as I mention now and again, there is an advocacy role for the PI when applying for NIH funding. It is part and parcel of the job of the PI to tell Program what they should be funding. By, of course, serving up such a brilliantly argued application that they see that your take on their FOA is the best take. Even if this may not have been their intent in the first place. This also, btw, applies to the study section members. Your job is in part to convince them, not to meet whatever their preconceptions or reading of the FOA might be.

Somehow, the PI has to stew all of these considerations together and come up with a plan for the best possible proposal. Unfortunately, you can miss the mark. Not because your application is necessarily weak or your work doesn't fit the FOA in some objective sense. Merely because you have decided to make choices, gambles and interpretations that have led you in a particular direction, which may very well be the "wrong" direction.

Most severely, you might be rejected without review. This can happen. If you do not meet the PO's idea of being within the necessary scope of what they would ever plan to fund, no matter the score, you could have your application prevented from being routed to the study section.

Alternatively, you might get triaged by a panel that just doesn't see it your way. That wonders if you, the idiot PI, were reading the same FOA that they were. It happens.

Finally, you might get a good score and Program may decide to skip over it for lack of responsiveness to their intent. Or you may be in the grey zone and fail to get a pickup because other grants scoring below yours are deemed closer to what they want to fund.

My point for today is that I think this is a necessary error in the system. It is not evidence of a wholesale problem with the NIH FOA approach if you shoot wide to the left. If you fail to really understand the intent of the FOA as written. Or if you come away from your initial chat with the PO with a misguided understanding. Or even if you run into the buzzsaw of a review panel that rebels against the FOA.

Personally, I think you just have to take your chances. Arrive at your best understanding of what the FOA intends and how the POs are going to interpret various proposals. Sure. And craft your application accordingly. But you have to realize that you may be missing the point entirely. You may fail to convince anyone of your brilliant take on the FOA's stated goals. This doesn't mean the system is broken.

So take your shots. Offer up your best interpretation on how to address the goals. And then bear down and find the next FOA and work on that. In case your first shot sails over the crossbar.

It always fascinates me how fairly wide-flung experiences with NIH funding sometimes coalesce around the same issue. This particular post was motivated by no less than three situations being brought to my attention in the past week. Different ICs, different FOAs, different mechanisms and vastly different topics and IC intentions. But to me, the answers are the same.

12 responses so far

The NIH has shifted from being an investor in research to a consumer of research

(by drugmonkey) Sep 21 2016

WOW. This comment from dsks absolutely nails it to the wall.

The NIH is supposed to be taking on a major component of the risk in scientific research by playing the role of investor; instead, it seems to operate more as a consumer, treating projects like products to be purchased only when complete and deemed sufficiently impactful. In addition to implicitly encouraging investigators to flout rules like that above, this shifts most of the risk onto the shoulders of the investigator, who must use her existing funds to spin the roulette wheel and hope that the projects her lab is engaged in will be both successful and yield interesting answers. If she strikes it lucky, there's a chance of recouping the cost from the NIH. However, if the project is unsuccessful, or successful but produces one of the many not-so-pizzazz-wow answers, the PI's investment is lost, and at a potentially considerable cost to her career if she's a new investigator.

Of course one might lessen the charge slightly by observing that it is really the University that is somehow investing in the exploratory work that may eventually become of interest to the buyer. Whether the University then shifts the risk onto the lowly PI is a huge concern, but not inevitable. They could continue to provide seed money, salary, etc., to a professor who does not manage to write a funded grant application.

Nevertheless, this is absolutely the right way to look at the ever-growing obligation for highly specific Preliminary Data to support any successful grant application. It is also the way to look at a study section culture that is motivated in large part by perceived "riskiness" (which underlies a large part of the failure to reward untried investigators from unknown Universities compared with established PIs from coastal elite institutions).

NIH isn't investing in risky science. It is purchasing science once it looks like most of the real risk has been avoided.

I have never seen this so clearly, so thanks to dsks for expressing it.

38 responses so far

Repost: Keep the ball in play

(by drugmonkey) Sep 21 2016

This was originally posted 16 September, 2014.

We're at the point of the fiscal year where things can get really exciting. The NIH budget year ends Sept 30 and the various Institutes and Centers need to balance up their books. They have been funding grants throughout the year on the basis of the shifting sands of peer review with an attempt to use up all of their annual allocation on the best possible science.

Throughout the prior two Council rounds of the year, they necessarily have to be a bit conservative. After all, they don't know in the first Round if maybe they will have a whole bunch of stellar scores come in during the third Round. Some one-off funding opportunities are perhaps scheduled for consideration only during the final Round. Etc.

Also, the amount of funding requested for each grant varies. So maybe they have a bunch of high scoring proposals that are all very inexpensive? Or maybe they have many in the early rounds of the year that are unusually large?

This means that come September, the ICs are sometimes sitting on unexpended funds and need to start picking up proposals that weren't originally slated to fund. Maybe it is a supplement, maybe it is a small mechanism like a R03 or R21. Maybe they will offer you 2 years of funding of an R01 proposed for 5. Maybe they will offer you half the budget you requested. Maybe they have all of a sudden discovered a brand new funding priority and the quickest way to hit the ground running is to pick something up with end-of-year funds.

Now obviously, you cannot game this out for yourself. There is no way to rush in a proposal at the end of the year (save for certain administrative supplements). There is no way for you to predict what your favorite IC is going to be doing in September; maybe they have exquisite prediction and always play it straight up by priority score right to the end, sticking within the lines of the Council rounds. And of course, you cannot assume lobbying some lowly PO for a pickup is going to work out for you.

There is one thing you can do, Dear Reader.

It is pretty simple. You cannot receive one of these end-of-year unexpected grant awards unless you have a proposal on the books and in play. That means, mostly, a score and not a triage outcome. It means, in a practical sense, that you had better have your JIT information all squared away because this can affect things. It means, so I hear, that this is FINALLY the time when your IC will quite explicitly look at overhead rates to see about total costs and screw over those evil bastiges at high overhead Universities that you keep ranting about on the internet. You can make sure you have not just an R01 hanging around but also a smaller mech like an R03 or R21.

It happens*. I know lots and lots of people who have received end-of-the-FY largesse that they were not expecting. Received this type of benefit myself. It happens because you have *tried* earlier in the year to get funding and have managed to get something sitting on the books, just waiting for the spotlight of attention to fall upon you.

So keep that ball in play, my friends. Keep submitting credible apps. Keep your Commons list topped off with scored apps.

*As we move into October, you can peruse SILK and RePORTER to see which proposals have a start date of Sep 30. Those are the end-of-year pickups.
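If you'd rather script that check than click around, RePORTER has a query API these days. A rough sketch follows: the endpoint, criteria and field names reflect my reading of the RePORTER v2 API and should be treated as assumptions to verify against the current docs before you rely on them.

# Hypothetical sketch: pull NIGMS projects for a fiscal year from the
# RePORTER v2 API and keep those with a Sep 30 start date, the likely
# end-of-year pickups. Verify endpoint and field names against the docs.
import requests

URL = "https://api.reporter.nih.gov/v2/projects/search"
payload = {
    "criteria": {"fiscal_years": [2016], "agencies": ["NIGMS"]},
    "include_fields": ["ProjectNum", "ProjectTitle", "ProjectStartDate"],
    "offset": 0,
    "limit": 500,
}
results = requests.post(URL, json=payload, timeout=30).json().get("results", [])

for rec in results:
    start = rec.get("project_start_date") or ""
    if start.startswith("2016-09-30"):
        print(rec.get("project_num"), "-", rec.get("project_title"))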

h/t: some Reader who may or may not choose to self-identify 🙂

4 responses so far

Bring back the 2-3 year Developmental R01

(by drugmonkey) Sep 19 2016

The R21 Mechanism is called the Exploratory/Developmental mechanism. Says so right in the title.

NIH Exploratory/Developmental Research Grant Program (Parent R21)

In the real world of NIH grant review, however, the "Developmental" part is entirely ignored in most cases. If you want a more accurate title, it should be:

NIH High Risk / High Reward Research Grant Program (Parent R21)

This is what reviewers favor, in my experience sitting on panels and occasionally submitting an R21 app. Mine are usually more along the lines of developing a new line of research that I think is important rather than being truly "high risk/high reward".

And, as we all know, the R01 application (5 years, full modular at $250K per annum direct costs if you please) absolutely requires a ton of highly specific Preliminary Data.

So how are you supposed to Develop an idea into this highly specific Preliminary Data? Well, there's the R21, right? Says right in the title that it is Developmental. Doesn't work in practice.

So the R01 is an alternative. After all it is the most flexible mechanism. You could submit an R01 for $25K direct costs for one year. You'd be nuts, but you could. Actually you could submit an R03 or R21 for one $25K module too, but with the R01 you would then have the option to put in a competitive renewal to continue the project along.

The only thing stopping this from being a thing is the study section culture that won't accept it. Me, I see a lot of advantages to using shorter (and likely smaller) R01 proposals to develop a new line of work. It is less risky than a 5-year R01, for those that focus on risk/$. It has an obvious path of continuation as a genuinely Developmental attempt. It is more flexible in scope and timing; perhaps what you really need is $100K per year for 3 years (like the old R21) for your particular type of research or job type. It doesn't come laden with quite the same "high risk, high reward" approach to R21 review that biases toward flash over solid, workmanlike substance.

The only way I see this working is to try it. Repeatedly. Settle in for the long haul. Craft your Specific Aims opening to explain why you are taking this approach. Take the Future Directions blurb and make it really sparkle. Think about using milestones and decision points to convince the reviewers you will cut this off at the end if it isn't turning out to be that productive. Show why your particular science, job category, institute or resources match up to this idea.

Or you could always just shout aimlessly into the ether of social media.

41 responses so far

Can you train resilience into grad students or postdocs?

(by drugmonkey) Sep 13 2016

As I've noted on these pages before, my sole detectable talent for this career is the ability to take a punch.

There are a lot of punches in academic science. There is a lot of rejection, and congratulations for a job well done are few and far between. Nobody ever tells you that you are doing enough.

"Looking good, Assistant Professor! Just keep this up, maybe even chill a little now and then, and tenure will be no problem!" - said no Chair ever.

My concern is that resilience in the face of constant rejection, belittling, and unkind Lake Wobegon-style comparisons of your science to that of the true rock stars can have a selection effect. Only certain personality types can stand this.

I happen to have one of these personality types, but it is not anything I can take particular credit for. I was born this way and/or made this way by my upbringing. I cannot say anyone helped train me in this as an academic scientist*.

So I am at a complete loss as to how to help my trainees with this.

Have you any insights, Dear Reader? From your own development as a scientist or as a supervisor of other scientists?

Related Reading: Tales of postdocs past: what did I learn?
*well maybe indirectly. And not in a way I care to extend to any trainee of mine thankyewveerymuch.

71 responses so far

Does it matter how the data are collected?

(by drugmonkey) Sep 12 2016

Commenter jmz4 made a fascinating comment on a prior post:

It is not the journal's responsibility to mete out retractions as a form of punishment(&). Only someone who buys into papers as career accolades would accept that. The journal is there to disseminate accurate scientific information. If the journal has evidence that, despite the complaint, this information is accurate,(%) then it *absolutely* should take that into account when deciding to keep a paper out there.

(&) Otherwise we would retract papers from leches and embezzlers. We don't.

That prior post was focused on data fraud, but this set of comments suggests something a little broader.

I.e., that facts are facts and it doesn't matter how we have obtained them.

This, of course, brings up the little nagging matter of the treatment of research subjects. As you are mostly aware, Dear Readers, the conduct of biomedical experimentation that involves human or nonhuman animal subjects requires an approval process. Boards of people external to the immediate interests of the laboratory in question must review research protocols in advance and approve the use of human (Institutional Review Board; IRB) or nonhuman animal (Institutional Animal Care and Use Committee; IACUC) subjects.

The vast majority of (ok, all) journals of my acquaintance require authors to assert that they have indeed conducted their research under approvals provided by an IRB or IACUC, as appropriate.

So what happens when and if it is determined that experiments have been conducted outside of IRB or IACUC approval?

The position expressed by jmz4 is that it shouldn't matter. The facts are as they are and the data have been collected, so too bad, nothing to be done here. We may tut-tut quietly, but the papers should not be retracted.

I say this is outrageous and nonsense. Of course we should apply punitive sanctions, including retracting the paper in question, if anyone is caught trying to publish research that was not collected under proper ethical approvals and procedures.

In making this decision, the evidence for whether the conclusions are likely to be correct or incorrect plays no role. The journal should retract the paper to remove the rewards and motivations for operating outside of the rules. Absolutely. Publishers are an integral part of the integrity of science.

The idea that journals are just there to report the facts as they become known is dangerous and wrong.

Additional Reading: The whole board of Sweden's top-ranked university was just sacked because of the Macchiarini scandal

13 responses so far
