Archive for the 'NIH funding' category

SABV in NIH Grant Review

We're several rounds of grant submission/review past the NIH's demand that applications consider Sex As a Biological Variable (SABV). I have reviewed grants from the first round of this obligation until just recently and have observed a few things coming into focus. There's still a lot of wiggle and uncertainty but I am seeing a few things emerge in my domains of grants that include vertebrate animals (mostly rodent models).

1) It is unwise to ignore SABV.

2) Inclusion of both sexes has to be done judiciously. If you put a sex comparison in the Aim, or too prominently as a point of hypothesis testing, you are going to get the full blast of sex-comparisons review. Which you want to avoid, because you will get killed on the usual: power, estrus effects that "must" be there, various caveats about why male and female rats aren't the same (behaviorally, pharmacokinetically, etc.), regardless of what your preliminary data show.

3) The key is to include both sexes and say you will look at the data to see if there appears to be any difference. Then say the full examination will be a future direction or slightly modify the subsequent experiments.

4) Nobody seems to be fully embracing the SABV concept as laid out in the formal pronouncements, i.e., that you use sample sizes that are half male and half female in perpetuity if you don't see a difference. I am not surprised. This is the hardest thing for me to accept personally and I know for certain sure manuscript reviewers won't go for it either.

Then there comes the biggest categorical split in approach that I have noticed so far.

5a) Some people appear to use a few targeted female-including (yes, the vast majority still propose males as default and females as the SABV-satisfying extra) experiments to check main findings.

5b) The other take is just to basically double everything up and say "we'll run full groups of males and females". This is where it gets entertaining.

I have been talking for some time now about the fact that the R01 doesn't pay for itself.

The full modular, $250K per year NIH grant does not pay for itself, in the sense that study sections and Program hold a certain expectation of productivity and progress that requires more contribution than can be afforded (especially when you put it in terms of 40-hour work weeks) within the budget.

The R01 still doesn't pay for itself and reviewers are getting worse

I have reviewed multiple proposals recently that cannot be done. Literally. They cannot be accomplished for the price of the budget proposed. Nobody blinks an eye about this. They might talk about "feasibility" in the sense of scientific outcomes or preliminary data or, occasionally, some perceived deficit of the investigators/environment. But I have not heard a reviewer say "nice but there is no way this can be accomplished for $250K direct".
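To put rough numbers behind that claim, here is a back-of-the-envelope sketch of annual direct costs for a modest rodent-model lab. Every figure below is a hypothetical placeholder chosen for illustration, not a number from any actual grant or from NIH policy.

```python
# Hypothetical, illustrative numbers only -- not from any real budget.
# Rough annual personnel costs (salary + fringe at an assumed ~30%):
postdoc = 48_000 * 1.30
technician = 42_000 * 1.30
pi_effort = 150_000 * 0.20 * 1.30  # assumed 20% PI effort on the grant
grad_student = 32_000 * 1.30

personnel = postdoc + technician + pi_effort + grad_student
supplies_and_animals = 60_000      # per diems, drugs, consumables
publication_and_travel = 8_000

total = personnel + supplies_and_animals + publication_and_travel
print(f"annual direct costs: ${total:,.0f} vs the $250,000 modular cap")
```

Even with these deliberately modest placeholders, a four-person effort overshoots the full-modular direct cost cap before you double the animal groups for SABV.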

Well, "we're going to duplicate everything in females" as a response to the SABV initiative just administered the equivalent of HGH to this trend. There is approximately zero real-world dealing with this in the majority of grants that slap in the females and, from what I have seen, no comment whatever from reviewers on feasibility. We are just entirely ignoring this.

What I am really looking forward to is the review of grants in about 3 years time. At that point we are going to start seeing competing continuation applications where the original promised to address SABV. In a more general sense, any app from a PI who has been funded in the post-SABV-requirement interval will also face a simple question.

Has the PI addressed SABV in his or her work? Have they taken it seriously, conducted the studies (prelim data?) and hopefully published some things (yes, even negative sex-comparisons)?

If not, we should, as reviewers, drop the hammer. No more vague, hand-wavy stuff like I am seeing in proposals now. The PI had better show some evidence of having tried.

What I predict, however, is more excuse making and more bad faith claims to look at females in the next funding interval.

Please prove me wrong, scientists in my fields of study.

__
Additional Reading:
NIH's OER blog Open Mike on the SABV policies.
NIH Reviewer Guidance [PDF]


Undue influence of frequent NIH grant reviewers

Feb 07 2018 Published by under Fixing the NIH, Grant Review, NIH, NIH Careerism, NIH funding

A quotation

Currently 20% of researchers perform 75-90% of reviews, which is an unreasonable and unsustainable burden.

referencing this paper on peer review appeared in a blog post by Gary McDowell. It caught my eye when referenced on the twitts.

The stat refers to manuscript/journal peer review, not the NIH grant review system, but I started thinking about NIH grant review anyway. Part of this is because I recently had to re-explain one of my key beliefs about a major limitation of the NIH grant review system to someone who should know better.

NIH Grant review is an inherently conservative process.

The reason is that the vast majority of reviews of the merit of grant applications are provided by individuals who have already been chosen to serve as Principal Investigators of one or more NIH grant awards. They have had grant proposals selected as meritorious by the prior bunch of reviewers and are now contributing strongly to the decision about the next set of proposals that will be funded.

The system is biased to select for grant applications written in a way that looks promising to people who have either been selected for writing grants in the same old way or who have been beaten into writing grants that look the same old way.

Like tends to beget like in this system. What is seen as meritorious today is likely to be very similar to what has been viewed as meritorious in the past.

This is further amplified by the social dynamics of a person who is newly asked to review grants. Most of us are very sensitive to being inexperienced, very sensitive to wanting to do a good job and feel almost entirely at sea about the process when first asked to review NIH grants. Even if we have managed to stack up 5 or 10 reviews of our proposals from that exact same study section prior to being asked to serve. This means that new reviewers are shaped even more by the culture, expectations and processes of the existing panel, which is staffed with many experienced reviewers.

So what about those experienced reviewers? And what about the number of grant applications that they review during their assigned term of 4 (3 cycles per year, please) or 6 (2 of 3 cycles per year) years of service? With about 6-10 applications to review per round, this could easily amount to highly influential (read: one of the three primary assigned reviewers) review of 100 applications. The person has additional general influence in the panel as well, both through direct input on grants under discussion and on the general tenor and tone of the panel.

When I was placed on a study section panel for a term of service I thought the SRO told us that empaneled reviewers were not supposed to be asked for extra review duties on SEPs or as ad hoc on other panels by the rest of the SRO pool. My colleagues over the years have disabused me of the idea that this was anything more than aspirational talk from this SRO. So many empaneled reviewers are also contributing to review beyond their home review panel.

My question of the day is whether this is a good idea and whether there are ethical implications for those of us who are asked* to review NIH grants.

We all think we are great evaluators of science proposals, of course. We know best. So of course it is all right, fair and good when we choose to accept a request to review. We are virtuously helping out the system!

At what point are we contributing unduly to the inherent conservativeness of the system? We all have biases. Some are about irrelevant characteristics like the ethnicity** of the PI. Some are considered more acceptable and are about our preferences for certain areas of research, models, approaches, styles, etc. Regardless these biases are influencing our review. Our review. And one of the best ways to counter bias is the competition of competing biases. I.e., let someone else's bias into the mix for a change, eh buddy?

I don't have a real position on this yet. After my term of empaneled service, I accepted or rejected requests to review based on my willingness to do the work and my interest in a topic or mechanism (read: SEPs FTW). I've mostly kept it pretty minimal. However, I recently messed up because I had a cascade of requests last fall that sucked me in: a "normal" panel (ok, ok, I haven't done my duty in a while), followed by a topic SEP (ok, ok, I am one of a limited pool of experts, I'll do it) and then a RequestThatYouDon'tRefuse. So I've been doing more grant review lately than I have usually done in recent years. And I'm thinking about scope of influence on the grants that get funded.

At some point is it even ethical to keep reviewing so damn much***? Should anyone agree to serve successive 4 or 6 year terms as an empaneled reviewer? Should one say yes to every SRO request that comes along? They are going to keep asking so it is up to us to say no. And maybe to recommend the SRO ask some other person who is not on their radar?

___
*There are factors which enhance the SRO pool picking on the same old reviewers, btw. There's a sort of expectation that if you have review experience you might be okay at it. I don't know how much SROs talk to each other about prospective reviewers and their experience with the same, but there must be some chit chat. "Hey, try Dr. Schmoo, she's a great reviewer" versus "Oh, no, do not ever ask Dr. Schnortwax, he's toxic". There are the diversity rules that they have to follow as well: there must be diversity with respect to the geographic distribution, gender, race and ethnicity of the membership. So people who help the SROs' diversity stats might be picked more often than some other people who are straight white males from the most densely packed research areas in the country working on the most common research topics using the most usual models and approaches.

**[cough]Ginther[cough, cough]

***No idea what this threshold should be, btw. But I think there is one.


The past is prologue: Political NIH interference edition

Jan 24 2017 Published by under NIH, NIH funding, Science Politics, Science Vault

From a prestigious general science journal:

"Important elements in both Senate and the House are showing increasing dissatisfaction over Congress's decade-long honeymoon with medical research....critics are dissatisfied...with the NIH's procedures for supervising the use of money by its research grantees....NIH officials...argued, rather, that the most productive method in financing research is to pick good people with good projects and let them carry out their work without encumbering them...its growth has been phenomenal....[NIH director]: nor do we believe that most scientific groups in the country have an asking and a selling price for their product which is research activity...we get a realistic appraisal of what they need to do the job...the supervisory function properly belongs to the universities and other institutions where the research takes place....closing remarks of the report are:...Congress has been overzealous in appropriating money for health research".

D.S. Greenberg, Medical Research Funds: NIH Path Through Congress Has Developed Troublesome Bumps, Science 13 Jul 1962, Vol. 137, Issue 3524, pp. 115-119
DOI: 10.1126/science.137.3524.115 [link]
__
Previously posted.


Foreign Applicant Institution NIH Grants

Nov 21 2016 Published by under NIH, NIH funding

The NIH allows non-US Universities and other institutions to apply for NIH grants. I don't pay much attention to this issue, so I don't know how many such applications are submitted or funded.
What I am curious about is whether the PIs who review have noticed anything about these applications. How are they received in study sections you have attended? Is there a high bar for the unique environment or capabilities? 
I believe I was on study section during a transition from the foreign applications being treated essentially like domestic apps to one where there was intense skepticism. This was around the mid-naughties, approximately a decade ago.
I'm curious what you folks are seeing. 


Reminder: The purpose of NIH grant review is not to fix the application

Oct 07 2016 Published by under Grant Review, NIH Careerism, NIH funding

A question on my prior post wanted to know if my assertions were official and written or not.

DM, how do you know that this is the case? I mean, I don't doubt that this is the case, but is it explicitly articulated somewhere?

This was in response to the following statements from me.


They are not charged with trying to help the PI improve his or her grantspersonship*. They are not charged with helping the PI get this particular grant funded on revision. They are not charged with being kind or nice to the PI. They are not charged with saving someone's career.

They are not charged with deciding what grants to fund!

The fact that we are not supposed to so much as mention the "f-word", i.e., "funding", has been communicated verbally by every single SRO I have ever reviewed under. They tend to do this at the opening of the meeting and sometimes in the pre-meeting introductory phone call. Many SROs of my acquaintance also spit this out like a reflex during the course of the meeting if they ever hear a reviewer mention it.

The rest of my statements are best evaluated as I wrote them. I.e., by looking at the NIH review guidance material to see what the reviewers are instructed to do. There is a complete absence of any statement suggesting the job is to help out the applicant. There is a complete absence of any statement suggesting the job is to decide what to fund. The task is described assertively:

Make recommendations concerning the scientific and technical merit of applications under review, in the form of final written comments and numerical scores.

As far as more positive assertions on the "fixing applications" front go, the most direct thing I can find at present is in the instruction on the "Additional Comments to Applicant" section of the critique template (take a look at that template if you've never reviewed). This document says:

As an NIH reviewer, your written critique should focus on evaluating the scientific and technical merit of an application and not on helping the applicant rewrite the application. But what if you desire to provide some information or tips to the applicant? The Additional Comments to Applicant box is designed just for that purpose.

My emphasis added. In case this isn't clear enough, the following can be taken in the context of the other guidance document comments about reviewing the scientific and technical merit.

Your comments in this box should not be about the scientific or technical merit of an application; do not factor into the final impact score; are not binding; and do not represent a consensus by the review panel. But this type of information may be useful to an applicant.

Clear. Right? The rest of the review is not about being helpful. Comments designed to be helpful to the applicant are not to contribute to the scientific and technical merit review.

Now the comment also asked this:

What fraction of reviewers do you think understand it like you say?

I haven't the foggiest idea. Obviously I think that there is no way anyone who is paying the slightest bit of attention could fail to grasp these simple assertions. And I think that probably, if challenged, the vast majority of reviewers would at least ruefully admit that they understand that helping the applicant is not the job.

But we are mostly professors and academics who have a pronounced native or professionally acquired desire to help people out. As I've said repeatedly on this blog, the vast majority of grant applications have at least something to like about them. And if academic scientists get a little tinge of "gee that sounds interesting", their next instinct is usually "how would I make this better". It's default behavior, in my opinion.

So of course SROs are fighting an uphill battle to keep reviewers focused on what the task is supposed to be.


NIH always jukes the stats in their favor

Oct 04 2016 Published by under Gender, NIH, NIH Careerism, NIH funding

DataHound requested information on submissions and awards for the baby MIRA program from NIGMS. His first post noted what he considered to be a surprising number of applications rejected prior to review. The second post identifies what appears to be a disparity in success for applicants who identify as Asian* compared with those who identify white.

The differences between the White and Asian results are striking. The difference between the success rates (33.8% versus 18.4%) is statistically significant with a p value of 0.006. The difference between the all applications success rates (29.4% versus 13.2%) is also statistically significant with a p value of 0.0008. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is statistically significant with p = 0.007.

There was also a potential sign of a disparity for applicants that identify as female versus male.

Male: Success rate = 28.9%, Probability of administrative rejection = 21.0%, All applications success rate = 22.8%

Female: Success rate = 23.2%, Probability of administrative rejection = 21.1%, All applications success rate = 18.3%

Although these results are not statistically significant, the first two parameters trend in favor of males over females. If these percentages persisted in larger sample sizes, they could become significant.

Same old, same old. Right? No matter what aspect of the NIH grant award we are talking about, men and white people always do better than women and non-white people.

The man-bites-dog part of the tale involves what NIGMS published on their blog about this.

Basson, Preuss and Lorsch report on the Feedback Loop blog entry dated 9/30/2016 that:

One step in this effort is to make sure that existing skews in the system are not exacerbated during the MIRA selection process. To assess this, we compared the gender, race/ethnicity and age of those MIRA applicants who received an award with those of the applicants who did not receive an award
...
We did not observe any significant differences in the gender or race/ethnicity distributions of the MIRA grantees as compared to the MIRA applicants who did not receive an award. Both groups were roughly 25% female and included ≤10% of underrepresented racial/ethnic groups. These proportions were also not significantly different from those of the new and early stage R01 grantees. Thus although the MIRA selection process did not yet enhance these aspects of the diversity of the awardee pool relative to the other groups of grantees, it also did not exacerbate the existing skewed distribution.

Hard to reconcile with DataHound's report, which comes from data requested under FOIA, so I presume it is accurate. Oh, and despite small numbers of "Others"*, DataHound also noted:

The differences between the White and Other category results are less pronounced but also favored White applicants. The difference between the success rates (33.8% versus 21.1%) is not statistically significant although it is close with a p value of 0.066. The difference between the all applications success rates (29.4% versus 16.2%) is statistically significant with a p value of 0.004. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is not statistically significant with p = 0.14, although the trend favors White applicants.

Not sure how NIGMS will choose to weasel out of being caught in a functional falsehood. Perhaps "did not observe" means "we took a cursory look and decided it was close enough for government work". Perhaps they are relying on the fact that the gender effects were not statistically significant, as DataHound noted. Women PIs accounted for 19 of the 82 funded apps (23.2%) and 63 of the 218 reviewed-but-rejected apps (28.9%). This is not the way DataHound calculated success rate, I believe, but because by chance there were 63 female apps reviewed-but-rejected and 63 male apps awarded funding, the math works out the same.
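DataHound presumably ran an exact test on the raw counts, but even a plain chi-square approximation shows why the gender difference falls short of significance. This is a stdlib-Python sketch, not DataHound's actual analysis; the 2x2 table is reconstructed from the counts quoted above.

```python
from math import erfc, sqrt

# 2x2 table from the counts above: rows = (funded, reviewed-but-rejected),
# columns = (female PI, male PI). 19/82 funded apps had female PIs;
# 63/218 reviewed-but-rejected apps did.
table = [[19, 63],    # funded:   female, male
         [63, 155]]   # rejected: female, male

def chi_square_2x2(t):
    """Pearson chi-square (no continuity correction) for a 2x2 table."""
    row = [sum(r) for r in t]
    col = [t[0][j] + t[1][j] for j in range(2)]
    n = sum(row)
    chi2 = sum((t[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
               for i in range(2) for j in range(2))
    p = erfc(sqrt(chi2 / 2))  # upper-tail p for chi-square with 1 df
    return chi2, p

chi2, p = chi_square_2x2(table)
female_rate = 19 / (19 + 63)   # success rate for female PIs, ~23.2%
male_rate = 63 / (63 + 155)    # success rate for male PIs, ~28.9%
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # p lands well above 0.05
```

So with these sample sizes the gap is nowhere near significance, which is exactly the fig leaf NIGMS may be leaning on.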

There appears to be no excuse whatever for the NIGMS team missing the disparity for Asian PIs.

The probability of administrative rejection really requires some investigation on the part of NIGMS. Because this would appear to be a huge miscommunication, even if we do not know where to place the blame for the breakdown. If I were NIGMS honchodom, I'd be moving mountains to make sure that POs were communicating the goals of the various FOAs fairly and equivalently to every PI who contacted them.

Related Reading.
__
*A small number of applications for this program (403 were submitted, per DataHound's first post) means that there were insufficient numbers of applicants from other racial/ethnic categories to get much in the way of specific numbers. The NIH has rules (or possibly these are general FOIA rules) about reporting on cells that contain too few PIs...something about being able to identify them too directly.


The NIH has shifted from being an investor in research to a consumer of research

WOW. This comment from dsks absolutely nails it to the wall.

The NIH is supposed to be taking on a major component of the risk in scientific research by playing the role of investor; instead, it seems to operate more as a consumer, treating projects like products to be purchased only when complete and deemed sufficiently impactful. In addition to implicitly encouraging investigators to flout rules like that above, this shifts most of the risk onto the shoulders of the investigator, who must use her existing funds to spin the roulette wheel and hope that the projects her lab is engaged in will be both successful and yield interesting answers. If she strikes it lucky, there's a chance of recouping the cost from the NIH. However, if the project is unsuccessful, or successful but produces one of the many not-so-pizzazz-wow answers, the PI's investment is lost, and at a potentially considerable cost to her career if she's a new investigator.

Of course one might lessen the charge slightly by observing that it is really the University that is somehow investing in the exploratory work that may eventually become of interest to the buyer. Whether the University then shifts the risk onto the lowly PI is a huge concern, but not inevitable. They could continue to provide seed money, salary, etc to a professor who does not manage to write a funded grant application.

Nevertheless, this is absolutely the right way to look at the ever growing obligation for highly specific Preliminary Data to support any successful grant application. Also the way to look at a study section culture that is motivated in large part by perceived "riskiness" (which underlies a large part of the failure to reward untried investigators from unknown Universities compared with established PIs from coastal elite institutions).

NIH isn't investing in risky science. It is purchasing science once it looks like most of the real risk has been avoided.

I have never seen this so clearly, so thanks to dsks for expressing it.


Bring back the 2-3 year Developmental R01

Sep 19 2016 Published by under Fixing the NIH, NIH, NIH funding

The R21 Mechanism is called the Exploratory/Developmental mechanism. Says so right in the title.

NIH Exploratory/Developmental Research Grant Program (Parent R21)

In the real world of NIH grant review, however, the "Developmental" part is entirely ignored in most cases. If you want a more accurate title, it should be:

NIH High Risk / High Reward Research Grant Program (Parent R21)

This is what reviewers favor, in my experience sitting on panels and occasionally submitting an R21 app. Mine are usually more along the lines of developing a new line of research that I think is important rather than being truly "high risk/high reward".

And, as we all know, the R01 application (5 years, full modular at $250K per annum direct costs if you please) absolutely requires a ton of highly specific Preliminary Data.

So how are you supposed to Develop an idea into this highly specific Preliminary Data? Well, there's the R21, right? Says right in the title that it is Developmental.

But....it doesn't work in practice.

So the R01 is an alternative. After all it is the most flexible mechanism. You could submit an R01 for $25K direct costs for one year. You'd be nuts, but you could. Actually you could submit an R03 or R21 for one $25K module too, but with the R01 you would then have the option to put in a competitive renewal to continue the project along.

The only thing stopping this from being a thing is the study section culture that won't accept it. Me, I see a lot of advantages to using shorter (and likely smaller) R01 proposals to develop a new line of work. It is less risky than a 5 year R01, for those that focus on risk/$. It has an obvious path of continuation as a genuinely Developmental attempt. It is more flexible in scope and timing- perhaps what you really need is $100K per year for 3 years (like the old R21) for your particular type of research or job type. It doesn't come laden with quite the same "high risk, high reward" approach to R21 review that biases for flash over solid workmanlike substance.

The only way I see this working is to try it. Repeatedly. Settle in for the long haul. Craft your Specific Aims opening to explain why you are taking this approach. Take the Future Directions blurb and make it really sparkle. Think about using milestones and decision points to convince the reviewers you will cut this off at the end if it isn't turning out to be that productive. Show why your particular science, job category, institute or resources match up to this idea.

Or you could always just shout aimlessly into the ether of social media.


Continuous Submission Eligibility

Aug 23 2016 Published by under NIH, NIH funding

New thing I learned is that you can check on your continuous submission* status via the Personal Profile tab on Commons. It lists this by each Fiscal Year and gives the range of dates.

It even lists all of your study section participations. In case you don't keep track of that but have a need to use it.

I have been made aware of an apparent variation from the rules recently (6 study sections in an 18 mo interval). Anyone else ever heard of such a thing?

I've used continuous submission only a handful of times, to my recollection. TBH, I've gone through long intervals of eligibility without realizing I was eligible, because this policy has a long forward tail compared to when you qualify with 6 services in 18 months.

How about you, Readers? Are you a big user of this privilege? Does it help you out or not so much? Do you never remember you are actually eligible?

__
*As a reminder, continuous submission isn't really continual. You have to get them in by Aug 15, Dec 15 and Apr 15 for the respective Cycles.


Projected NRSA salary scale for FY2017

NOT-OD-16-131 indicates the projected salary changes for postdoctoral fellows supported under NRSA awards.

Being the visual person that I am...
[Chart: NRSA postdoc stipend levels by year of experience, FY2016 scale versus projected FY2017 scale]

As anticipated, the first two years were elevated to meet the third year of the prior scale (plus a bit) with a much flatter line across the first three years of postdoctoral experience.

What think you, o postdocs and PIs? Is this a fair* response to the Obama overtime rules?

Will we see** institutions (or PIs) where they just extend that shallow slope out for Years 3-7+?

h/t Odyssey and correction of my initial misread from @neuroecology
__
*As a reminder, $47,484 in 2016 dollars equals $39,715 in 2006 dollars, $30,909 in 1996 dollars and $21,590 in 1986 dollars. Also, the NRSA Yr 0 for postdocs was $20,292 for FY1997 and $36,996 for FY2006.

**I bet yes***.

***Will this be the same old jerks that already flatlined postdoc salaries? or will PIs who used to apply yearly bumps now be in a position where they just flatline since year 1 has increased so much?

