Archive for the 'Grant Review' category

Should NIH provide a transcript of the discussion of grants?

Feb 16 2018 Published under Grant Review, NIH funding

Respected neuroscientist Bita Moghaddam seems to think this would be a good idea.

She then goes on to mention that POs listen in on the grant discussion, can take notes, and can give the PI a better summary of that discussion than the one that emerges in the Resume of Discussion written by the SRO.

This variability in PO behavior then leads to some variability in the information communicated to the PI. I've had one experience where a PO gave me such chapter and verse on the discussion that it might have been slightly over the line (pre- and post-discussion scores). Maybe two other cases where the PO gave me a very substantial rundown. But for the most part POs have not been all that helpful: either they didn't attend, or they didn't pay attention that closely, or they just didn't care to tell me anything beyond the "we suggest you revise and resubmit" mantra. She has a good point that it is not ideal that there is so much variability. When I've touched on this issue in the past, I've suggested this is a reason to cultivate as many POs as possible in your grant writing, so that you have a chance of getting the "good" ones now and again. Would providing the transcript of discussion help? Maybe?

Or maybe we could just start lobbying the ICs of our fondest acquaintance to make the effort to get their POs behaving more consistently.

But I have two problems with Professor Moghaddam's proposals. The first, of course, is the quashing effect that de-anonymizing review may have on honest and open comment (a transcript could still be anonymized, but it pushes in the same direction of making reviewers hesitate to speak up). The second is that it reinforces the idea that properly revising a grant application is merely "doing what they said to do", which, the thinking goes, should then make the grant fundable next time.

This is, as you know, not the way the system is set up to work, and it is a gut-feeling behavior of reviewers that the CSR works hard to counter. I don't know if having the transcript would help or hurt in this regard. I guess it would depend on the mindset of the PI when reading it. If they were merely looking to suss out* the relative seriousness of the various critiques, perhaps this would be fine?

__
*My fear is that this would just feed the people who are looking to litigate their review to "prove" that they got screwed and deserve funding.

20 responses so far

Rigor, reproducibility and the good kid

Feb 09 2018 Published under Grant Review, NIH, NIH funding

I was the good kid.

In my nuclear family, in school and in pre-adult employment.

At one point my spouse was in a very large lab and observed how annoying it is when the PI reads everyone the riot act about the sins of a few lab-jerks.

Good citizens find it weird and off-putting when they feel criticized for the sins of others.

They find it super annoying that their own existing good behavior is not recognized.

And they are enraged when the jerko is celebrated for finally, at last managing to act right for once.

Many of us research scientists feel this way when the NIH explains what they mean by their new initiative to enhance "rigor and reproducibility".

____

"What? I already do that, so does my entire subfield. Wait.....who doesn't do that?" - average good-kid scientist response to hearing the specifics of the R&R initiative.

9 responses so far

SABV in NIH Grant Review

We're several rounds of grant submission/review past the NIH's demand that applications consider Sex As a Biological Variable (SABV). I have reviewed grants from the first rounds of this obligation until just recently, and a few things are coming into focus. There's still a lot of wiggle and uncertainty, but some patterns are emerging in my domains of grants that include vertebrate animals (mostly rodent models).

1) It is unwise to ignore SABV.

2) Inclusion of both sexes has to be done judiciously. If you put a sex comparison in an Aim, or too prominently as a point of hypothesis testing, you are going to get the full blast of sex-comparisons review. You want to avoid that because you will get killed on the usual issues: power, estrus effects that "must" be there, various caveats about why male and female rats aren't the same (behaviorally, pharmacokinetically, etc.), regardless of what your preliminary data show.

3) The key is to include both sexes and say you will look at the data to see if there appears to be any difference. Then say the full examination will be a future direction or slightly modify the subsequent experiments.

4) Nobody seems to be fully embracing the version of SABV in the formal pronouncements, under which you keep using sample sizes that are half males and half females in perpetuity if you don't see a difference. I am not surprised. This is the hardest part for me to accept personally, and I know for certain that manuscript reviewers won't go for it either. (A quick power sketch on why half-and-half designs are such a hard sell appears after this list.)

Then there comes the biggest categorical split in approach that I have noticed so far.

5a) Some people appear to use a few targeted female-including experiments to check main findings (yes, the vast majority still propose males as the default and females as the SABV-satisfying extra).

5b) The other take is just to basically double everything up and say "we'll run full groups of males and females". This is where it gets entertaining.
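On the power issue flagged in points 2 and 4 above, here is a minimal sketch of why half-male/half-female groups draw fire in review. The effect sizes, group sizes, target power and the use of statsmodels are all illustrative assumptions of mine, not anything drawn from the NIH guidance.

# A sketch with assumed effect sizes (Cohen's d) and statsmodels' power tools;
# none of these numbers come from NIH policy documents.
from statsmodels.stats.power import TTestIndPower

tt = TTestIndPower()
alpha = 0.05

# Per-group n needed to detect a large treatment effect (d = 0.8) at 80% power
n_treatment = tt.solve_power(effect_size=0.8, alpha=alpha, power=0.8)   # ~26 per group

# Simply splitting those groups half male / half female leaves ~13 per sex, and a
# modest sex-by-treatment difference (assume d = 0.4) is badly underpowered
power_sex = tt.power(effect_size=0.4, nobs1=n_treatment / 2, alpha=alpha)

# Per-sex, per-group n needed to power the sex comparison itself at 80%
n_sex = tt.solve_power(effect_size=0.4, alpha=alpha, power=0.8)          # ~100 per sex per group

print(f"n/group for the treatment effect (d=0.8): {n_treatment:.0f}")
print(f"Power for the sex difference with half-sized groups: {power_sex:.2f}")
print(f"n/sex/group to power the sex difference (d=0.4): {n_sex:.0f}")

The gap between ~13 per sex and ~100 per sex is, roughly, the "full blast of sex-comparisons review" problem: either you inflate the animal numbers substantially or you concede the sex comparison is exploratory.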

I have been talking for some time now about the fact that the R01 doesn't pay for itself. As I have put it before, a full modular, $250K per year NIH grant doesn't actually pay for itself:

the $250K full modular grant does not pay for itself. In the sense that there is a certain expectation of productivity, progress, etc on the part of study sections and Program that requires more contribution than can be afforded (especially when you put it in terms of 40 hr work weeks) within the budget.

The R01 still doesn't pay for itself, and reviewers are getting worse.

I have reviewed multiple proposals recently that cannot be done. Literally. They cannot be accomplished for the price of the budget proposed. Nobody blinks an eye about this. Reviewers might talk about "feasibility" in the sense of scientific outcomes or preliminary data or, occasionally, some perceived deficit of the investigators/environment. But I have not heard a reviewer say "nice, but there is no way this can be accomplished for $250K direct".
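To make that concrete, here is a back-of-envelope sketch. Every number in it (salaries, fringe rate, effort levels, animal and supply costs) is a hypothetical I am supplying for illustration, not anything taken from a real budget.

# Hypothetical numbers for illustration only; real salaries, fringe rates and
# animal costs vary widely by institution, model and year.
direct_costs = 250_000                    # full modular direct costs per year

fringe = 1.30                             # assumed 30% fringe/benefits rate
postdoc = 60_000 * fringe                 # assumed postdoc salary
technician = 50_000 * fringe              # assumed technician salary
pi_effort = 170_000 * 0.20 * fringe       # assumed 20% PI effort on an assumed salary

personnel = postdoc + technician + pi_effort
animals_and_supplies = 70_000             # assumed purchase, per diem, reagents
publication_and_travel = 10_000           # assumed

total = personnel + animals_and_supplies + publication_and_travel
print(f"Personnel: ${personnel:,.0f}")
print(f"Total estimated need: ${total:,.0f} vs ${direct_costs:,} direct")

Even with this minimal staffing, the sketch runs past the modular cap, and that is before anyone doubles the animal line to run full groups of both sexes.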

Well, "we're going to duplicate everything in females" as a response to the SABV initiative just administered the equivalent of HGH to this trend. There is approximately zero real world dealing with this in the majority of grants that slap in the females and from what I have seen no comment whatever from reviewers on feasibility. We are just entirely ignoring this.

What I am really looking forward to is the review of grants in about 3 years' time. At that point we are going to start seeing competing continuation applications whose originals promised to address SABV. In a more general sense, any app from a PI who has been funded in the post-SABV-requirement interval will also face a simple question.

Has the PI addressed SABV in his or her work? Have they taken it seriously, conducted the studies (prelim data?) and hopefully published some things (yes, even negative sex-comparisons)?

If not, we should, as reviewers, drop the hammer. No more of the vague, hand-wavy stuff I am seeing in proposals now. The PI had better show some evidence of having tried.

What I predict, however, is more excuse making and more bad faith claims to look at females in the next funding interval.

Please prove me wrong, scientists in my fields of study.

__
Additional Reading:
NIH's OER blog Open Mike on the SABV policies.
NIH Reviewer Guidance [PDF]

3 responses so far

Undue influence of frequent NIH grant reviewers

Feb 07 2018 Published under Fixing the NIH, Grant Review, NIH, NIH Careerism, NIH funding

A quotation

Currently 20% of researchers perform 75-90% of reviews, which is an unreasonable and unsustainable burden.

referencing this paper on peer review appeared in a blog post by Gary McDowell. It caught my eye when referenced on the twitts.

The stat refers to manuscript/journal peer review and not the NIH grant review system, but I started thinking about NIH grant review anyway. Part of this is because I recently had to re-explain one of my key beliefs about a major limitation of the NIH grant review system to someone who should know better.

NIH Grant review is an inherently conservative process.

The reason is that the vast majority of reviews of the merit of grant applications are provided by individuals who have already been chosen to serve as Principal Investigators of one or more NIH grant awards. They have had grant proposals selected as meritorious by the prior set of reviewers and are now contributing strongly to the decision about the next set of proposals that will be funded.

The system is biased to select for grant applications written in a way that looks promising to people who have either been selected for writing grants in the same old way or who have been beaten into writing grants that look the same old way.

Like tends to beget like in this system. What is seen as meritorious today is likely to be very similar to what has been viewed as meritorious in the past.

This is further amplified by the social dynamics of a person who is newly asked to review grants. Most of us are very sensitive to being inexperienced, very sensitive to wanting to do a good job and feel almost entirely at sea about the process when first asked to review NIH grants. Even if we have managed to stack up 5 or 10 reviews of our proposals from that exact same study section prior to being asked to serve. This means that new reviewers are shaped even more by the culture, expectations and processes of the existing panel, which is staffed with many experienced reviewers.

So what about those experienced reviewers? And what about the number of grant applications they review during their assigned term of 4 (3 cycles per year, please) or 6 (2 of 3 cycles per year) years of service? With about 6-10 applications to review per round, this could easily add up to highly influential (read: one of the three primary assigned reviewers) review of 100 applications. The person has additional general influence in the panel as well, both through direct input on grants under discussion and on the general tenor and tone of the panel.
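The arithmetic behind that estimate, using the term structures and the per-round load mentioned above:

# Review load over a standard term of empaneled service, using the ranges above
apps_per_round = (6, 10)

# 4-year term at 3 cycles/year, or 6-year term at 2 of 3 cycles/year: 12 rounds either way
terms = {"4-year term (3 cycles/yr)": 4 * 3, "6-year term (2 of 3 cycles/yr)": 6 * 2}

for label, n_rounds in terms.items():
    low, high = (n_rounds * n for n in apps_per_round)
    print(f"{label}: {n_rounds} rounds -> {low}-{high} applications as an assigned reviewer")

Either way it lands right around a hundred applications on which a single person served as a primary assigned reviewer.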

When I was placed on a study section panel for a term of service, I thought the SRO told us that empaneled reviewers were not supposed to be asked by the rest of the SRO pool for extra review duties on SEPs or as ad hoc reviewers on other panels. My colleagues over the years have disabused me of the idea that this was anything more than aspirational talk from this SRO. So many empaneled reviewers are also contributing to review beyond their home review panel.

My question of the day is whether this is a good idea and whether there are ethical implications for those of us who are asked* to review NIH grants.

We all think we are great evaluators of science proposals, of course. We know best. So of course it is all right, fair and good when we choose to accept a request to review. We are virtuously helping out the system!

At what point are we contributing unduly to the inherent conservativeness of the system? We all have biases. Some are about irrelevant characteristics like the ethnicity** of the PI. Some are considered more acceptable and are about our preferences for certain areas of research, models, approaches, styles, etc. Regardless, these biases are influencing our review. Our review. And one of the best ways to counter bias is competition from other people's biases. I.e., let someone else's bias into the mix for a change, eh buddy?

I don't have a real position on this yet. After my term of empaneled service, I accepted or rejected requests to review based on my willingness to do the work and my interest in a topic or mechanism (read: SEPs FTW). I've mostly kept it pretty minimal. However, I recently messed up because I had a cascade of requests last fall that sucked me in: a "normal" panel (ok, ok, I haven't done my duty in a while), followed by a topic SEP (ok, ok, I am one of a limited pool of experts, I'll do it) and then a RequestThatYouDon'tRefuse. So I've been doing more grant review lately than I usually have in recent years. And I'm thinking about scope of influence on the grants that get funded.

At some point is it even ethical to keep reviewing so damn much***? Should anyone agree to serve successive 4 or 6 year terms as an empaneled reviewer? Should one say yes to every SRO request that comes along? They are going to keep asking so it is up to us to say no. And maybe to recommend the SRO ask some other person who is not on their radar?

___
*There are factors which encourage the SRO pool to keep picking the same old reviewers, btw. There's a sort of expectation that if you have review experience you might be okay at it. I don't know how much SROs talk to each other about prospective reviewers and their experience with them, but there must be some chit chat. "Hey, try Dr. Schmoo, she's a great reviewer" versus "Oh, no, do not ever ask Dr. Schnortwax, he's toxic". There are also the diversity rules they have to follow: there must be diversity with respect to the geographic distribution, gender, race and ethnicity of the membership. So people who help the SROs' diversity stats might be picked more often than some other people who are straight white males from the most densely packed research areas in the country, working on the most common research topics using the most usual models and approaches.

**[cough]Ginther[cough, cough]

***No idea what this threshold should be, btw. But I think there is one.

18 responses so far

Thing I learned on Twitter

Jan 18 2017 Published under Grant Review, NIH

Valid #NIHgrant review is determined only by the aspects of the application that I excel at and any other influence is unfair bias.

2 responses so far

How do you respond to not being cited where appropriate?

Oct 10 2016 Published under Careerism, Grant Review, Peer Review, Tribe of Science

Have you ever been reading a scientific paper and thought "Gee, they really should have cited us here"?

Never, right?


25 responses so far

Reminder: The purpose of NIH grant review is not to fix the application

Oct 07 2016 Published under Grant Review, NIH Careerism, NIH funding

A question on my prior post asked whether these assertions of mine are official and written down somewhere.

DM, how do you know that this is the case? I mean, I don't doubt that this is the case, but is it explicitly articulated somewhere?

This was in response to the following statements from me.


They are not charged with trying to help the PI improve his or her grantspersonship*. They are not charged with helping the PI get this particular grant funded on revision. They are not charged with being kind or nice to the PI. They are not charged with saving someone's career.

They are not charged with deciding what grants to fund!

The fact that we are not supposed to so much as mention the "f-word", i.e., "funding", has been communicated verbally by every single SRO I have ever reviewed under. They tend to do this at the opening of the meeting and sometimes in the pre-meeting introductory phone call. Many SROs of my acquaintance also spit this out like a reflex during the course of the meeting if they ever hear a reviewer mention it.

The rest of my statements are best evaluated as I wrote them, i.e., by looking at the NIH review guidance material to see what reviewers are instructed to do. There is a complete absence of any statement suggesting the job is to help out the applicant. There is a complete absence of any statement suggesting the job is to decide what to fund. The task is stated plainly:

Make recommendations concerning the scientific and technical merit of applications under review, in the form of final written comments and numerical scores.

As far as more positive assertions on the "fixing applications" front go, the most direct thing I can find at present is the instruction for the "Additional Comments to Applicant" section of the critique template (take a look at that template if you've never reviewed). This document says:

As an NIH reviewer, your written critique should focus on evaluating the scientific and technical merit of an application and not on helping the applicant rewrite the application. But what if you desire to provide some information or tips to the applicant? The Additional Comments to Applicant box is designed just for that purpose.

My emphasis added. In case this isn't clear enough, the following can be taken in the context of the other guidance document comments about reviewing the scientific and technical merit.

Your comments in this box should not be about the scientific or technical merit of an application; do not factor into the final impact score; are not binding; and do not represent a consensus by the review panel. But this type of information may be useful to an applicant.

Clear. Right? The rest of the review is not about being helpful. Comments designed to be helpful to the applicant are not to contribute to the scientific and technical merit review.

Now the comment also asked this:

What fraction of reviewers do you think understand it like you say?

I haven't the foggiest idea. Obviously I think that there is no way anyone who is paying the slightest bit of attention could fail to grasp these simple assertions. And I think that probably, if challenged, the vast majority of reviewers would at least ruefully admit that they understand that helping the applicant is not the job.

But we are mostly professors and academics who have a pronounced native or professionally acquired desire to help people out. As I've said repeatedly on this blog, the vast majority of grant applications have at least something to like about them. And if academic scientists get a little tinge of "gee that sounds interesting", their next instinct is usually "how would I make this better". It's default behavior, in my opinion.

So of course SROs are fighting an uphill battle to keep reviewers focused on what the task is supposed to be.

11 responses so far

Reminder: The purpose of NIH grant review is not to help out the applicant with kindness

Oct 06 2016 Published under Grant Review, NIH, NIH Careerism

The reviewers of NIH grant applications are charged with helping the Program staff of the relevant Institute or Center of the NIH decide on relative merits of applications as they, the Program staff, consider which ones to select for funding.

Period.

They are not charged with trying to help the PI improve his or her grantspersonship*. They are not charged with helping the PI get this particular grant funded on revision. They are not charged with being kind or nice to the PI. They are not charged with saving someone's career.

They are not charged with deciding what grants to fund!

If they can also be kind, help the PI improve her grant for next time, help her improve her grantsmithing in general and/or in passing save someone's career, hey great. Bonus. Perfectly acceptable outcome of the process.

But if the desire to accomplish any of these things compromises the assessment of merit** that is supposed to serve the needs of the Program staff, that reviewer is screwing up.

__
*Maybe start a blog if this is your compulsion? I've heard that works for some people who have such urges.

**"merit" in this context is not necessarily what any given reviewer happens to think it is a priori, either. For example, there could be a highly targeted funding opportunity with stated goals that a given reviewer doesn't really agree with. IMV, that reviewer is screwing up if she substitutes her goals for the goals expressed by the I or C in the funding opportunity announcement.

14 responses so far

Responding to Targeted NIH Grant Funding Opportunity Announcements

Sep 29 2016 Published under Careerism, Grant Review, NIH, NIH Careerism

The NIH FOAs come in many flavors of specificity. Some, usually Program Announcements, are very broad and appear to permit a wide range of applications to fit within them. My favorite example of this is NIDA's "Neuroscience Research on Drug Abuse" PA.

They also come in highly specific varieties, generally as RFAs.

The targeted FOA is my topic for the day because these can be frustrating in the extreme. No matter how finely described for the type, these FOAs are inevitably too broad to let each and every interested PI know exactly how to craft her application. Or, more importantly, whether to bother. There is always a scientific contact, a Program Officer, listed, so the first thing to do is email or call this person. This can also be frustrating. Sometimes one gets great advice, sometimes it is perplexing.

As always, I can only offer up the way I look at these things.

As an applicant PI facing an FOA that seems vaguely of interest to me, I have several variables in play. First, despite the fact that Program may have written the FOA in a particular way, this doesn't mean they really know what they want. The FOA language may be the product of a committee, or it may simply be that nobody thought a highly specific type of proposal was necessary to satisfy whatever goals and motivations existed.

Second, even if they do know what they want in Programville, peer review is always the primary driver. If you can't escape triage it is highly unlikely that Program will fund your application, even if it fits their intent to a T. So as the applicant PI, I have to consider how peers are likely to interpret the FOA and how they are likely to apply it to my application. It is not impossible that the advice and perspective given to the prospective PI by the contact PO flies rather severely in the face of that PI's best estimate of what is likely to occur during peer review. This leaves a conundrum.

How to best navigate peer review and also serve up a proposal that is attractive to Program, in case they are looking to reach down out of the order of review for a proposal that matches what they want.

Finally, as I mention now and again, there is an advocacy role for the PI when applying for NIH funding. It is part and parcel of the job of the PI to tell Program what they should be funding. By, of course, serving up such a brilliantly argued application that they see that your take on their FOA is the best take. Even if this may not have been their intent in the first place. This also, btw, applies to the study section members. Your job is in part to convince them, not to meet whatever their preconceptions or reading of the FOA might be.

Somehow, the PI has to stew all of these considerations together and come up with a plan for the best possible proposal. Unfortunately, you can miss the mark. Not because your application is necessarily weak or your work doesn't fit the FOA in some objective sense. Merely because you have decided to make choices, gambles and interpretations that have led you in a particular direction, which may very well be the "wrong" direction.

Most severely, you might be rejected without review. This can happen. If you do not meet the PO's idea of being within the necessary scope of what they would ever plan to fund, no matter the score, you could have your application prevented from being routed to the study section.

Alternately, you might get triaged by a panel that just doesn't see it your way. That wonders if you, the idiot PI, were reading the same FOA that they are. It happens.

Finally, you might get a good score and Program may decide to skip over it for lack of responsiveness to their intent. Or you may be in the grey zone and fail to get a pickup because other grants scoring below yours are deemed closer to what they want to fund.

My point for today is that I think this is necessary error in the system. It is not evidence of a wholesale problem with the NIH FOA approach if you shoot wide to the left. If you fail to really understand the intent of the FOA as written. Or if you come away from your initial chat with the PO with a misguided understanding. Or even if you run into the buzzsaw of a review panel that rebels against the FOA.

Personally, I think you just have to take your chances. Arrive at your best understanding of what the FOA intends and how the POs are going to interpret various proposals. Sure. And craft your application accordingly. But you have to realize that you may be missing the point entirely. You may fail to convince anyone of your brilliant take on the FOA's stated goals. This doesn't mean the system is broken.

So take your shots. Offer up your best interpretation on how to address the goals. And then bear down and find the next FOA and work on that. In case your first shot sails over the crossbar.

__
It always fascinates me how fairly wide-flung experiences with NIH funding coalesce around the same issue sometimes. This particular post was motivated by no less than three situations being brought to my attention in the past week. Different ICs, different FOA, different mechanisms and vastly different topics and IC intentions. But to me, the answers are the same.

12 responses so far

Great lens to use on your own grants

Aug 26 2016 Published under Grant Review, NIH, NIH Careerism

If your NIH grant proposal reads like this, it is not going to do well.

9 responses so far
