Archive for the 'Grant Review' category

How do you respond to not being cited where appropriate?

Oct 10 2016 Published under Careerism, Grant Review, Peer Review, Tribe of Science

Have you ever been reading a scientific paper and thought "Gee, they really should have cited us here"?

Never, right?


25 responses so far

Reminder: The purpose of NIH grant review is not to fix the application

Oct 07 2016 Published under Grant Review, NIH Careerism, NIH funding

A question on my prior post asked whether my assertions were official, written policy or not.

DM, how do you know that this is the case? I mean, I don't doubt that this is the case, but is it explicitly articulated somewhere?

This was in response to the following statements from me.

They are not charged with trying to help the PI improve his or her grantspersonship*. They are not charged with helping the PI get this particular grant funded on revision. They are not charged with being kind or nice to the PI. They are not charged with saving someone's career.

They are not charged with deciding what grants to fund!

The fact that we are not supposed to so much as mention the "f-word", i.e., "funding", has been communicated verbally by every single SRO I have ever reviewed under. They tend to do this at the opening of the meeting and sometimes in the pre-meeting introductory phone call. Many SROs of my acquaintance also spit this out like a reflex during the course of the meeting if they ever hear a reviewer mention it.

The rest of my statements are best evaluated as I wrote them, i.e., by looking at the NIH review guidance material to see what the reviewers are instructed to do. There is a complete absence of any statement suggesting the job is to help out the applicant. There is a complete absence of any statement suggesting the job is to decide what to fund. The task is stated plainly:

Make recommendations concerning the scientific and technical merit of applications under review, in the form of final written comments and numerical scores.

As far as more positive assertions on the "fixing applications" front go, the most direct thing I can find at present is in the instructions for the "Additional Comments to Applicant" section of the critique template (take a look at that template if you've never reviewed). This document says:

As an NIH reviewer, your written critique should focus on evaluating the scientific and technical merit of an application and not on helping the applicant rewrite the application. But what if you desire to provide some information or tips to the applicant? The Additional Comments to Applicant box is designed just for that purpose.

My emphasis added. In case this isn't clear enough, the following can be taken in the context of the other guidance document comments about reviewing the scientific and technical merit.

Your comments in this box should not be about the scientific or technical merit of an application; do not factor into the final impact score; are not binding; and do not represent a consensus by the review panel. But this type of information may be useful to an applicant.

Clear. Right? The rest of the review is not about being helpful. Comments designed to be helpful to the applicant are not to contribute to the scientific and technical merit review.

Now the comment also asked this:

What fraction of reviewers do you think understand it like you say?

I haven't the foggiest idea. Obviously I think that there is no way anyone who is paying the slightest bit of attention could fail to grasp these simple assertions. And I think that probably, if challenged, the vast majority of reviewers would at least ruefully admit that they understand that helping the applicant is not the job.

But we are mostly professors and academics who have a pronounced native or professionally acquired desire to help people out. As I've said repeatedly on this blog, the vast majority of grant applications have at least something to like about them. And if academic scientists get a little tinge of "gee that sounds interesting", their next instinct is usually "how would I make this better". It's default behavior, in my opinion.

So of course SROs are fighting an uphill battle to keep reviewers focused on what the task is supposed to be.

10 responses so far

Reminder: The purpose of NIH grant review is not to help out the applicant with kindness

Oct 06 2016 Published under Grant Review, NIH, NIH Careerism

The reviewers of NIH grant applications are charged with helping the Program staff of the relevant Institute or Center of the NIH decide on relative merits of applications as they, the Program staff, consider which ones to select for funding.


They are not charged with trying to help the PI improve his or her grantspersonship*. They are not charged with helping the PI get this particular grant funded on revision. They are not charged with being kind or nice to the PI. They are not charged with saving someone's career.

They are not charged with deciding what grants to fund!

If they can also be kind, help the PI improve her grant for next time, help her improve her grantsmithing in general and/or in passing save someone's career, hey great. Bonus. Perfectly acceptable outcome of the process.

But if the desire to accomplish any of these things compromises the assessment of merit** that serves the needs of the Program staff, that reviewer is screwing up.

*Maybe start a blog if this is your compulsion? I've heard that works for some people who have such urges.

**"merit" in this context is not necessarily what any given reviewer happens to think it is a priori, either. For example, there could be a highly targeted funding opportunity with stated goals that a given reviewer doesn't really agree with. IMV, that reviewer is screwing up if she substitutes her goals for the goals expressed by the I or C in the funding opportunity announcement.

14 responses so far

Responding to Targeted NIH Grant Funding Opportunity Announcements

Sep 29 2016 Published under Careerism, Grant Review, NIH, NIH Careerism

The NIH FOAs come in many flavors of specificity. Some, usually Program Announcements, are very broad and appear to permit a wide range of applications to fit within them. My favorite example of this is NIDA's "Neuroscience Research on Drug Abuse" PA.

They also come in highly specific varieties, generally as RFAs.

Targeted FOAs are my topic for the day because they can be frustrating in the extreme. No matter how finely described for the type, these FOAs are inevitably too broad to let each and every interested PI know exactly how to craft her application. Or, more importantly, whether to bother. There is always a scientific contact, a Program Officer, listed, so the first thing to do is email or call this person. This can also be frustrating. Sometimes one gets great advice, sometimes it is perplexing.

As always, I can only offer up the way I look at these things.

As an applicant PI facing an FOA that seems vaguely of interest to me, I have several variables at play. First, despite the fact that Program may have written the FOA in a particular way, this doesn't mean that they really know what they want. The FOA language may be a committee product, or nobody may have thought that a highly specific type of proposal was necessary to satisfy whatever goals and motivations existed.

Second, even if they do know what they want in Programville, peer review is always the primary driver. If you can't escape triage it is highly unlikely that Program will fund your application, even if it fits their intent to a T. So as the applicant PI, I have to consider how peers are likely to interpret the FOA and how they are likely to apply it to my application. It is not impossible that the advice and perspective given to the prospective PI by the contact PO fly rather severely in the face of that PI's best estimate of what is likely to occur during peer review. This leaves a conundrum.

How to best navigate peer review and also serve up a proposal that is attractive to Program, in case they are looking to reach down out of the order of review for a proposal that matches what they want.

Finally, as I mention now and again, there is an advocacy role for the PI when applying for NIH funding. It is part and parcel of the job of the PI to tell Program what they should be funding. By, of course, serving up such a brilliantly argued application that they see that your take on their FOA is the best take. Even if this was not their intent in the first place. This also, btw, applies to the study section members. Your job is in part to convince them, not to meet whatever their preconceptions or reading of the FOA might be.

Somehow, the PI has to stew all of these considerations together and come up with a plan for the best possible proposal. Unfortunately, you can miss the mark. Not because your application is necessarily weak or your work doesn't fit the FOA in some objective sense. Merely because you have decided to make choices, gambles and interpretations that have led you in a particular direction, which may very well be the "wrong" direction.

Most severely, you might be rejected without review. This can happen. If you do not meet the PO's idea of being within the necessary scope of what they would ever plan to fund, no matter the score, you could have your application prevented from being routed to the study section.

Alternately, you might get triaged by a panel that just doesn't see it your way. That wonders whether you, the idiot PI, were reading the same FOA that they were. It happens.

Finally, you might get a good score and Program may decide to skip over it for lack of responsiveness to their intent. Or you may be in the grey zone and fail to get a pickup because other grants scoring below yours are deemed closer to what they want to fund.

My point for today is that I think this is a necessary error in the system. It is not evidence of a wholesale problem with the NIH FOA approach if you shoot wide to the left. If you fail to really understand the intent of the FOA as written. Or if you come away from your initial chat with the PO with a misguided understanding. Or even if you run into the buzzsaw of a review panel that rebels against the FOA.

Personally, I think you just have to take your chances. Arrive at your best understanding of what the FOA intends and how the POs are going to interpret various proposals. Sure. And craft your application accordingly. But you have to realize that you may be missing the point entirely. You may fail to convince anyone of your brilliant take on the FOA's stated goals. This doesn't mean the system is broken.

So take your shots. Offer up your best interpretation on how to address the goals. And then bear down and find the next FOA and work on that. In case your first shot sails over the crossbar.

It always fascinates me how fairly wide-flung experiences with NIH funding coalesce around the same issue sometimes. This particular post was motivated by no less than three situations being brought to my attention in the past week. Different ICs, different FOAs, different mechanisms and vastly different topics and IC intentions. But to me, the answers are the same.

12 responses so far

Great lens to use on your own grants

Aug 26 2016 Published under Grant Review, NIH, NIH Careerism

If your NIH grant proposal reads like this, it is not going to do well.

9 responses so far

Your Grant in Review: Throwing yourself on the mercy of the study section court

Aug 24 2016 Published under Careerism, Grant Review, NIH, NIH Careerism, Uncategorized

A question and complaint from commenter musclestumbler on a prior thread introduces the issue.

So much oxygen is sucked up by the R01s, the med schools, etc. that it tends to screw over reviews for the other mechanisms. I look at these rosters, then look at the comments on my proposals, and it's obvious that the idea of doing work without a stable of postdocs and a pool of exploitable Ph.D. students is completely alien and foreign to them.

and extends:

I personally go after R15 and R03 mechanisms because that's all that can be reasonably obtained at my university. ... Postdocs are few and far between. So we run labs with undergrads and Masters students. Given the workload expectations that we have in the classroom as well as the laboratory, the R15 and R03 mechanisms support research at my school. Competing for an R01 is simply not in the cards for the productivity level that we can reasonably pursue...

This isn't simply fatalism, this is actual advice given by multiple program officers and at workshops. These mechanisms are in place to facilitate and foster our research. Unfortunately, these are considered and reviewed by the same panels that review R01s. We are not asking that they create an SEP for these mechanisms - a "little kids table" if you will - but that the panels have people with these similar institutions on them. I consider it a point of pride that my R15 is considered by the same reviewers that see the R01s, and successfully funded as well.

The point is that, the overwhelming perception and unfortunate reality is that many, many, many of the panelists have zero concept of the type of workload model under which I am employed. And the SROs have a demonstrably poor track record of encouraging institutional diversity. Sure, my panel is diverse- they have people from a medical school, an Ivy League school, and an endowed research institution on the West Coast. They have Country, and Western!

I noted the CSR webpage on study section selection says:

Unique characteristics of study sections must be factored into selection of members. The breadth of science, the multidisciplinary or interdisciplinary nature of the applications, and the types of applications or grant mechanisms being reviewed play a large role in the selection of appropriate members.

It seems very much the case to me that if R15s are habitually being reviewed in sections without participation of any reviewers from R15-eligible institutions, this is a violation of the spirit of this clause.

I suggested that this person should bring this up with their favorite SROs and see what they have to say. I note that now that there is a form for requesting "appropriate expertise" when you submit your NIH grant, it may also be useful to use this to say something about R15-eligible reviewers.

But ultimately we come to the "mercy of the court" aspect of this issue. It is my belief that while yes, the study section is under very serious constraints these days, it is still a human process that occasionally lets real humans make rational decisions. Sometimes, reviewers may go for something that is outside of the norm. Outside of the stereotype of what "has" to be in the proposal of this type. Sometimes, reviewers may be convinced by the peculiarities of a given situation to, gasp, give you a break. So I suggested the following for this person, who had just indicated that his/her R15s do perfectly well in a study section that they think would laugh off their R01 application.

I think this person should try a trimmed-down R01 in this situation. Remember, the R01 is the most flexible in terms of scope: there is no reason you cannot match it to the budget size of any of the other awards. The upside is that it is for up to five years, better than AREA/R15 (3 y) or R03 (2 y). It is competitively renewable, which may offer advantages. It is an R01, which, as we are discussing in that other thread, may be the key to getting treated like a big kid when it comes to study section empanelment.

The comments from musclestumbler make it sound as if the panels can actually understand the institutional situation, just so long as they are focused on it by the mechanism (R15). The R15 is $100K direct for three years, no? So why not propose an R01 for $100K direct for five years? Or if you, Dear Reader, are operating at an R03 level, ask for $50K direct or $75K direct. And I would suggest that you don't just leave this hidden in the budget; sprinkle wording throughout everywhere that refers to this being a go-slow but very inexpensive (compared to full mod) project.

Be very clear about your time commitment (summers only? fine, just make it clear) and the use of undergrads (predict the timeline and research pace) in much the same way you do for an R15 but make the argument for a longer term, renewable R01. Explain why you need it for the project, why it is justified and why a funded version will be productive, albeit at a reduced pace. See if any reviewers buy it. I would.

Sometimes you have to experiment a little with the NIH system. You'd be surprised how many times it works in ways that are not exactly the stereotypical and formal way things are supposed to work.

27 responses so far

Nakamura reports on the ECR program

Jun 17 2016 Published under Fixing the NIH, Grant Review, NIH, NIH Careerism

If I stroke out today it is all the fault of MorganPhD.

Jeffrey Mervis continues his coverage of the NIH review situation as it pertains to the disparity for African-American PIs identified in 2011 (that's five years and fifteen funding rounds ago, folks) by the Ginther report.

The main focus for this week is on the Early Career Reviewer program. As you will recall, this blog has advocated continually and consistently for the participation of more junior PIs on grant review panels.

The ECR program was created explicitly to deal with underrepresented groups. What happened, however, was immediate opposition insisting that the ECR program be open to all junior faculty/applicants, regardless of representation in the NIH game.

One-quarter of researchers in ECR's first cohort were from minority groups, he notes. “But as we've gone along, there are fewer underrepresented minorities coming into the pool.”
Minorities comprise only 13% of the roughly 5100 researchers accepted into the program (6% African-American and 7% Hispanic), a percentage that roughly matches their current representation on study sections.

Ok, but how have the ECR participants fared?

[Nakamura] said ECR alumni have been more than twice as successful as the typical new investigator in winning an R01 grant.

NIIIIIICE. Except they didn't flog the data as hard as one might hope. This is against the entire NI (or ESI?) population.

The pool of successful ECR alumni includes those who revised their application, sometimes more than once, after getting feedback on a declined proposal. That extra step greatly improves the odds of winning a grant. In contrast, the researchers in the comparison group hadn't gone through the resubmission process.

Not sure if this really means "hadn't" or "hadn't necessarily". The latter makes more sense if they are just comparing to aggregate stats. CSR data miners would have had to work harder to isolate those who hadn't revised yet, and I suspect that if they had gone to that effort, they could have presented the ESIs who had at least one revision under their belt. But what about the underrepresented group of PIs that is the focus of all this effort?

It's also hard to interpret the fact that 18% of the successful ECRs were underrepresented minorities because NIH did not report the fraction of minorities among ECR alumni applicants. So it is not clear whether African-Americans participating in the program did any better than the cohort as a whole—suggesting that the program might begin to close the racial gap—or better than a comparable group of minority scientists who were not ECR alumni.

SERIOUSLY Richard Nakamura? You just didn't happen to request your data miners do the most important analysis? How is this even possible?

How on earth can you not be keeping track of applicants to ECR, direct requests from SROs, response rate and subsequent grant and reviewing behavior? It is almost as if you want to look like you are doing something but have no interest in it being informative or in generating actionable intelligence.

Moving along, we get a further insight into Richard Nakamura and his position in this situation.

Nakamura worries that asking minority scientists to play a bigger role in NIH's grantsmaking process could distract them from building up their lab, finding stable funding, and earning tenure. Serving on a study section, he says, means that “those individuals will have less time to write applications. So we need to strike the right balance.”

Paternalistic nonsense. The same thing that Scarpa tried to use to justify his purge of Assistant Professors from study sections. My answer is the same. Let them decide. For themselves. Assistant Professors and underrepresented PIs can decide for themselves if they are ready and able to take up a review opportunity when asked. Don't decide, paternalistically, that you know best and will refrain from asking for their own good, Director Nakamura!

Fascinatingly, Mervis secured an opinion that echoes this. So Nakamura will surely be reading it:

Riggs, the only African-American in his department, thinks the program is too brief to help minority scientists truly become part of the mainstream, and may even exacerbate their sense of being marginalized.

“After I sat on the panel, I realized there was a real network that exists, and I wasn't part of that network,” he says. “My comments as a reviewer weren't taken as seriously. And the people who serve on these panels get really nervous about having people … that they don't know, or who they think are not qualified, or who are not part of the establishment.”

If NIH “wants this to be real,” Riggs suggests having early-career researchers “serve as an ECR and then call them back in 2 years and have them serve a full cycle. I would have loved to do that.”

The person in the best position to decide what is good or bad for their career is the investigator themselves.

This comment also speaks to my objection to the ECR as a baby-intro version of peer review. It isn't necessary. I first participated on study section in my Asst Prof years as a regular ad hoc with a load of about six grants, iirc. That might have been two fewer than the experienced folks had, but it was not a baby-trainee experience in the least. I was treated as a new reviewer, but that was about the extent of it. I thought I was taken seriously and did not feel patronized.

Related Reading:
Toni Scarpa to leave CSR

More on one Scientific Society’s Response to the Scarpa Solicitation

Your Grant In Review: Junior Reviewers Are Too Focused on Details

The problem is not with review…

Peer Review: Opinions from our Elders

23 responses so far

Your Grant in Review: Power analysis and the Vertebrate Animals Section

Feb 11 2016 Published under Grant Review, Grantsmanship, NIH funding

As a reminder, the NIH issued a warning about the upcoming Simplification of the Vertebrate Animals Section of NIH Grant Applications and Contract Proposals.

Simplification! Cool, right?

There's a landmine here.

For years the statistical power analysis was something that I included in the General Methods at the end of my Research Strategy section. In more recent times, a growing insistence on the part of OLAW that a proper Vertebrate Animals Section include the power analysis has influenced me to drop the power analysis from the Research Strategy. It became a word-for-word duplication, so it seemed worth the risk to regain the page space.

The notice says:

Summary of Changes
The VAS criteria are simplified by the following changes:

  • A description of veterinary care is no longer required.

  • Justification for the number of animals has been eliminated.

  • A description of the method of euthanasia is required only if the method is not consistent with AVMA guidelines.


This means that if I continue with my current strategy, I'm going to start seeing complaints about "where is the power analysis" and "hey buddy, stop trying to evade page limits by putting it in the VAS".

So back to the old way we must go. Leave space for your power analysis, folks.
If you don't know much about doing a power analysis, this website is helpful:
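In the meantime, here is a minimal sketch of the most common version of the calculation (two groups, independent-samples t-test), in Python using the statsmodels package. The effect size, alpha and power values below are placeholder assumptions for illustration, not recommendations; plug in estimates from your own pilot data.

```python
# Minimal power-analysis sketch: animals per group for a two-group
# comparison via an independent-samples t-test (statsmodels).
# All numbers below are placeholder assumptions, not recommendations.
from statsmodels.stats.power import TTestIndPower

effect_size = 1.2  # expected Cohen's d, e.g. estimated from pilot data
alpha = 0.05       # two-tailed significance threshold
power = 0.80       # desired probability of detecting the effect

solver = TTestIndPower()
n_per_group = solver.solve_power(effect_size=effect_size, alpha=alpha,
                                 power=power, alternative='two-sided')
print(f"Animals needed per group: {n_per_group:.1f}")  # ~12 for d = 1.2
```

For an assumed d of 1.2 that comes out to about 12 animals per group; drop the expected effect to d = 0.8 and the requirement jumps to about 26. This is exactly the sort of justification reviewers want to see spelled out, whichever section it lands in.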

17 responses so far

Your Grant in Review: Competing Continuation, aka Renewal, Apps

Jan 28 2016 Published under Grant Review, NIH, NIH Careerism

In the NIH extramural grant funding world the maximum duration for a project is 5 years. It is possible at the end of a 5-year interval of support to apply to continue that project for another interval. The application for the next interval is competitively reviewed alongside new project proposals in the relevant study sections, in general.

Comradde PhysioProffe addressed the continuation application at his Ftb joint. NIAID has a FAQ page.

The NIH Success Rate data shows that RPG success rates were 16.8% in 2013 and 18.1% in 2014. Comparable rates for competing continuation RPG applications were 35% in 2013 and 39% in 2014. So you can see why this is important.

I visited these themes before in a prior post. I think I covered most of the issues but in a slightly different way.

Today I want to try to get you folks to talk about prescriptives. How should a competing continuation / renewal NIH grant application be reviewed?

Now in my experience, the continuation application hinges on past productivity in a way that a new application does not. Reviewers are explicitly considering the work that has been conducted under the support of the prior award. The application is supposed to include a list of publications that have resulted from the prior award. The application is supposed to include a Progress Report that overviews what has been accomplished. So today I will be focusing on review mostly as it pertains to productivity. For reference, Berg's old post on the number of papers per grant dollar is here and shows an average output of 6 papers (IQR about 4-11) per $250K full modular award*.

Quoted bits are from my prior post.

Did you knock our socks off? This could be amazing ELEVENTY type findings, GlamourPub record (whether “expected” for your lab or not), unbelievably revolutionary advances, etc. If you have a record of this, nobody is going to think twice about what your Aims may have been. Probably won’t even give a hoot whether your work is a close match to the funding IC, for that matter.

We should probably separate these for discussion because, after all, how often is a panel going to recognize that a Nobel Prize-caliber publication has been supported by the award in the past 5 years? So maybe we should consider Glamour publications and amazing advances as two different scenarios. Are these going to push any renewal application over the hurdle for you even if the remaining items below are lacking? Does GlamMag substitute for direct attention to the experiments that were proposed or the Aims that guided the plan? In the extreme case, should we care if the work bears very little on the mission of the IC that has funded it?

Were you productive? Even if you didn’t WOW the world, if you’ve pumped out a respectable number of papers that have some discernible impact on a scientific field, you are in good shape. The more, the merrier. If you look “fabulously productive” and have contributed all kinds of interesting new science on the strength of your award(s), this is going to go down like gangbusters with the review panels. At this level of accomplishment you’d probably be safest at least be doing stuff that is vaguely in line with the IC that has funded your work.

Assuming that Glam may not be in the control of most PIs but that pedestrian, workaday scientific output is, should this be a major credit for the continuation application? We don't necessarily have to turn this into an LPU sausage-slicing discussion. Let's assume a quality of paper commensurate with the kind of work that most PIs with competitive applications in that particular study section publish. Meets the subfield standard. How important should raw productivity be?

Were you productive in addressing your overall goals? This is an important distinction from the Specific Aims. It is not necessary, in my view, that you hew closely to Aims first dreamed up 7 years prior to the conclusion of the actual study. But if you have moderate, or disappointing, productivity it is probably next most-helpful that you have published work related to the overall theme of the project. What was the big idea? What was mentioned in the first three sentences of your Specific Aims page? If you have published work related to this broad picture, that’s good.

This one is tricky. The reviewers do not have the prior grant application in front of them. They have the prior Summary Statement and the Abstract as published on RePORTER. It is a decent bet the prior Aims can be determined but broader themes may or may not come across. So for the most part if the applicant expects the reviewers to see that productivity has aligned with overarching programmatic goals, she has to tell them what those were. Presumably in the Progress Report part of the continuation application. How would you approach this as a reviewer? What if the project wasn't overwhelmingly productive and didn't obviously address all of the Aims, but at least generated some solid work along the general themes? Are you going to be satisfied? Or are you going to downgrade the failure to address each Aim? What if the project had to can an entire Aim or two? Would it matter? Is getting "stuck" in a single Aim a death knell when it comes time to review the next interval of support? As a related question, what if the same exact Aim returns with the argument "We didn't get to this in the past five years but it is still a good idea"? Neutral? Negative? AYFK?

Did you address your original Specific Aims? ...this can be a big obsession of certain reviewers. Not saying it isn’t a good idea to have papers that you can connect clearly to your prior Aims. ... A grant is not a contract. It is quite natural in the course of actual science that you will change your approaches and priorities for experiments. Maybe you’ve been beaten to the punch. Maybe your ongoing studies tell you that your original predictions were bad and you need to go in a whole new direction. Maybe the field as a whole has moved on. ... You might want to squeeze a drop out of a dry well to meet the “addressed Aims” criterion but maybe that money, effort and time would be better spent on a new direction which will lead to three pubs instead of one?

My original formulation of this isn't quite right for today's discussion. The last part is actually more relevant to the preceding point. For today, expand this to a continuation application that shows that the prior work essentially covers exactly what the application proposed. With data either published or included as ready-to-submit Preliminary Data in the renewal. Maybe this was accomplished with only a few papers in pedestrian journals (Lord knows just about every one of my manuscript reviews these days gets at least one critique that calls for anywhere from 2 to 5 Specific Aims worth of data), so we're not talking about Glam or fabulous productivity. But should addressing all of the Aims and most if not all of the proposed experiments be enough? Is this a credit to a competing continuation application?

It will be unsurprising to you that by this point of my career, I've had competing continuation applications to which just about all of these scenarios apply, save Glam. We've had projects where we absolutely nailed everything we proposed to do. We've had projects get distracted/sidelined off onto a subsection of the proposal that nevertheless generated about the same number and quality of publications that would have otherwise resulted. We've had low productivity intervals of support that addressed all the Aims and ones that merely covered a subset of key themes. We've had projects with reasonably high productivity that have....wandered....from the specifics of the awarded proposal due to things that are happening in the subfield (including getting scooped). We've never been completely blanked on a project with zero related publications to my recollection, but we've had some very low productivity ones (albeit with excellent excuses).

I doubt we've ever had a perfect storm of sky-high productivity, all Aims addressed and the overarching themes satisfied. Certainly I have the review comments to suggest this**.

I have also been present during review panel discussions of continuation applications where reviewers have argued bitterly over the various productivity attributes of a prior interval of support. The "hugely productive" arguments are frequently over an application from a PI who has more than one award and tends to acknowledge more than one of them on each paper. This can also involve debates about so-called "real scientific progress" versus papers published. This can be about the Aims, the overall theme or just the sneer of "they don't really do any interesting science".

I have for sure heard from people who are obsessed during review with whether each proposed experiment has been conducted (this was back in the days when summary statements could be fairly exhaustive and revealed what was in the prior application to a broader extent). More generally from reviewers who want to match publications up to the scope of the general scientific terrain described by the prior application.

I've also seen arguments about suggested controls or key additional experiments which were mentioned in the summary statement of the prior review, never addressed in the resulting publications and may still be a criticism of the renewal application.

Final question: since the reviewers of the competing continuation see the prior summary statement, they see the score and percentile. Does this affect you as a reviewer? Should it? Especially if, in your view, this particular application should never have been funded at that score and is likely a Programmatic pickup? Do you start steaming under the collar about special ESI paylines or bluehair/graybeard insider PO backslapping?

DISCLAIMER: As per usual, I may have competing continuation applications under current or near-future review by NIH study sections. I am an interested party in how they are reviewed.
*This probably speaks to my point about how multi-award PIs attribute more than one grant on each paper. My experience has not been that people in my field view 5 papers published per interval of support (and remember, the renewal application is submitted with the final year of funded support yet to go, if the project is to continue uninterrupted) as expected value. It is certainly not viewed as the kind of fabulous productivity that of course would justify continuing the project. It is more in line with the bare minimum***. Berg's data are per grant dollar, of course, and are not exactly the same as per grant. But it is a close estimate. This blog post estimates "between 0.6 and 5 published papers per $100k in funding," which, at $250K direct per year for a full-modular NIH R01, works out to roughly 1.5 to 12.5 papers per year. Big range, and that high number seems nigh on impossible to me without other funding (like free trainee labor or data parasitism).

**and also a pronounced lack of success renewing projects to go with it.

***I do not personally agree. At the point of submitting a competing continuation in year 4 a brand new research program (whether b/c noob PI or very new lab direction) may have really only been rocking for 2 years. And large integrated projects like a big human subjects effort may not even have enrolled all the subjects yet. Breeding, longitudinal development studies, etc - there are many models that can all take a long time to get to the point of publishing data. These considerations play....let us say variably, with reviewers. IME.

15 responses so far

Thought of the Day

Oct 16 2015 Published under Grant Review, Grantsmanship, NIH, NIH Careerism

I know this NIH grant game sucks.

I do.

And I feel really pained each time I get email or Twitter messages from one of my Readers (and there are many of you, so this isn't as personal as it may seem to any given Reader) who is desperate to find the sekrit button that will make the grant dollars fall out of the hopper.

I spend soooooo much of my discussion on this blog trying to explain that NOBODY CAN TELL YOU WHERE THE SEKRIT BUTTON IS BECAUSE IT DOESN'T EXIST!!!!!!!!!!!!

Really. I believe this down to the core of my professional being.

Sometimes I think that the problem here is the just-world fallacy at work. It is just so dang difficult to give up on the notion that if you just do your job, the world will be fair. If you do good work, you will eventually get the grant funding to support it. That's what all the people you trained around seemed to experience and you are at least as good as them, better in many cases, so obviously the world owes you the same sort of outcome.

I mean yeah, we all recognize things are terrible with the budget and we expect it to be harder but.....maybe not quite this hard?

I feel it too.

Belief in a just world is really hard to shed.

69 responses so far
