Search Results for "Your Grant in Review"

Aug 24 2016

Your Grant in Review: Throwing yourself on the mercy of the study section court

A question and complaint from commenter musclestumbler on a prior thread introduces the issue.

So much oxygen is sucked up by the R01s, the med schools, etc. that it tends to screw over reviews for the other mechanisms. I look at these rosters, then look at the comments on my proposals, and it's obvious that the idea of doing work without a stable of postdocs and a pool of exploitable Ph.D. students is completely alien and foreign to them.

and extends:

I personally go after R15 and R03 mechanisms because that's all that can be reasonably obtained at my university. ... Postdocs are few and far between. So we run labs with undergrads and Masters students. Given the workload expectations that we have in the classroom as well as the laboratory, the R15 and R03 mechanisms support research at my school. Competing for an R01 is simply not in the cards for the productivity level that we can reasonably pursue...

This isn't simply fatalism, this is actual advice given by multiple program officers and at workshops. These mechanisms are in place to facilitate and foster our research. Unfortunately, these are considered and reviewed by the same panels that review R01s. We are not asking that they create an SEP for these mechanisms - a "little kids table" if you will - but that the panels have people with these similar institutions on them. I consider it a point of pride that my R15 is considered by the same reviewers that see the R01s, and successfully funded as well.

The point is that, the overwhelming perception and unfortunate reality is that many, many, many of the panelists have zero concept of the type of workload model under which I am employed. And the SROs have a demonstrably poor track record of encouraging institutional diversity. Sure, my panel is diverse- they have people from a medical school, an Ivy League school, and an endowed research institution on the West Coast. They have Country, and Western!

I noted the CSR webpage on study section selection says:

Unique characteristics of study sections must be factored into selection of members. The breadth of science, the multidisciplinary or interdisciplinary nature of the applications, and the types of applications or grant mechanisms being reviewed play a large role in the selection of appropriate members.

It seems very much the case to me that if R15s are habitually being reviewed in sections without participation of any reviewers from R15-eligible institutions, this is a violation of the spirit of this clause.

I suggested that this person should bring this up with their favorite SROs and see what they have to say. I note that there is now a form for requesting "appropriate expertise" when you submit your NIH grant; it may also be useful to use this to say something about R15-eligible reviewers.

But ultimately we come to the "mercy of the court" aspect of this issue. It is my belief that while yes, the study section is under very serious constraints these days, it is still a human endeavor that occasionally lets real humans make rational decisions. Sometimes, reviewers may go for something that is outside of the norm. Outside of the stereotype of what "has" to be in the proposal of this type. Sometimes, reviewers may be convinced by the peculiarities of a given situation to, gasp, give you a break. So I suggested the following for this person who had just indicated that his/her R15s do perfectly well in a study section that they think would laugh off their R01 application.

I think this person should try a trimmed down R01 in this situation. Remember the R01 is the most flexible in terms of scope- there is no reason you cannot match it to the budget size of any of the other awards. The upside is that it is for up to five years, better than AREA/R15 (3 y) or R03 (2 y). It is competitively renewable, which may offer advantages. It is an R01, which, as we are discussing in that other thread, may be the key to getting treated like a big kid when it comes to study section empanelment.

The comments from musclestumbler make it sound as if the panels can actually understand the institutional situation, just so long as they are focused on it by the mechanism (R15). The R15 is $100K direct for three years, no? So why not propose an R01 for $100K direct for five years? Or if you, Dear Reader, are operating at an R03 level, ask for $50K direct or $75K direct. And I would suggest that you don't just leave this hidden in the budget: sprinkle wording throughout that refers to this being a go-slow but very inexpensive (compared to full mod) project.
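To make the budget comparison concrete, here is a back-of-envelope sketch. The annual figures and durations are the post's assumptions for illustration, not official NIH mechanism caps:

```python
# Back-of-envelope totals for the mechanisms discussed above.
# Figures are the post's assumed ($/yr direct, years) pairs, NOT official NIH caps.
mechanisms = {
    "R01 (trimmed)": (100_000, 5),
    "R15/AREA": (100_000, 3),
    "R03": (50_000, 2),
}

totals = {name: annual * years for name, (annual, years) in mechanisms.items()}
for name, total in totals.items():
    print(f"{name}: ${total:,} total direct")
```

Under these assumptions the trimmed R01 costs the IC the same per year as the R15 but buys five continuous (and renewable) years instead of three.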

Be very clear about your time commitment (summers only? fine, just make it clear) and the use of undergrads (predict the timeline and research pace) in much the same way you do for an R15 but make the argument for a longer term, renewable R01. Explain why you need it for the project, why it is justified and why a funded version will be productive, albeit at a reduced pace. See if any reviewers buy it. I would.

Sometimes you have to experiment a little with the NIH system. You'd be surprised how many times it works in ways that are not exactly the stereotypical and formal way things are supposed to work.

27 responses so far

May 28 2016

Your Grant in Review: Scientific Premise


I am starting to suspect that the Scientific Premise review item will finally communicate overall excitement/boredom to the applicant. This will be something to attend to closely when deciding whether to revise an application or just to start over.

17 responses so far

May 27 2016

Your Grant in Review: Investigator Independence

superkash started it:

and amplified it:

there was a bit of chatter and then eventually AListScientist asserted:

First, I addressed Independence of a NIH Grant PI to some extent way back in 2007, reposted in 2009.

I, as well as several other colleagues who review grants, have noticed a seemingly sharp uptick in the number of applications coming in from PIs who are more "transitioning" than "transitioned". PIs whose job titles might be something other than "Assistant Professor" and ones who are still in or around the same laboratory or research group in which they have done a big chunk of postdoctoral work. In extreme cases the PI might still be titled "Postdoc" or have trained in the same place essentially since graduate school!

Readers of this blog might conclude that this trend, which I've been noticing for at least the past 3-4 rounds, delights me. And to the extent that it represents a recognition of the problems junior scientists face in making the career transition to independence, this does appear a positive step. To the extent that it breaks down artificial barriers blocking the next generation of scientists- great.

The slightly more cynical view expressed by colleagues and, admittedly, myself is that this trend has been motivated by IC Program behavior both in capping per-PI award levels and in promoting grant success for New Investigators. In other words that the well-established PIs with very large research groups are thinking that grants for which they would otherwise be the PI will now be more successful with some convenient patsy long-term postdoc at the helm. The science, however, is just the same old stuff of the established group and PI.

I surmise that the tweeting of @superkash was related to this conundrum. I would suggest to newcomers to the NIH system that these issues are still alive and well and contribute in various ways to grant review outcome. We see very clearly in various grant/career related discussions on twitter, this blog and commentary to various NIH outlets that peer scientists have strong ideas about which categories of PI deserve or don't deserve funding. For example, in the recent round of comments on CSR's Peer Review website, the suggestions run to keel the yuge labs, keel the ESIs, keel the riffraff noobs and save the politically disconnected. The point being that peer reviewers come with biases for and against the PI(s) (and to a lesser extent the other investigators).

The fact that the Investigator criterion is one of the five biggies (and there is no official suggestion that it is of any lesser importance than Approach, Significance or Innovation) permits (and one might say requires) the reviewers to exercise these biases. It also shows that AListScientist's apparent belief that Investigators are not to be evaluated because the applicant University has certified them is incorrect.

The official CSR Guidance on review of the Investigator criterion is posed as a series of questions:

Are the PD/PIs, collaborators, and other researchers well suited to the project? If Early Stage Investigators or New Investigators, or in the early stages of independent careers, do they have appropriate experience and training? If established, have they demonstrated an ongoing record of accomplishments that have advanced their field(s)? If the project is collaborative or multi-PD/PI, do the investigators have complementary and integrated expertise; are their leadership approach, governance and organizational structure appropriate for the project?

"well suited"
"appropriate experience"
Right there you can see where the independence of the PI might be of interest to the reviewer.

"have they demonstrated an ongoing record of accomplishments"
We want to know what they personally have accomplished. Or caused to be accomplished, if you want to natter about PIs not really doing hands-on science. The point is, can this PI make the proposed studies happen? Is there evidence that she has done so before? Or is there merely evidence that he has existed as glorified hands in the PI's lab up to this point in time?

"are their leadership approach, governance and organizational structure appropriate for the project?"

Can they lead? Can they boot all the tails hard enough to get this project accomplished? I say that this is an entirely appropriate consideration.

I hope you do as well and I would be interested to hear a counter argument.

I suspect that most of the pushback on this comes from the position of thinking about the Research Assistant Professor who IS good enough. Who HAS operated more or less independently and led projects in the SuperLab.

The question for grant review is, how are we to know? From the record and application in front of us.

I am unable to leave this part off: If you are a RAP or heading to be one as a mid to late stage postdoc, the exhortation to you is to lay down evidence of your independence as best you are able. Ask Associate Professor peers that you know about what possible steps you can take to enhance the optics of you-as-PI on this.

73 responses so far

Feb 11 2016

Your Grant in Review: Power analysis and the Vertebrate Animals Section

As a reminder, the NIH issued a warning about the upcoming Simplification of the Vertebrate Animals Section of NIH Grant Applications and Contract Proposals.

Simplification! Cool, right?

There's a landmine here.

For years the statistical power analysis was something that I included in the General Methods at the end of my Research Strategy section. In more recent times, a growing insistence on the part of the OLAW that a proper Vertebrate Animals Section include the power analysis has influenced me to drop the power analysis from the Research Strategy. It became a word for word duplication so it seemed worth the risk to regain the page space.

The notice says:

Summary of Changes
The VAS criteria are simplified by the following changes:

  • A description of veterinary care is no longer required.

  • Justification for the number of animals has been eliminated.

  • A description of the method of euthanasia is required only if the method is not consistent with AVMA guidelines.


This means that if I continue with my current strategy, I'm going to start seeing complaints about "where is the power analysis" and "hey buddy, stop trying to evade page limits by putting it in the VAS".

So back to the old way we must go. Leave space for your power analysis, folks.
If you don't know much about doing a power analysis, this website is helpful:
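For readers who want to see what a minimal power calculation looks like, here is a sketch in Python using the statsmodels package (my choice for illustration; the effect size, alpha and power values below are conventional defaults, not recommendations for any particular design):

```python
# How many animals per group does a two-sample comparison need?
from statsmodels.stats.power import TTestIndPower

# Illustrative assumptions: a "large" effect (Cohen's d = 0.8),
# alpha = 0.05 two-sided, 80% power.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.8, alpha=0.05, power=0.8, alternative="two-sided"
)
print(round(n_per_group))  # → 26 animals per group
```

The same call can be run in reverse: fix `nobs1` at your planned group size and solve for `power` to report the achieved power instead.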

17 responses so far

Jan 28 2016

Your Grant in Review: Competing Continuation, aka Renewal, Apps

In the NIH extramural grant funding world the maximum duration for a project is 5 years. It is possible at the end of a 5 year interval of support to apply to continue that project for another interval. The application for the next interval is competitively reviewed alongside new project proposals in the relevant study sections, in general.

Comradde PhysioProffe addressed the continuation application at his Ftb joint. NIAID has a FAQ page.

The NIH Success Rate data shows that RPG success rates were 16.8% in 2013 and 18.1% in 2014. Comparable rates for competing continuation RPG applications were 35% in 2013 and 39% in 2014. So you can see why this is important.

I visited these themes before in a prior post. I think I covered most of the issues but in a slightly different way.

Today I want to try to get you folks to talk about prescriptives. How should a competing continuation / renewal NIH grant application be reviewed?

Now in my experience, the continuation application hinges on past-productivity in a way that a new application does not. Reviewers are explicitly considering the work that has been conducted under the support of the prior award. The application is supposed to include a list of publications that have resulted from the prior award. The application is supposed to detail a Progress Report that overviews what has been accomplished. So today I will be focusing on review mostly as it pertains to productivity. For reference, Berg's old post on the number of papers per grant dollar is here and shows an average output of 6 papers (IQR about 4-11) per $250K full modular award*.

Quoted bits are from my prior post.

Did you knock our socks off? This could be amazing ELEVENTY type findings, GlamourPub record (whether “expected” for your lab or not), unbelievably revolutionary advances, etc. If you have a record of this, nobody is going to think twice about what your Aims may have been. Probably won’t even give a hoot whether your work is a close match to the funding IC, for that matter.

We should probably separate these for discussion because after all, how often is a panel going to recognize a Nobel Prize type of publication has been supported by the award in the past 5 years? So maybe we should consider Glamour publications and amazing advances as two different scenarios. Are these going to push any renewal application over the hurdle for you even if the remaining items below are lacking? Does GlamMag substitute for direct attention to the experiments that were proposed or the Aims that guided the plan? In the extreme case, should we care if the work bears very little on the mission of the IC that has funded it?

Were you productive? Even if you didn’t WOW the world, if you’ve pumped out a respectable number of papers that have some discernible impact on a scientific field, you are in good shape. The more, the merrier. If you look “fabulously productive” and have contributed all kinds of interesting new science on the strength of your award(s), this is going to go down like gangbusters with the review panels. At this level of accomplishment you’d probably be safest at least be doing stuff that is vaguely in line with the IC that has funded your work.

Assuming that Glam may not be in the control of most PIs but that pedestrian, workaday scientific output is, should this be a major credit for the continuation application? We don't necessarily have to turn this into a LPU sausage-slicing discussion. Let's assume a quality of paper commensurate with the kind of work that most PIs with competitive applications in that particular study section publish. Meets the subfield standard. How important should raw productivity be?

Were you productive in addressing your overall goals? This is an important distinction from the Specific Aims. It is not necessary, in my view, that you hew closely to Aims first dreamed up 7 years prior to the conclusion of the actual study. But if you have moderate, or disappointing, productivity it is probably next most-helpful that you have published work related to the overall theme of the project. What was the big idea? What was mentioned in the first three sentences of your Specific Aims page? If you have published work related to this broad picture, that’s good.

This one is tricky. The reviewers do not have the prior grant application in front of them. They have the prior Summary Statement and the Abstract as published on RePORTER. It is a decent bet the prior Aims can be determined but broader themes may or may not come across. So for the most part, if the applicant expects the reviewers to see that productivity has aligned with overarching programmatic goals, she has to tell them what those were. Presumably in the Progress Report part of the continuation application. How would you approach this as a reviewer? What if the project wasn't overwhelmingly productive and didn't obviously address all of the Aims, but at least generated some solid work along the general themes? Are you going to be satisfied? Or are you going to downgrade the failure to address each Aim? What if the project had to can an entire Aim or two? Would it matter? Is getting "stuck" in a single Aim a death knell when it comes time to review the next interval of support? As a related question, what if the same exact Aim returns with the argument "We didn't get to this in the past five years but it is still a good idea"? Neutral? Negative? AYFK?

Did you address your original Specific Aims? ...this can be a big obsession of certain reviewers. Not saying it isn’t a good idea to have papers that you can connect clearly to your prior Aims. ... A grant is not a contract. It is quite natural in the course of actual science that you will change your approaches and priorities for experiments. Maybe you’ve been beaten to the punch. Maybe your ongoing studies tell you that your original predictions were bad and you need to go in a whole new direction. Maybe the field as a whole has moved on. ... You might want to squeeze a drop out of a dry well to meet the “addressed Aims” criterion but maybe that money, effort and time would be better spent on a new direction which will lead to three pubs instead of one?

My original formulation of this isn't quite right for today's discussion. The last part is actually more relevant to the preceding point. For today, expand this to a continuation application that shows that the prior work covers essentially exactly what the application proposed. With data either published or included as ready-to-submit Preliminary Data in the renewal. Maybe this was accomplished with only a few papers in pedestrian journals (Lord knows just about every one of my manuscript reviews these days gets at least one critique that calls for anywhere from 2 to 5 Specific Aims worth of data) so we're not talking about Glam or fabulous productivity. But should addressing all of the Aims and most if not all of the proposed experiments be enough? Is this a credit to a competing continuation application?

It will be unsurprising to you that by this point of my career, I've had competing continuation applications to which just about all of these scenarios apply, save Glam. We've had projects where we absolutely nailed everything we proposed to do. We've had projects get distracted/sidelined off onto a subsection of the proposal that nevertheless generated about the same number and quality of publications that would have otherwise resulted. We've had low productivity intervals of support that addressed all the Aims and ones that merely covered a subset of key themes. We've had projects with reasonably high productivity that have....wandered....from the specifics of the awarded proposal due to things that are happening in the subfield (including getting scooped). We've never been completely blanked on a project with zero related publications to my recollection, but we've had some very low productivity ones (albeit with excellent excuses).

I doubt we've ever had a perfect storm of sky-high productivity, all Aims addressed and the overarching themes satisfied. Certainly I have the review comments to suggest this**.

I have also been present during review panel discussions of continuation applications where reviewers have argued bitterly over the various productivity attributes of a prior interval of support. The "hugely productive" arguments are frequently over an application from a PI who has more than one award and tends to acknowledge more than one of them on each paper. This can also involve debates about so called "real scientific progress" versus papers published. This can be the Aims, the overall theme or just about the sneer of "they don't really do any interesting science".

I have for sure heard from people who are obsessed during review with whether each proposed experiment has been conducted (this was back in the days when summary statements could be fairly exhaustive and revealed what was in the prior application to a broader extent). More generally from reviewers who want to match publications up to the scope of the general scientific terrain described by the prior application.

I've also seen arguments about suggested controls or key additional experiments which were mentioned in the summary statement of the prior review, never addressed in the resulting publications and may still be a criticism of the renewal application.

Final question: Since the reviewers of the competing continuation see the prior summary statement, they see the score and percentile. Does this affect you as a reviewer? Should it? Especially if in your view this particular application should never have been funded at that score and is a likely a Programmatic pickup? Do you start steaming under the collar about special ESI paylines or bluehair/graybeard insider PO backslapping?

DISCLAIMER: As per usual, I may have competing continuation applications under current or near-future review by NIH study sections. I am an interested party in how they are reviewed.
*This probably speaks to my point about how multi-award PIs attribute more than one grant on each paper. My experience has not been that people in my field view 5 papers published per interval of support (and remember the renewal application is submitted with the final year of funded support yet to go, if the project is to continue uninterrupted) as expected value. It is certainly not viewed as the kind of fabulous productivity that of course would justify continuing the project. It is more in line with the bare minimum***. Berg's data are per-grant-dollar of course and are not exactly the same as per-grant. But it is a close estimate. This blog post estimates "between 0.6 and 5 published papers per $100k in funding." which is one to 12 per year of a full-modular NIH R01. Big range and that high number seems nigh on impossible to me without other funding (like free trainee labor or data parasitism).
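The scaling behind that "one to 12 per year" figure is simple, but worth making explicit; a quick check of the quoted range against a full-modular budget:

```python
# Scale "papers per $100K" up to one year of a full-modular R01 ($250K direct/yr).
full_modular_per_year = 250_000
low, high = 0.6, 5.0  # papers per $100K of funding, as quoted above

papers_low = low * full_modular_per_year / 100_000
papers_high = high * full_modular_per_year / 100_000
print(papers_low, papers_high)  # → 1.5 12.5, i.e. roughly one to twelve per year
```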

**and also a pronounced lack of success renewing projects to go with it.

***I do not personally agree. At the point of submitting a competing continuation in year 4 a brand new research program (whether b/c noob PI or very new lab direction) may have really only been rocking for 2 years. And large integrated projects like a big human subjects effort may not even have enrolled all the subjects yet. Breeding, longitudinal development studies, etc - there are many models that can all take a long time to get to the point of publishing data. These considerations play....let us say variably, with reviewers. IME.

15 responses so far

Jan 25 2016

Your Grant in Review: Skin in the Game

Should people without skin in the game be allowed to review major research grants?

I mean those who are insulated from the results of the process. HHMI stalwarts, NIH intramural, national labs, company scientists...

On one hand, I see argument that they provide needed outside opinions. To keep an insular, self-congratulating process honest.

On the other, one might observe that those who cannot be punished for bad behavior have license to be biased, jerky and driven by personal agenda.


Would you prefer review by those who are subject to the funding system? Or doesn't it matter?

46 responses so far

Dec 03 2015

Your Grant In Review: Errors of fact from incompetent reviewers

Bjoern Brembs has posted a lengthy complaint about the errors of fact made by incompetent reviewers of his grant application.

I get it. I really do. I could write a similar penetrating exposé of the incompetence of reviewers on at least half of my summary statements.

And I will admit that I probably have these thoughts running through my mind on the first six or seven reads of the summary statements for my proposals.

But I'm telling you. You have to let that stuff eventually roll off you like water off the proverbial duck's back. Believe me*.


Had Reviewer #1 been an expert in the field, they would have recognized that in this publication there are several crucial control experiments missing, both genetic and behavioral, to draw such firm conclusions about the role of FoxP.
These issues are not discussed in the proposal, as we expect the reviewers to be expert peers.

Speaking for the NIH grant system only, you are an idiot if you expect this level of "expert peer" as the assigned reviewers to each and every one of your applications. I am not going to pretend to be an expert in this issue but even I can suspect that the body of work on this area does not lead each and every person who is "expert" to the same conclusion. And therefore even an expert might disagree with Brembs on what reviewers should "recognize". A less-than-expert is going to be subject to a cursory or rapid reading of related literature or, perhaps, an incomplete understanding from a prior episode of attending to the issue.

As a grant applicant, I'm sorry, but it is your job to make your interpretations clear, particularly if you know there are papers pointing in different directions in the literature.

More 'tude from the Brembster:

For the non-expert, these issues are mentioned both in our own FoxP publication and in more detail in a related blog post.
These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

These are repeated several times triumphantly as if they are some excellent sick burn. Don't think like this. First, NIH reviewers are not expected to do a lot of outside research reading your papers (or others') to apprehend the critical information needed to appreciate your proposal. Second, NIH reviewers are explicitly cautioned not to follow links to sites controlled by the applicant. DO. NOT. EXPECT. REVIEWERS. TO. READ. YOUR. BLOG! ...or your papers.

With respect to "graduate student level", it will be better for you to keep in mind that many peers who do not work directly in the narrow topic you are proposing to study have essentially a graduate student level acquaintance with your topic. Write your proposal accordingly. Draw the reader through it by the hand.

*Trump voice

79 responses so far

Mar 17 2015

Your Grant in Review: Future Directions

One of the most perplexing things I have learned about the review of 5 year R01 NIH grant proposals is the species of reviewer that is obsessed with Future Directions.

It was a revelation to me in one of my first few study section meetings that some reviewers really want to see extensive comment on where the project might be heading after the completion of 5 years of work. As in, a whole subheaded paragraph at the end of the Research Plan. This is insane to me.


For the most part, we all recognize that ongoing results in your own lab and in the field at large are going to dictate what is important to pursue five years from now. So speculation about what is coming next is silly.

And especially when I was a relatively inexperienced grant writer who had been getting beat up for "over ambitious" plans contained in a single 5 year plan, well.... I was amazed that people wanted to see even more in a speculative, hand wavey paragraph.

Consequently, I struggle with this. But I have tried to include something about Future Directions in my proposals. Yes, even now that we have only 12 precious pages to describe the actual plans for the current proposal.

I have recently seen a summary statement that describes insufficient attention paid to the Future Directions as the "primary weakness" of the proposal. I cannot even imagine what this reviewer was thinking. How can this be the primary weakness? Unless there is literally nothing else to complain about. And we know that never happens.

70 responses so far

Mar 11 2015

Your Grant in Review: Effort and systems designed for amateur scientists

Since we're discussing the amount of PI salary that should be rightfully paid by the NIH versus a local University lately, I have a grant review scenario to mention.

It is not uncommon to see R01 proposals come in from PIs who say that they will charge the grant for "three months summer salary". As we know, this is likely a scenario where the Professor in question has a 9 month salary from his or her University and is permitted to supplement that with up to three months of salary from extramural support funds.

Let us assume we're talking a normal research plan for an R01 that involves research effort pretty much around the calendar year. We're not talking about something that requires focal field work for a few summer months and then can subside into a much lower level of activity for the rest of the year.

On first glance the reviewer can only assume that the PI's remaining 9 months are being paid by the University to DO SOMETHING. Despite comment from Neuro-conservative about situations that seem very strange and unique, my experience is that Universities put some expectation of non-research activity on that 9 month of salary*.

Unless the PI has specified an expectation of research in their official job description, the reviewer can only assume that the effort on the grant will only be available during the summer.

Such a proposal should be met with the utmost skepticism since the conduct of the research requires ongoing supervision of the staff**, at the very least. Right?

So the grantsmithing advice part of this post is that if you are in this sort of situation, be sure to make very clear what your University explicitly expects in terms of your nine-month-hard-salary time.

From the perspective of our ongoing discussion, how is this all supposed to work? What true amount of brain-second-cycles are available to the project at any given time throughout the year?

Teaching duties tend to be rather inelastic and research duties tend to be highly elastic. I can always put off working on a paper or data analysis for another day. I can pick and choose when to work on a poster or oral presentation. I can't really put off lecture at 8am just because I have some exciting results in the laboratory that I want to write up right now. Grading may be a teeensy bit more flexible but there are clear deadlines...unlike paper submissions and most unlike designing new research projects and/or collaborations. Also very unlike meeting with your grad students and postdocs about various things.

I would suggest that under the 50/50 time scenario proposed by Neuro-conservative, one of the two task demands is going to receive short-shrift in a large number of cases. This will mostly be determined by what type of University the PI is employed within. Those that lean towards research? Well, we all know about how the tenure stool really only has one leg. Research. Conversely, there are very high teaching load institutions that inevitably push research toward the background during the active instructional school year.

In these situations either the NIH is being fleeced to support undergraduate instruction or the undergraduate instruction support system (State general funds and tuition, the latter includes scholarships and the like btw, another interested party) is being fleeced to pay for the NIH's business.

The only ethical situation is when there is perfect balance between the expectations of the respective sources of financial support and the PI's actual distribution of work.

I do wonder how many NIH PIs that have nine month salary support actually achieve the appropriate balance of brain effort devoted to their respective tasks. I bet not many.

*I would like to hear some specific language from people's job descriptions that specify that their hard money effort is supposed to be devoted X amount to research, btw. I know these do exist. How commonly?

**Naturally these sorts of proposals are often coupled with 12 mo of full time effort from trainees or techs which supports the notion that the project is not limited to the summer months.

77 responses so far

Jan 30 2015

Your Grant in Review: Credible

I am motivated to once again point something out.

In ALL of my advice to submit grant applications to the NIH frequently and on a diversity of topic angles, there is one fundamental assumption.

That you always, always, always send in a credible application.

That is all.

17 responses so far
