Archive for the 'Grant Review' category

Grantsmack: The logic of hypothesis testing

Aug 26 2015 Published by under Grant Review, NIH, NIH Careerism

NIH grant review obsesses over testing hypotheses. Everyone knows this.

If there is a Stock Critique that is a more reliable way to kill a grant's chances than "There is no discernible hypothesis under investigation in this fishing expedition", I'd like to know what it is.

The trouble, of course, is that once you've been lured into committing to a hypothesis then your grant can be attacked for whether your hypothesis is likely to be valid or not.

A special case of this is when some aspect of the preliminary data that you have included even dares to suggest that perhaps your hypothesis is wrong.

Here's what bothers me. It is one thing if you have Preliminary Data suggesting some major methodological approach won't work. That is, that your planned experiment cannot result in anything like interpretable data that bears on the ability to falsify the hypothesis. This I would agree is a serious problem for funding a grant.

But any decent research plan will have experiments that converge to provide different levels and aspects of testing for the hypothesis. It shouldn't rest on one single experiment or it is a prediction, not a real hypothesis. Some data may tend to support and some other data may tend to falsify the hypothesis. Generally speaking, in science you are not going to get really clean answers every time for every single experiment. If you do.....well, let's just say those Golden Scientist types have a disproportionate rate of being busted for faking data.


If you have one little bit of Preliminary Data in your NIH Grant application that maybe, perhaps is tending to reject your hypothesis, why is this of any different value than if it had happened to support your hypothesis?

What influence should this have on whether it is a good idea to do the experiments to fully test the hypothesis that has been advanced?

Because that is what grant review should be deciding, correct? Whether it is a good idea to do the experiments. Not whether or not the outcome is likely to be A or B. Because we cannot predict that.

If we could, it wouldn't be science.

45 responses so far

Grantsmack: Overambitious

Aug 25 2015 Published by under Grant Review, NIH, NIH Careerism, NIH funding

If we are entering a period of enthusiasm for "person, not project" style review of NIH grants, then it is time to retire the criticism of "the research plan is overambitious".

There was a comment on the Twitters to the effect that this Stock Critique of "overambitious" is a lazy dismissal of an application. This deserves some breakdown, because simply dismissing stock criticisms as "lazy" review fails to address the real problem at hand.

First, it is always better to think of Stock Critique statements as shorthand rather than lazy.

Using the term "lazy" seems to imply that the applicant thinks his or her grant application deserves a full and meticulous point-by-point review regardless of whether the reviewer is inclined to award it a clearly-triagable, a clearly-borderline or a clearly-fundable score. Not so.

The primary job of the NIH Grant panel reviewer is most emphatically not to help the PI to funding nor to improve the science. The reviewer's job is to help the Program staff of the I or C to which the application has been assigned for potential funding decide whether or not to fund this particular application. Consequently, if the reviewer is able to succinctly communicate the strengths and weaknesses of the application to the other reviewers, and eventually to Program staff, this is efficiency, not laziness.

The applicant is not owed a meticulous review.

With this understood, we move on to my second point. The use of a Stock Critique is an efficient communicative tool when the majority of the review panel agrees that the substance underlying this review consideration is valid. That is, that the notion of a grant application being overambitious is relevant and, most typically, a deficiency in the application. This is, to my understanding, a point of substantial agreement on NIH review panels.

Note: This is entirely orthogonal to whether or not "overambitious" is being applied fairly to a given application. So you need to be clear about what you see as the real problem at hand that needs to be addressed.

Is it the notion of over-ambition being any sort of demerit? Or is your complaint about the idea that your specific plan is in fact over-ambitious?

Or are you concerned that it is unfair if the exact same plan is considered "over-ambitious" for you and "amazingly comprehensive vertically ascending and exciting" when someone else's name is in the PI slot?

Relatedly, are you concerned that this Stock Critique is being applied unjustifiably to certain suspect classes of PI?

Personally, I think "over-ambitious" is a valid critique, given my pronounced affection for the NIH system as project-based, not person-based. In this I am less concerned about whether everything the applicant has poured into this application will actually get done. I trust PIs (and more importantly, I trust the contingencies at work upon a PI) of any stage/age to do interesting science and publish some results. If you like all of it, and would give a favorable score to a subset that does not trigger the Stock Critique, who cares that only a subset will be accomplished*?

The concerning issue is that a reviewer cannot easily tell what is going to get done. And, circling back to the project-based idea, if you cannot determine what will be done as a subset of the overambitious plan, you can't really determine what the project is about. And in my experience, for any given application, there are usually going to be parts that really enthuse you as a reviewer and parts that leave you cold.

So what does that mean in terms of my review being influenced by these considerations? Well, I suppose the more a plan creates an impression of priorities and choice points, the less concern I will have. And the more of the experiments I am excited by, the less concern I will have: if only 50% of this is actually going to happen, the odds are still good when I am fired up about 90% of what has been described.

*Now, what about those grants where the whole thing needs to be accomplished or the entire point is lost? Yes, I recognize those exist. Human patient studies where you need to get enough subjects in all the groups to have any shot at any result would be one example. If you just can't collect and run that many subjects within the scope of time/$$ requested, well.....sorry. But these are only a small subset of the applications that trigger the "overambitious" criticism.

42 responses so far

Repost: Don't tense up

Aug 07 2015 Published by under Careerism, Grant Review, Grantsmanship, NIH, NIH Careerism

I've been in need of this reminder myself in the past year or so. This originally went up on the blog 25 September, 2011.

If you've been going through a run of disappointing grant reviews punctuated by nasty Third Reviewer comments, you tend to tense up.

Your next proposals are stiff...and jam packed with what is supposed to be ammunition to ward off the criticisms you've been receiving lately. Excessive citation of the lit to defend your hypotheses...and buffer concentrations. Review paper level exposition of your logical chain. Kitchen sink of preliminary data. Exhaustive detail of your alternate approaches.

The trouble is, then your grant is wall to wall text and nearly unreadable.

Also, all that nitpicky stuff? Sometimes it is just post hoc justification by reviewers who don't like the whole thing for reasons only tangentially related to the nits they are picking.

So your defensive crouch isn't actually helping. If you hook the reviewer hard with your big picture stuff they will often put up with a lot of seeming StockCritique bait.

25 responses so far

Thought of the Day

Jun 22 2015 Published by under BlogBlather, Grant Review, Grantsmanship

I cannot tell you how comforting it is to know that no matter the depths and pedantry of my grant geekery, there is always a certain person to be found digging away furiously below me.

9 responses so far

Gender smog in grant review

Jun 19 2015 Published by under Gender, Grant Review, NIH, NIH Careerism

I noticed something really weird and totally unnecessary.

When you are asked to review grants for the NIH, you are frequently sent a Word document review template that has the Five Criteria nicely outlined and a box for you to start writing your bullet points. At the header of each section, the template sometimes includes some of the wording about how you are supposed to approach that criterion.

A recent template I received says under Investigator that one is to describe how the

..investigator’s experience and qualifications make him particularly well-suited for his roles in the project?


12 responses so far

Re-Repost: The funding is the science II, "Why do they always drop the females?"

The NIH has recently issued the first round of guidance on inclusion of Sex as a Biological Variable in future NIH research grants. I am completely behind the spirit of the initiative but I have concerns about how well this is going to work in practice. I wrote a post in 2008 that detailed some of the reasons that have brought us to the situation where the Director of the NIH felt he had to coauthor an OpEd on this topic. I believe these issues are still present, will not be magically removed with new instructions to reviewers and need to be faced head-on if the NIH is to make any actual progress on ensuring SABV is considered appropriately going forward.

The post originally appeared December 2, 2008.

The title quote came from one of my early, and highly formative, experiences on study section. In the course of discussing a revised application it emerged that the prior version of the application had included a sex comparison. The PI had chosen to delete that part of the design in the revised application, prompting one of the experienced members of the panel to ask, quite rhetorically, "Why do they always drop the females?"

I was reminded of this when reading over Dr. Isis' excellent post [Update: Original Sb post lost, I think the repost can be found here] on the, shall we say less pernicious, ways that the course of science is slanted toward doing male-based research. Really, go read that post before you continue here, it is a fantastic description.

What really motivated me, however, was a comment from the always insightful Stephanie Z:

Thank you. That's the first time I've seen someone address the reasons behind ongoing gender disparities in health research. I still can't say as it thrills me (or you, obviously), but I understand a bit better now.

Did somebody ring?

As I pointed out explicitly at least once ([Update: Original 2007 post]), research funding has a huge role in what science actually gets conducted. Huge. In my book this means that if one feels that an area of science is being systematically overlooked or minimized, one might want to take a close look at the manner by which science is funded and the way by which science careers are sustained as potential avenues for systematic remedy.


There are a couple of ways in which the generalized problems with NIH grant review lead to the rhetorical comment with which I opened the post. One very common StockCritique of NIH grant review is that of an "over ambitious" research plan. As nicely detailed in Isis' post, the inclusion of a sex comparison doubles the groups right off the bat but even more to the point, it requires the inclusion of various hormonal cycling considerations. This can be as simple as requiring female subjects to be assessed at multiple points of an estrous cycle. It can be considerably more complicated, often requiring gonadectomy (at various developmental timepoints) and hormonal replacement (with dose-response designs, please) including all of the appropriate control groups / observations. Novel hormonal antagonists? Whoops, the model is not "well established" and needs to be "compared to the standard gonadectomy models", LOL >sigh<.

Grant reviewers prefer simplicity
Keep in mind, if you will, that there is always a more fundamental comparison or question at the root of the project, such as "does this drug compound ameliorate cocaine addiction?" So all the gender comparisons, designs and groups need to be multiplied against the cocaine addiction/treatment conditions. Suppose it is one of those cocaine models that requires a month or more of training per group? Who is going to run all those animals? How many operant boxes / hours are available? And at what cost? Trust me, the grant proposal is going to take fire for "scope of the project".
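To make the combinatorial arithmetic concrete, here is a rough back-of-the-envelope sketch; every factor level and the n-per-group figure are hypothetical, purely for illustration:

```python
# Rough sketch of how adding a sex comparison multiplies experimental groups.
# All factor levels and n-per-group values are hypothetical.

estrous_phases = ["proestrus", "estrus", "diestrus"]          # females only
hormone_conditions = ["sham", "gonadectomy", "gonadectomy + replacement"]
drug_conditions = ["vehicle", "low dose", "high dose"]        # the question of actual interest
n_per_group = 8

# Males: hormone condition x drug condition
male_groups = len(hormone_conditions) * len(drug_conditions)

# Females: estrous phase x hormone condition x drug condition
female_groups = len(estrous_phases) * len(hormone_conditions) * len(drug_conditions)

total_groups = male_groups + female_groups

print(f"male-only design: {male_groups} groups, {male_groups * n_per_group} animals")
print(f"with sex comparison: {total_groups} groups, {total_groups * n_per_group} animals")
# male-only design: 9 groups, 72 animals
# with sex comparison: 36 groups, 288 animals
```

Multiply that by a month or more of operant training per group and the "scope of the project" critique writes itself.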

Another StockCritique to blame is "feasibility". Two points here really. First is the question of Preliminary Data: of course if you have to run more experimental conditions to establish that you might have a meritorious hypothesis, you are less likely to do it with a fixed amount of pilot/startup/leftover money. Better to work on preliminary data for two or three distinct applications over just one if you have the funds. The second aspect has to do with a given PI's experience with the models in question. More opportunity to say "The PI has no idea what s/he is doing methodologically" if s/he has no prior background with the experimental conditions, which are almost always the female-related ones. As we all know, it matters little that the hormonal assays or gonadectomy or whatever procedures have been published endlessly if you don't have direct evidence that you can do it. Of course, more latitude is extended to the more-experienced investigator....but then s/he is less likely to jump into gender-comparisons in a sustained way in contrast to a newly minted PI.

Then there are the various things under grantspersonship. You have limited space in a given type of grant application. The more groups and comparisons, the more you have to squeeze in with respect to basic designs, methods and the interpretation/alternative approaches part. So of course you leave big windows for critiques of "hasn't fully considered...." and "it is not entirely clear how the PI will do..." and "how the hypothesis will be evaluated has not been sufficiently detailed...".


Although research funding plays a huge role in career success, it is only part of the puzzle. Another critical factor is what we consider to be "great" or "exciting" science in our respective fields.

The little people can fill in the details. This is basically the approach of GlamourMagz science. (This is a paraphrase of something the most successful GlamourMagz PI I know actually says.) Cool, fast and hot is not compatible with the metastasizing of experimental conditions that is an inevitable feature of gender-comparison science. Trouble is, this approach tends to trickle down in various guises. Lower (than GlamourMag) impact factor journals sometimes try to upgrade by becoming more NS-like (Hi, J Neuro!). Meticulous science and exacting experimental designs are only respected (if at all) after the fact. Late(r) in someone's career they start getting props on their grant reviews for this. Early? Well the person hasn't yet shown the necessity and profit for the exhaustive designs and instead they just look...unproductive. Like they haven't really shown anything yet.

As we all know splashy CNS pubs on the CV trump a sustained area of contribution in lower journals six ways to Sunday. This is not to say that nobody will appreciate the meticulous approach, they will. Just to say that high IF journal pubs will trump. Always.

So the smart young PI is going to stay away from those messy sex-differences studies. Everything tells her she should. If she does dip a toe, she's more likely to pay a nasty career price.

This is why NIH efforts to promote sex-comparison studies are necessary. Promoting special funding opportunities is the only way to tip the equation even slightly in favor of the sex-differences side. The lure of the RFA is enough to persuade the experienced PI to write in the female groups. To convince the new PI that she might just risk it this one time.

My suspicion is that it is not enough. Beyond the simple need to take a stepwise approach to the science as detailed by Isis, the career and funding pressures are irresistible forces.

9 responses so far

Thoughts on NIH grant strategy from Associate Professor H. Solo

We spend a fair amount of time talking about grant strategy on this blog. Presumably, this is a reflection of an internal process many of us go through trying to decide how to distribute our grant writing effort so as to maximize our chances of getting funded. After all, we have better things to do than write grants.

So we scrutinize success rates for various ICs, various mechanisms, FOAs, etc., as best we are able. We flog RePORTER for evidence of which study sections will be most sympathetic to our proposals and how to cast our applications so as to be attractive. We worry about how to construct our Biosketch and who to include as consultants or collaborators. We obsess over how much preliminary data is enough (and too much*).

This is all well and good and maybe it even helps.

But at some level, you have to follow your gut, too. Even when the odds seem overwhelmingly bad, there are going to be times when dang it, you just feel like this is the right thing to do.

Submitting an R01 on very thin preliminary data because it just doesn't work as an R21 perhaps.

Proposing an R03 scope project even if the relevant study section has only one** of them funded on the RePORTER books.

Submitting your proposal when the PO who will likely be handling it has already told you she hates your Aims***.

Revising that application that has been triaged twice**** and sending it back in as an A2-as-A0 proposal.

I would just advise that you take a balanced approach. Make your riskier attempts, sure, but balance those with some less risky applications too.

I view it as....experimenting.

*Just got a question about presenting too much preliminary data the other day.

**of course you want to make sure there is not a structural issue at work, such as the section having stopped reviewing this mechanism two years ago.

***1-2%ile scores have a way of softening the stony cold heart of a Program Officer. Within-payline skips are very, very rare beasts.

****one of my least strategic behaviors may be in revising grants that have been triaged. Not sure I've ever had one funded after initial triage and yet I persist. Less so now than I used to but.....I have a tendency. Hard headed and stupid, maybe.

13 responses so far

NIH Program Officers do not understand what happens during review

May 22 2015 Published by under Grant Review, Grantsmanship, NIH, NIH Careerism

It is one of the most perplexing things of my career and I still don't completely understand why this is the case. But it is important for PIs, especially those who have not yet experienced study section, to understand a simple fact of life.

The NIH Program Officers do not completely understand what contributes to the review and scoring of your grant application.

My examples are legion and I have mentioned some of them in prior blog posts over the years.

The recent advice from NIAID on how to get your grant to fit within a modular budget limit.

The advice from a PO that PIs (such as myself) just needed to "write better grants" when I was already through a stint on study section and had read many, many crappy and yet funded grants from more established investigators.

The observation that transitioning investigators "shouldn't take that job" because it was soft money and K grants were figuring heavily in the person's transition/launch plans.

Apparently honest wonder that reviewers do not read their precious Program Announcements and automatically award excellent scores to applications just because they align with the goals of the PA.

Ignorance of the revision queuing that was particularly endemic during the early part of my career (and pretend? ignorance that limiting applications to one revision round made no functional difference in this).

The "sudden discovery" that all of the New Investigator grants during the checkbox era were going to well-established investigators who simply happened not to have NIH funding before, instead of boosting the young / recently appointed investigators.

An almost comically naive belief that study section outcome for grants really is an unbiased reflection of grant merit.

I could go on.

The reason this is so perplexing to me is that this is their job. POs [eta: used to] sit in on study section meetings or listen in on the phone, at least three times a year and probably more often, given various special emphasis panels and the assignment of grants that might be reviewed in any of several study sections. They even take notes and are supposed to give feedback to the applicant with respect to the tenor of the discussion. They read any and all summary statements that they care to. They read (or can read) a nearly dizzying array of successful and unsuccessful applications.

And yet they mostly seem so ignorant of dynamics that were apparent to me after one, two or at the most three study section meetings.

It is weird.

The takeaway message for less NIH-experienced applicants is that the PO doesn't know everything. I'm not saying they are never helpful....they are. Occasionally very helpful. Difference between funded and not-funded helpful. So I fully endorse the usual advice to talk to your POs early and often.

Do not take the PO word for gospel, however. Take it under advisement and integrate it with all of your other sources of information to try to decide how to advance your funding strategy.

25 responses so far

Crystal clear grant advice from NIAID

May 21 2015 Published by under Grant Review, Grantsmanship, NIH, NIH Careerism

from this Advice Corner on modular budgeting:

As you design your research proposal, tabulate a rough cost estimate. If you are above but near the $250,000 annual direct cost threshold, consider ways to lessen your expenses. Maybe you have a low-priority Specific Aim that can be dropped or a piece of equipment you could rent rather than buy new.
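For what it is worth, the tabulation being asked for is simple arithmetic against the modular cap. A minimal sketch, with entirely hypothetical line items and dollar figures:

```python
# Back-of-the-envelope check against the modular budget threshold.
# The $250,000 annual direct cost cap comes from the NIAID advice above;
# every line item and dollar amount here is hypothetical.

MODULAR_CAP = 250_000  # annual direct costs, in dollars

rough_estimate = {
    "personnel (salary + fringe)": 160_000,
    "animal costs": 40_000,
    "supplies": 35_000,
    "equipment purchase": 25_000,
    "travel / publication": 5_000,
}

total = sum(rough_estimate.values())
print(f"estimated annual direct costs: ${total:,}")

if total > MODULAR_CAP:
    print(f"over the cap by ${total - MODULAR_CAP:,}: "
          "drop a low-priority Aim or rent rather than buy equipment")
else:
    print("fits within a modular budget")
```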

H/t: PhysioProf

Related Reading:

Sample Grants

26 responses so far

Fighting with the New Biosketch format

I have been flailing around, off and on for a few months, trying to write my Biosketch into the new format [Word doc Instructions and Sample].


I am not someone who likes to prance around bragging about "discoveries" and unique contributions and how my lab's work is so awesomely unique because, let's face it, I don't do that kind of work. I am much more of a work-a-day type of scientist who likes to demonstrate stuff that has never been shown before. I like to answer what are seemingly obvious questions for which there should be lots of literature but then it turns out that there is not. I like to work on what interests me about the world and I am mostly uninterested in what some gang of screechy monkey GlamourHumpers think is the latest and greatest.


This is getting in the way of my ability to:

Briefly describe up to five of your most significant contributions to science. For each contribution, indicate the historical background that frames the scientific problem; the central finding(s); the influence of the finding(s) on the progress of science or the application of those finding(s) to health or technology; and your specific role in the described work.

Now interestingly, it was someone who works in a way most unlike my own who showed me the light. Actually, he gave me the courage to think about ignoring this supposed charge in the sample / instruction document. This person recommended just writing a brief sentence or two about the area of work without trying to contextualize the importance or significance of the "contribution". I believe I actually saw one of the five permitted subheadings on his version that was more or less "And here's some other stuff we work on that wasn't easily categorized with the rest of it."

I am at least starting from this minimalist standpoint. I don't know if I will have the courage to actually submit it like this, but I'm leaning towards doing so.

I have been hearing from quite a number of you that you are struggling with creating this new version of the NIH Biosketch. So I thought I'd open it up to comment and observation. Anyone have any brilliant solutions / approaches to recommend?

One of the things that has been bothering me most about this is that it takes the focus off of your work that is specific to the particular application in question. In the previous version of the Biosketch, you selected 15 pubs that were most directly relevant to the topic at hand. These may not be your "most significant contributions" but they are the ones that are most significant for the newly proposed studies.

If one is now to list "your most significant contributions", well, presumably some of these may not have much to do with the current application. And if you take the five sections seriously, it is hard to parse the subset of your work that is relevant to one focal R01-sized project into multiple headings and still show how those particular aspects are a significant contribution.

I still think it is ridiculous that they didn't simply make this an optional way to do the Biosketch so as to accommodate those people who needed to talk about non-published scholarly works.

64 responses so far
