Archive for the 'Grant Review' category

Great lens to use on your own grants

Aug 26 2016 Published by under Grant Review, NIH, NIH Careerism

If your NIH grant proposal reads like this, it is not going to do well.

9 responses so far

Your Grant in Review: Throwing yourself on the mercy of the study section court

Aug 24 2016 Published by under Careerism, Grant Review, NIH, NIH Careerism, Uncategorized

A question and complaint from commenter musclestumbler on a prior thread introduces the issue.

So much oxygen is sucked up by the R01s, the med schools, etc. that it tends to screw over reviews for the other mechanisms. I look at these rosters, then look at the comments on my proposals, and it's obvious that the idea of doing work without a stable of postdocs and a pool of exploitable Ph.D. students is completely alien and foreign to them.

and extends:

I personally go after R15 and R03 mechanisms because that's all that can be reasonably obtained at my university. ... Postdocs are few and far between. So we run labs with undergrads and Masters students. Given the workload expectations that we have in the classroom as well as the laboratory, the R15 and R03 mechanisms support research at my school. Competing for an R01 is simply not in the cards for the productivity level that we can reasonably pursue...

This isn't simply fatalism, this is actual advice given by multiple program officers and at workshops. These mechanisms are in place to facilitate and foster our research. Unfortunately, these are considered and reviewed by the same panels that review R01s. We are not asking that they create an SEP for these mechanisms - a "little kids table" if you will - but that the panels have people with these similar institutions on them. I consider it a point of pride that my R15 is considered by the same reviewers that see the R01s, and successfully funded as well.

The point is that, the overwhelming perception and unfortunate reality is that many, many, many of the panelists have zero concept of the type of workload model under which I am employed. And the SROs have a demonstrably poor track record of encouraging institutional diversity. Sure, my panel is diverse- they have people from a medical school, an Ivy League school, and an endowed research institution on the West Coast. They have Country, and Western!

I noted the CSR webpage on study section selection says:

Unique characteristics of study sections must be factored into selection of members. The breadth of science, the multidisciplinary or interdisciplinary nature of the applications, and the types of applications or grant mechanisms being reviewed play a large role in the selection of appropriate members.

It seems very much the case to me that if R15s are habitually reviewed in sections without the participation of any reviewers from R15-eligible institutions, this is a violation of the spirit of this clause.

I suggested that this person should bring this up with their favorite SROs and see what they have to say. Now that there is a form for requesting "appropriate expertise" when you submit your NIH grant, it may also be useful to use it to say something about R15-eligible reviewers.

But ultimately we come to the "mercy of the court" aspect of this issue. It is my belief that while yes, the study section is under very serious constraints these days, it is still a human enterprise that occasionally lets real humans make rational decisions. Sometimes, reviewers may go for something that is outside of the norm. Outside of the stereotype of what "has" to be in a proposal of this type. Sometimes, reviewers may be convinced by the peculiarities of a given situation to, gasp, give you a break. So I suggested the following for this person, who had just indicated that his/her R15s do perfectly well in a study section that they think would laugh off their R01 application.

I think this person should try a trimmed down R01 in this situation. Remember the R01 is the most flexible in terms of scope- there is no reason you cannot match it to the budget size of any of the other awards. The upside is that it is for up to five years, better than AREA/R15 (3 y) or R03 (2 y). It is competitively renewable, which may offer advantages. It is an R01, which, as we are discussing in that other thread, may be the key to getting treated like a big kid when it comes to study section empanelment.

The comments from musclestumbler make it sound as if the panels can actually understand the institutional situation, just so long as they are focused on it by the mechanism (R15). The R15 is $100K direct for three years, no? So why not propose an R01 for $100K direct for five years? Or if you, Dear Reader, are operating at an R03 level, ask for $50K direct or $75K direct. And I would suggest that you don't just leave this hidden in the budget: sprinkle wording throughout that refers to this being a go-slow but very inexpensive (compared to full mod) project.
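To make the budget arithmetic concrete, here is a quick back-of-envelope sketch in Python using the figures discussed above. The per-year amounts and durations are illustrative assumptions; actual caps vary by mechanism, IC and fiscal year.

```python
# Back-of-envelope totals in direct costs, using the figures discussed
# above. Per-year amounts and durations are illustrative assumptions;
# actual caps vary by mechanism, IC, and fiscal year.
mechanisms = {
    # name: (direct $ per year, years of support)
    "R03": (50_000, 2),
    "R15/AREA": (100_000, 3),
    "go-slow R01": (100_000, 5),   # the trimmed-down R01 suggested above
    "full-modular R01": (250_000, 5),
}

for name, (per_year, years) in mechanisms.items():
    total = per_year * years
    print(f"{name:>16}: ${per_year:>7,}/yr x {years} yr = ${total:>9,} total direct")
```

The point the numbers make: the go-slow R01 costs the IC less than half of a full-modular R01 over the same five years, while still buying the PI the renewability and big-kid status discussed above.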

Be very clear about your time commitment (summers only? fine, just make it clear) and the use of undergrads (predict the timeline and research pace) in much the same way you do for an R15 but make the argument for a longer term, renewable R01. Explain why you need it for the project, why it is justified and why a funded version will be productive, albeit at a reduced pace. See if any reviewers buy it. I would.

Sometimes you have to experiment a little with the NIH system. You'd be surprised how many times it works in ways that are not exactly the stereotypical and formal way things are supposed to work.

27 responses so far

Nakamura reports on the ECR program

Jun 17 2016 Published by under Fixing the NIH, Grant Review, NIH, NIH Careerism

If I stroke out today it is all the fault of MorganPhD.

Jeffrey Mervis continues his coverage of the NIH review situation as it pertains to the funding disparity for African-American PIs identified in 2011 (that's five years and fifteen funding rounds ago, folks) by the Ginther report.

The main focus for this week is on the Early Career Reviewer program. As you will recall, this blog has advocated continually and consistently for the participation of more junior PIs on grant review panels.

The ECR program was created explicitly to deal with underrepresented groups. What happened, however, was immediate opposition insisting that the ECR program had to be open to all junior faculty/applicants, regardless of representation in the NIH game.

One-quarter of researchers in ECR's first cohort were from minority groups, he notes. “But as we've gone along, there are fewer underrepresented minorities coming into the pool.”
...
Minorities comprise only 13% of the roughly 5100 researchers accepted into the program (6% African-American and 7% Hispanic), a percentage that roughly matches their current representation on study sections.

Ok, but how have the ECR participants fared?

[Nakamura] said ECR alumni have been more than twice as successful as the typical new investigator in winning an R01 grant.

NIIIIIICE. Except they didn't flog the data as hard as one might hope. This comparison is against the entire NI (or ESI?) population.

The pool of successful ECR alumni includes those who revised their application, sometimes more than once, after getting feedback on a declined proposal. That extra step greatly improves the odds of winning a grant. In contrast, the researchers in the comparison group hadn't gone through the resubmission process.

Not sure if this really means "hadn't" or "hadn't necessarily". The latter makes more sense if they are just comparing to aggregate stats. CSR data miners would have had to work harder to isolate those who hadn't revised yet, and I suspect that if they had gone to that effort, they could have presented the ESIs who had at least one revision under their belt. But what about the underrepresented group of PIs that is the focus of all this effort?

It's also hard to interpret the fact that 18% of the successful ECRs were underrepresented minorities because NIH did not report the fraction of minorities among ECR alumni applicants. So it is not clear whether African-Americans participating in the program did any better than the cohort as a whole—suggesting that the program might begin to close the racial gap—or better than a comparable group of minority scientists who were not ECR alumni.

SERIOUSLY Richard Nakamura? You just didn't happen to request your data miners do the most important analysis? How is this even possible?

How on earth can you not be keeping track of applicants to ECR, direct requests from SROs, response rate and subsequent grant and reviewing behavior? It is almost as if you want to look like you are doing something but have no interest in it being informative or in generating actionable intelligence.

Moving along, we get a further insight into Richard Nakamura and his position in this situation.

Nakamura worries that asking minority scientists to play a bigger role in NIH's grantsmaking process could distract them from building up their lab, finding stable funding, and earning tenure. Serving on a study section, he says, means that “those individuals will have less time to write applications. So we need to strike the right balance.”

Paternalistic nonsense. The same thing that Scarpa tried to use to justify his purge of Assistant Professors from study sections. My answer is the same. Let them decide. For themselves. Assistant Professors and underrepresented PIs can decide for themselves if they are ready and able to take up a review opportunity when asked. Don't decide, paternalistically, that you know best and will refrain from asking for their own good, Director Nakamura!

Fascinatingly, Mervis secured an opinion that echoes this. So Nakamura will surely be reading it:

Riggs, the only African-American in his department, thinks the program is too brief to help minority scientists truly become part of the mainstream, and may even exacerbate their sense of being marginalized.

“After I sat on the panel, I realized there was a real network that exists, and I wasn't part of that network,” he says. “My comments as a reviewer weren't taken as seriously. And the people who serve on these panels get really nervous about having people … that they don't know, or who they think are not qualified, or who are not part of the establishment.”

If NIH “wants this to be real,” Riggs suggests having early-career researchers “serve as an ECR and then call them back in 2 years and have them serve a full cycle. I would have loved to do that.”

The person in the best position to decide what is good or bad for their career is the investigator themself.

This comment also speaks to my objection to the ECR as a baby-intro version of peer review. It isn't necessary. I first participated on study section in my Asst Prof years as a regular ad hoc with a load of about six grants, iirc. Might have been two fewer than the experienced folks had, but it was not a baby-trainee experience in the least. I was treated as a new reviewer, but that was about the extent of it. I thought I was taken seriously and did not feel patronized.

__
Related Reading:
Toni Scarpa to leave CSR

More on one Scientific Society’s Response to the Scarpa Solicitation

Your Grant In Review: Junior Reviewers Are Too Focused on Details

The problem is not with review…

Peer Review: Opinions from our Elders

23 responses so far

Your Grant in Review: Power analysis and the Vertebrate Animals Section

Feb 11 2016 Published by under Grant Review, Grantsmanship, NIH funding

As a reminder, the NIH issued a warning about the upcoming Simplification of the Vertebrate Animals Section of NIH Grant Applications and Contract Proposals.

Simplification! Cool, right?

There's a landmine here.

For years the statistical power analysis was something that I included in the General Methods at the end of my Research Strategy section. More recently, a growing insistence on the part of OLAW that a proper Vertebrate Animals Section include the power analysis influenced me to drop it from the Research Strategy. It had become a word-for-word duplication, so it seemed worth the risk to regain the page space.

The notice says:

Summary of Changes
The VAS criteria are simplified by the following changes:

  • A description of veterinary care is no longer required.

  • Justification for the number of animals has been eliminated.

  • A description of the method of euthanasia is required only if the method is not consistent with AVMA guidelines.


This means that if I continue with my current strategy, I'm going to start seeing complaints about "where is the power analysis" and "hey buddy, stop trying to evade page limits by putting it in the VAS".

So back to the old way we must go. Leave space for your power analysis, folks.
__
If you don't know much about doing a power analysis, this website is helpful: http://homepage.stat.uiowa.edu/~rlenth/Power/
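If you would rather script the calculation than use a web page, here is a minimal sketch using Python's statsmodels package for the common two-group t-test case. The effect size, alpha and power values below are placeholders, not recommendations; substitute estimates from your own pilot data.

```python
# Minimal power analysis sketch: subjects per group for a two-sample
# t-test. The effect size, alpha, and power below are placeholders --
# substitute estimates from your own pilot/preliminary data.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.8,          # Cohen's d (placeholder)
    alpha=0.05,               # Type I error rate
    power=0.80,               # desired power
    alternative="two-sided",
)
print(f"~{n_per_group:.1f} subjects per group")  # ~25.5 for these inputs
```

The same `solve_power` call can be inverted: fix `nobs1` and solve for `power` to show reviewers what your planned group sizes actually buy you.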

17 responses so far

Your Grant in Review: Competing Continuation, aka Renewal, Apps

Jan 28 2016 Published by under Grant Review, NIH, NIH Careerism

In the NIH extramural grant funding world the maximum duration for a project is 5 years. It is possible at the end of a 5 year interval of support to apply to continue that project for another interval. The application for the next interval is, in general, competitively reviewed alongside new project proposals in the relevant study sections.

Comradde PhysioProffe addressed the continuation application at his Ftb joint. NIAID has a FAQ page.

The NIH Success Rate data shows that RPG success rates were 16.8% in 2013 and 18.1% in 2014. Comparable rates for competing continuation RPG applications were 35% in 2013 and 39% in 2014. So you can see why this is important.

I visited these themes before in a prior post. I think I covered most of the issues but in a slightly different way.

Today I want to try to get you folks to talk about prescriptives. How should a competing continuation / renewal NIH grant application be reviewed?

Now in my experience, the continuation application hinges on past productivity in a way that a new application does not. Reviewers are explicitly considering the work that has been conducted under the support of the prior award. The application is supposed to include a list of publications that have resulted from the prior award. It is also supposed to include a Progress Report that overviews what has been accomplished. So today I will be focusing on review mostly as it pertains to productivity. For reference, Berg's old post on the number of papers per grant dollar is here and shows an average output of 6 papers (IQR about 4-11) per $250K full modular award*.

Quoted bits are from my prior post.

Did you knock our socks off? This could be amazing ELEVENTY type findings, GlamourPub record (whether “expected” for your lab or not), unbelievably revolutionary advances, etc. If you have a record of this, nobody is going to think twice about what your Aims may have been. Probably won’t even give a hoot whether your work is a close match to the funding IC, for that matter.

We should probably separate these for discussion because, after all, how often is a panel going to recognize that a Nobel Prize type of publication has been supported by the award in the past 5 years? So maybe we should consider Glamour publications and amazing advances as two different scenarios. Are these going to push a renewal application over the hurdle for you even if the remaining items below are lacking? Does GlamMag substitute for direct attention to the experiments that were proposed or the Aims that guided the plan? In the extreme case, should we care if the work bears very little on the mission of the IC that has funded it?

Were you productive? Even if you didn’t WOW the world, if you’ve pumped out a respectable number of papers that have some discernible impact on a scientific field, you are in good shape. The more, the merrier. If you look “fabulously productive” and have contributed all kinds of interesting new science on the strength of your award(s), this is going to go down like gangbusters with the review panels. At this level of accomplishment you’d probably be safest at least be doing stuff that is vaguely in line with the IC that has funded your work.

Assuming that Glam may not be in the control of most PIs but that pedestrian, workaday scientific output is, should this be a major credit for the continuation application? We don't necessarily have to turn this into a LPU sausage-slicing discussion. Let's assume a quality of paper commensurate with the kind of work that most PIs with competitive applications in that particular study section publish. Meets the subfield standard. How important should raw productivity be?

Were you productive in addressing your overall goals? This is an important distinction from the Specific Aims. It is not necessary, in my view, that you hew closely to Aims first dreamed up 7 years prior to the conclusion of the actual study. But if you have moderate, or disappointing, productivity it is probably next most-helpful that you have published work related to the overall theme of the project. What was the big idea? What was mentioned in the first three sentences of your Specific Aims page? If you have published work related to this broad picture, that’s good.

This one is tricky. The reviewers do not have the prior grant application in front of them. They have the prior Summary Statement and the Abstract as published on RePORTER. It is a decent bet the prior Aims can be determined, but broader themes may or may not come across. So for the most part, if the applicant expects the reviewers to see that productivity has aligned with overarching programmatic goals, she has to tell them what those were. Presumably in the Progress Report part of the continuation application. How would you approach this as a reviewer? Suppose the project wasn't overwhelmingly productive and didn't obviously address all of the Aims, but at least generated some solid work along the general themes. Are you going to be satisfied? Or are you going to downgrade the failure to address each Aim? What if the project had to can an entire Aim or two? Would it matter? Is getting "stuck" in a single Aim a death knell when it comes time to review the next interval of support? And as a related question, what if the same exact Aim returns with the argument of "We didn't get to this in the past five years but it is still a good idea"? Neutral? Negative? AYFK?

Did you address your original Specific Aims? ...this can be a big obsession of certain reviewers. Not saying it isn’t a good idea to have papers that you can connect clearly to your prior Aims. ... A grant is not a contract. It is quite natural in the course of actual science that you will change your approaches and priorities for experiments. Maybe you’ve been beaten to the punch. Maybe your ongoing studies tell you that your original predictions were bad and you need to go in a whole new direction. Maybe the field as a whole has moved on. ... You might want to squeeze a drop out of a dry well to meet the “addressed Aims” criterion but maybe that money, effort and time would be better spent on a new direction which will lead to three pubs instead of one?

My original formulation of this isn't quite right for today's discussion. The last part is actually more relevant to the preceding point. For today, expand this to a continuation application that shows that the prior work essentially covers exactly what the application proposed. With data either published or included as ready-to-submit Preliminary Data in the renewal. Maybe this was accomplished with only a few papers in pedestrian journals (Lord knows just about every one of my manuscript reviews these days gets at least one critique that calls for anywhere from 2 to 5 Specific Aims worth of data) so we're not talking about Glam or fabulous productivity. But should addressing all of the Aims and most if not all of the proposed experiments be enough? Is this a credit to a competing continuation application?

It will be unsurprising to you that by this point of my career, I've had competing continuation applications to which just about all of these scenarios apply, save Glam. We've had projects where we absolutely nailed everything we proposed to do. We've had projects get distracted/sidelined off onto a subsection of the proposal that nevertheless generated about the same number and quality of publications that would have otherwise resulted. We've had low productivity intervals of support that addressed all the Aims and ones that merely covered a subset of key themes. We've had projects with reasonably high productivity that have....wandered....from the specifics of the awarded proposal due to things that are happening in the subfield (including getting scooped). We've never been completely blanked on a project with zero related publications to my recollection, but we've had some very low productivity ones (albeit with excellent excuses).

I doubt we've ever had a perfect storm of sky-high productivity, all Aims addressed and the overarching themes satisfied. Certainly I have the review comments to suggest this**.

I have also been present during review panel discussions of continuation applications where reviewers have argued bitterly over the various productivity attributes of a prior interval of support. The "hugely productive" arguments are frequently over an application from a PI who has more than one award and tends to acknowledge more than one of them on each paper. This can also involve debates about so called "real scientific progress" versus papers published. This can be the Aims, the overall theme or just about the sneer of "they don't really do any interesting science".

I have for sure heard from people who are obsessed during review with whether each proposed experiment has been conducted (this was back in the days when summary statements could be fairly exhaustive and revealed what was in the prior application to a broader extent). More generally from reviewers who want to match publications up to the scope of the general scientific terrain described by the prior application.

I've also seen arguments about suggested controls or key additional experiments which were mentioned in the summary statement of the prior review, never addressed in the resulting publications and may still be a criticism of the renewal application.

Final question: Since the reviewers of the competing continuation see the prior summary statement, they see the score and percentile. Does this affect you as a reviewer? Should it? Especially if, in your view, this particular application should never have been funded at that score and is likely a Programmatic pickup? Do you start steaming under the collar about special ESI paylines or bluehair/graybeard insider PO backslapping?

DISCLAIMER: As per usual, I may have competing continuation applications under current or near-future review by NIH study sections. I am an interested party in how they are reviewed.
__
*This probably speaks to my point about how multi-award PIs attribute more than one grant on each paper. My experience has not been that people in my field view 5 papers published per interval of support (and remember, the renewal application is submitted with the final year of funded support yet to go, if the project is to continue uninterrupted) as expected value. It is certainly not viewed as the kind of fabulous productivity that would of course justify continuing the project. It is more in line with the bare minimum***. Berg's data are per grant dollar, of course, and are not exactly the same as per grant. But it is a close estimate. This blog post estimates "between 0.6 and 5 published papers per $100k in funding," which works out to roughly one to 12 per year of a full-modular NIH R01. Big range, and that high number seems nigh on impossible to me without other funding (like free trainee labor or data parasitism).
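For what it's worth, the conversion in that quote is straightforward to check, assuming a full-modular R01 at $250K direct per year:

```python
# Checking the quoted conversion: papers per $100K -> papers per funded
# year of a full-modular R01, assumed here at $250K direct per year.
full_modular_per_year = 250_000
low, high = 0.6, 5.0                    # papers per $100K (blog estimate)
scale = full_modular_per_year / 100_000
print(f"{low * scale:.1f} to {high * scale:.1f} papers per funded year")
# -> 1.5 to 12.5, i.e. roughly the "one to 12" in the text
```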

**and also a pronounced lack of success renewing projects to go with it.

***I do not personally agree. At the point of submitting a competing continuation in year 4 a brand new research program (whether b/c noob PI or very new lab direction) may have really only been rocking for 2 years. And large integrated projects like a big human subjects effort may not even have enrolled all the subjects yet. Breeding, longitudinal development studies, etc - there are many models that can all take a long time to get to the point of publishing data. These considerations play....let us say variably, with reviewers. IME.

15 responses so far

Thought of the Day

Oct 16 2015 Published by under Grant Review, Grantsmanship, NIH, NIH Careerism

I know this NIH grant game sucks.

I do.

And I feel really pained each time I get email or Twitter messages from one of my Readers (and there are many of you, so this isn't as personal as it may seem to any given Reader) who is desperate to find the sekrit button that will make the grant dollars fall out of the hopper.

I spend soooooo much of my discussion on this blog trying to explain that NOBODY CAN TELL YOU WHERE THE SEKRIT BUTTON IS BECAUSE IT DOESN'T EXIST!!!!!!!!!!!!

Really. I believe this down to the core of my professional being.

Sometimes I think that the problem here is the just-world fallacy at work. It is just so dang difficult to give up on the notion that if you just do your job, the world will be fair. If you do good work, you will eventually get the grant funding to support it. That's what all the people you trained around seemed to experience and you are at least as good as them, better in many cases, so obviously the world owes you the same sort of outcome.

I mean yeah, we all recognize things are terrible with the budget and we expect it to be harder but.....maybe not quite this hard?

I feel it too.

Belief in a just world is really hard to shed.

69 responses so far

Grantsmack: The logic of hypothesis testing

Aug 26 2015 Published by under Grant Review, NIH, NIH Careerism

NIH grant review obsesses over testing hypotheses. Everyone knows this.

If there is a Stock Critique that is a more reliable way to kill a grant's chances than "There is no discernible hypothesis under investigation in this fishing expedition", I'd like to know what it is.

The trouble, of course, is that once you've been lured into committing to a hypothesis then your grant can be attacked for whether your hypothesis is likely to be valid or not.

A special case of this is when some aspect of the preliminary data that you have included even dares to suggest that perhaps your hypothesis is wrong.

Here's what bothers me. It is one thing if you have Preliminary Data suggesting some major methodological approach won't work. That is, that your planned experiment cannot result in anything like interpretable data that bears on the ability to falsify the hypothesis. This I would agree is a serious problem for funding a grant.

But any decent research plan will have experiments that converge to provide different levels and aspects of testing for the hypothesis. It shouldn't rest on one single experiment or it is a prediction, not a real hypothesis. Some data may tend to support and some other data may tend to falsify the hypothesis. Generally speaking, in science you are not going to get really clean answers every time for every single experiment. If you do.....well, let's just say those Golden Scientist types have a disproportionate rate of being busted for faking data.

So.

If you have one little bit of Preliminary Data in your NIH Grant application that maybe, perhaps is tending to reject your hypothesis, why is this of any different value than if it had happened to support your hypothesis?

What influence should this have on whether it is a good idea to do the experiments to fully test the hypothesis that has been advanced?

Because that is what grant review should be deciding, correct? Whether it is a good idea to do the experiments. Not whether or not the outcome is likely to be A or B. Because we cannot predict that.

If we could, it wouldn't be science.

45 responses so far

Grantsmack: Overambitious

Aug 25 2015 Published by under Grant Review, NIH, NIH Careerism, NIH funding

If we are entering a period of enthusiasm for "person, not project" style review of NIH grants, then it is time to retire the criticism of "the research plan is overambitious".

Updated:
There was a comment on the Twitters to the effect that this Stock Critique of "overambitious" is a lazy dismissal of an application. This deserves some breakdown, because simply dismissing stock criticisms as "lazy" review fails to address the real problem at hand.

First, it is always better to think of Stock Critique statements as shorthand rather than as laziness.

Using the term "lazy" seems to imply that the applicant thinks his or her grant application deserves a full and meticulous point-by-point review no matter whether the reviewer is inclined to award it a clearly-triagable, a clearly-borderline or a clearly-fundable score. Not so.

The primary job of the NIH Grant panel reviewer is most emphatically not to help the PI to funding, nor to improve the science. The reviewer's job is to assist the Program staff of the I or C to which the application has been assigned in deciding whether or not to fund it. Consequently, if the reviewer is able to succinctly communicate the strengths and weaknesses of the application to the other reviewers, and eventually Program staff, this is efficiency, not laziness.

The applicant is not owed a meticulous review.

With this understood, we move on to my second point. The use of a Stock Critique is an efficient communicative tool when the majority of the review panel agrees that the substance underlying the criticism is valid. That is, that the notion of a grant application being overambitious is relevant and, most typically, a deficiency in the application. This is, to my understanding, a point of substantial agreement on NIH review panels.

Note: This is entirely orthogonal to whether or not "overambitious" is being applied fairly to a given application. So you need to be clear about what you see as the real problem at hand that needs to be addressed.

Is it the notion of over-ambition being any sort of demerit? Or is your complaint about the idea that your specific plan is in fact over-ambitious?

Or are you concerned that it is unfair if the exact same plan is considered "over-ambitious" for you and "amazingly comprehensive vertically ascending and exciting" when someone else's name is in the PI slot?

Relatedly, are you concerned that this Stock Critique is being applied unjustifiably to certain suspect classes of PI?

Personally, I think "over-ambitious" is a valid critique, given my pronounced affection for the NIH system as project-based, not person-based. In this I am less concerned about whether everything the applicant has poured into this application will actually get done. I trust PIs (and more importantly, I trust the contingencies at work upon a PI) of any stage/age to do interesting science and publish some results. If you like all of it, and would give a favorable score to a subset that does not trigger the Stock Critique, who cares that only a subset will be accomplished*?

The concerning issue is that a reviewer cannot easily tell what is going to get done. And, circling back to the project-based idea, if you cannot determine what will be done as a subset of the overambitious plan, you can't really determine what the project is about. And in my experience, for any given application, there are going to usually be parts that really enthuse you as a reviewer and parts that leave you cold.

So what does that mean in terms of my review being influenced by these considerations? Well, I suppose the more a plan creates an impression of priorities and choice points, the less concern I will have. And the more of the experiments that excite me, the less concern I will have: if only 50% of this is actually going to happen, the odds are good I'll like whatever subset gets done if I am fired up about 90% of what has been described.

*Now, what about those grants where the whole thing needs to be accomplished or the entire point is lost? Yes, I recognize those exist. Human patient studies where you need to get enough subjects in all the groups to have any shot at any result would be one example. If you just can't collect and run that many subjects within the scope of time/$$ requested, well.....sorry. But these are only a small subset of the applications that trigger the "overambitious" criticism.

42 responses so far

Repost: Don't tense up

Aug 07 2015 Published by under Careerism, Grant Review, Grantsmanship, NIH, NIH Careerism

I've been in need of this reminder myself in the past year or so. This originally went up on the blog 25 September, 2011.


If you've been going through a run of disappointing grant reviews punctuated by nasty Third Reviewer comments, you tend to tense up.

Your next proposals are stiff...and jam-packed with what is supposed to be ammunition to ward off the criticisms you've been receiving lately. Excessive citation of the lit to defend your hypotheses...and buffer concentrations. Review-paper-level exposition of your logical chain. Kitchen sink of preliminary data. Exhaustive detail of your alternate approaches.

The trouble is, then your grant is wall-to-wall text and nearly unreadable.

Also, all that nitpicky stuff? Sometimes it is just post hoc justification by reviewers who don't like the whole thing for reasons only tangentially related to the nits they are picking.

So your defensive crouch isn't actually helping. If you hook the reviewer hard with your big-picture stuff, they will often put up with a lot of seeming StockCritique bait.

25 responses so far

Thought of the Day

Jun 22 2015 Published by under BlogBlather, Grant Review, Grantsmanship

I cannot tell you how comforting it is to know that no matter the depths and pedantry of my grant geekery, there is always a certain person to be found digging away furiously below me.

9 responses so far

Older posts »