The good Comrade PhysioProf alerted me to a post on the NIAID blog which links their post on Sample R01 Applications and Summary Statements.
There are four real applications from real PIs posted for your education. Please note that
“The text of these applications is copyrighted. It may be used only for nonprofit educational purposes provided the document remains unchanged and the PI, the grantee organization, and NIAID are credited.”
I had the following response in a comment:
I am always uncertain of the value of posting sample grants that happened to score in the excellent range because it gives an inaccurate view of the process to newcomers. I would be highly interested in seeing you cherry pick some apps in the 18%ile range that had similar grantsmithing excellence (and I know for certain there are plenty) and show where they failed to make the cut and/or were so obviously worse than the examples you show here.
Not to run down these PIs, not at all. It is just that it is counterproductive to always insist that the only factor keeping excellent grants from a fundable score is grantsmithing.
And I do have this as a serious concern with this approach; this is not the first NIH IC to provide a sample application for newcomers. These NIAID ones all scored well, in the 2-7%ile range. This is the likely-fundable range of scores. My longer term readers will recall that it is my position that the grantsmithing that distinguishes a 4%ile from a 14%ile grant is…irrelevant. My bet is that they could have easily put up a few selected near-miss applications and similarly noted (the pdfs are annotated with comments) the excellent grantsmithing. Similarly, you can go through these awEsomeZ! applications and find sections that violate fairly basic grantwriting advice.
For example I just started glancing through Striepen’s Specific Aims (pdf).
Specific Aim 1: Dissect the mechanism of apicoplast protein import.
Specific Aim 2: Understand the function of the apicoplast ubiquitination pathway.
Specific Aim 3: Discover a comprehensive set of apicoplast proteins and characterize their function.
“Dissect”, I can almost live with. But “Understand the function” and “Discover a comprehensive set..”? Helllloooo. Fishing Expedition StockCritique off the starboard bow, Cap’n!
The “Innovation” section is a paragraph (full app pdf) and starts off:
We would like to argue that our project has been highly innovative and we expect it to continue to be innovative. Innovation in this project is evident in the topic of the research, the concepts and hypotheses to be tested, and the approaches to be used. The apicoplast as a research topic has produced a truly new way to think about Apicomplexa that now permeates our view of their metabolism, development and cell biology. Studying the apicoplast has brought together biologist focused on different organisms that previously had little contact. This cross-fertilization has let parasitologists to consider
“We would like to argue”? Are you kidding me with this passive voice nonsense? And five sentences in without a single specific, nonBSing bit of concrete innovation to latch on to?
The section ends with more passivity.
We feel that overall this investment has paid off (at times in unexpected ways) and that taking the risk to develop new approaches in the future will keep our experiments fresh and will allow us to ask deeper and deeper mechanistic questions.
Keep in mind that this one is a Year 6 competing continuation application. Now, I’m not saying that it is bad to try to cover up a meandering research program type of application as best you can. Yes we have a tension between the formal project-based approach of the NIH funding system and what is in many cases a de facto program-based funding approach. In this case it is obvious that the productivity in the prior funding interval (or the lab generally) has the reviewers on board to the tune of a 6%ile score. They like the research program, in other words.
But this is by no means a good “Innovation” section. It, in a word, sucks. From a generalized, grantsmithing-advice perspective I mean.
You don’t have to take my word for it, the first reviewer essentially said this in his/her critique. This person found the research to be innovative but said, in essence, that you can’t tell this from the Innovation section of the application. That’s what this bullet point means.
The technological aspects and biological insights that are innovative could have been better highlighted by the investigator.
To be clear, I’m not picking on this application specifically. My bet is I could find similar violations of standard grantsmithing advice in the other ones, and, in applications that scored outside of the money, features much like the ones lauded in the comments on these examples.
There are good features here but Dear Reader I beg you, don’t take these as some sort of GospelTruth template to your glorious funded future.
This Post Has 30 Comments
I don’t quite understand your objection to the verbs “understand” and “discover”.
Recent reading and advice I’ve gotten has said that one needs to keep specific aims relatively general (a bit of an oxymoron), and that they need to be motivated by your overall hypothesis. Say I have a very specific set of methods I plan on using to “understand” the function of the XYZ pathway. Why do you assume this is a fishing expedition? Would you like the verb “probe” better?
I have seen lots of advice on what verbs NOT to use when writing specific aims. I’d really like some advice on which verbs signal eXXcellence!!11!! Or at least are not criticized by peeps like you.
Also, for the life of me I cannot understand the reluctance of basic scientists to acknowledge that fishing expeditions often yield excellent research results, particularly in applied disciplines. I believe this hang-up is the main reason why more bio-X-engineers aren’t supported by the NIH.
Striepen’s competing renewal application is actually the most instructive of the four, as it shows what you can get away with on a renewal after a highly productive prior project period. The least informative–in my opinion–is the ESI A2 that got a 10 priority score. That perfect score had little to do with grantsmanship, and a fucketonne to do with “this is this poor fucke ESI’s last shot, so we better make sure she gets the fucken grant”.
It is not “basic scientists” who exhibit this reluctance. Rather, it is a pathological emergent dynamic of certain study sections.
Candid, I prefer “determine” as the Specific Aim verb.
As far as the fishing expedition StockCritique goes, I have little insight about how and why this became common. I am one who doesn’t agree this should be a rule-out, and if I were running the zoo every app *would* have ~one Aim’s worth of exploratory/hypothesis generating work (aka, fishing trips).
There is also a time and place to buck the system and get creative with your own apps. It is not good *general* advice in my view, though. Absent strong evidence a given study section is ok with fishing trips it is best to write a straight hypothesis-testing app. Yes, even for methods developing / tool making apps there is a way to write it as hypothesis testing.
Similar principle for the dismal Innovation section of this grant.
As (among other things) a parasitologist studying apicomplexan organisms, I find this grant is like Hoffman on bicycle day. The novelty factor is kind of “duh”- if you can’t see why, you don’t know how little is known about the Apicomplexa.
In my incredible ignorance, I kind of think the fishing trip stock critique is only valid when it concerns (and may perhaps have arisen from) the approach of using a fancy new omics technique without thoughtfully addressing how many hits you are interested in and what you are going to do with them once you’ve got them.
That said, I agree he doesn’t do the best job demonstrating novelty in the writing. I very much had this problem with comps- people kept giving me funded grants, and they kept having poorly articulated hypotheses and fishing trips.
The passive sentences are unusually awkward, and what is with this line: “…brought together biologist focused on different organisms…”?
It makes it sound like he was previously suffering from a non-integrated multiple personality disorder, and now all the different personalities are talking to each other about their favorite furries, except for the one writing about them all in the third person (ok, it’s a simple typo. but I think it’s funnier to read it my way).
I can see that the parts of the grant applications you criticise are just plain vague! I had always thought that one must be specific about what the researchers are hoping to do, accomplish, calculate, make or whatever- emphasising a concrete, hopefully measurable approach. I agree with you Drug Monkey that “fishing expeditions” need to be dressed up as hypothesis testing- if you don’t know what specific technique will be applied because you don’t know how fine- or coarse-grained your data is going to be, state the alternatives tied to the possible outcomes, in concrete terms. For instance if you don’t know the “neck of the woods” from which your data may emerge, eg. 2 events per month vs. 200, just say that the time interval for analysis will be by the month if in the range X to Y, and 6-monthly if it is A to B. Then at least the grant examiner knows you can cope with eventualities in a planned, carefully thought out manner.
In real life as I know it, colleagues would have shouted down that “Innovation” section and insisted on examples, even if this involved hedging terms a bit to stop competitors from stealing techniques we had developed. Really, grant applications are a lottery- most people who apply are excellent researchers who deserve their programs and jobs to continue, but we have a grants system because funding bodies don’t have infinite resources to share out.
It is a mistake to assume reviewers will know much at all about your pet organisms or about your own version of Bunny Hopperdom.
More hilariousness. They redacted the Research Support part of the biosketches. Anyone that is interested in reading these better know what RePORTER is all about, is all I’m saying.
Here’s the Striepen dude’s listing over the past couple of grant cycles
hmm, unfortunately that direct linky didn’t work.
Any experts (hi becca!) wanna go looking for overlap? With, for example, the “Genetic Dissection of Parasite Metabolism” one? oh hey, nice R56 handout when the original submission didn’t fare so well, dude! hmm, I’m starting to get with PP on this one- nice example of how it goes for the “haves” of the NIH funding world…
Oh, and Candid, I notice the Aims from the newbie Ratner guy use “determine” and “define”. Those I like. I know, I know, the difference between “dissect” and “define” is subtle but the latter just seems more specific to me.
I read a funded app today that used “interrogate”, as in “interrogate the function of…”
I liked that one. But perhaps not much different than “dissect”.
Are enhanced interrogation techniques permitted?, one wonders.
The next proposal I write, my specific aim will be to “waterboard the protein of interest until it proffers up its role in the specified pathway.”
“To waterboard … until it proffers….” I like it!
I wish you’d organize a Workshop on the subject…..hahahahahaaaaa
You can almost always learn more about how to do an experiment from a failed experiment than from a successful one. Or to paraphrase: all happy grants/PIs are alike; every unhappy grant/PI has lots to teach us.
I wager grant reviewers do not agree with you about the value of an unhappy prior grant interval.
I assume that most grant reviewers, at times, don’t make it for the A0 and have to go for an A1. And I bet that some have to go through the challenge of having to write an entirely new application because A0 + A1 did not go through. I mean, most reviewers, if not all, know what it’s like to go through an “unhappy prior grant interval”. I call this an ADVERSE EVENT. I agree with arrzey that adverse events are unexpected opportunities to learn and be creative to overcome hard times. But they are not spontaneously desired or desirable, no matter how much they teach me…….
Perhaps we are not clearly distinguishing between a grant and a grant *application* in this instance. I am actually fairly interested in the notion of a “failed” interval of support and what this means for future grant getting.
My opinion is that reviewers should assess the scientific merit of a grant in itself (impact/significance/PI’s scientific potential). Period. If an investigator has had an interval of no support, that is irrelevant to the science she/he is proposing to do at this point in time. CSR/NIH is committed to evaluating the science independently and regardless of the adverse events any investigator has gone through (i.e. loss of support for a period of time). In my view what is mandatory is: “grant applications receive fair, independent, expert, and timely reviews — free from inappropriate influences — so NIH can fund the most promising research”.
DM- I’m not sure how best to check for overlap, but there was something… slightly odd I noticed.
Since I am so inexpert at grants, I did not want to bring these things up, but two aspects of the administration side of the grant confused me.
1) I noticed that on the checky-listy thing for regulatory approvals, there is a ‘no’ to the question of vertebrate animal work. Yet the environment section explicitly outlines many animal resources (“The Coverdell Center also has an AAALAC accredited Rodent Vivarium (CRV) in its lowest floor…”). I realize boilerplate environment section stuff has to be fairly typical, but isn’t this a touch egregiously irrelevant? Does nobody even read it if you are from a MRU? Or am I missing something?
2) I’m pretty sure the budget approximation thing is basically par for the course, but can a $225k/year grant really cover all this? It’s 2 months + 2 months summer salary for the PI (figure ~17k + benefits for the summer [1/6th of his 106k yearly salary], and maybe… 8k for the other two months, assuming he is 50% research appointment? seems minimal); 21k + benefits for one grad student; 19k + benefits for another grad student; 32k + benefits for the postdoc; 20k + benefits for the tech working half time on this grant… even assuming a rate of like 14% for benefits (that’s what I cost, as a grad student), I think he’s down to like 92k/year to do the research. Which seems… possible. But challenging, if toxo is half as plastic-ware intensive as plasmodium.
(NB: the state of georgia has a nice thingybob for salaries of state employees; that’s where I got the figures, except for the postdoc, who isn’t in the system- for her I just used the university’s postdoc guidelines of ‘strongly recommended’ minimum… I wonder if she moved on, or if postdocs are in whacky limbo in georgia and thus not state employees or what is up with that: http://www.open.georgia.gov/sta/entryPoint.aud)
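For the curious, the back-of-the-envelope math in the comment above can be sketched as a quick script. All the salary figures and the 14% benefits rate are the commenter’s estimates (pulled from Georgia state salary data), not numbers from the application itself:

```python
# Rough check of the personnel arithmetic in the comment above.
# Every figure here is the commenter's estimate, not from the application.

def remaining_for_research(total_direct, salaries, benefits_rate=0.14):
    """Subtract salaries plus benefits from yearly direct costs."""
    personnel = sum(salaries) * (1 + benefits_rate)
    return total_direct - personnel

salaries = [
    17_000,  # PI, 2 summer months (~1/6 of an estimated $106k salary)
    8_000,   # PI, 2 academic-year months (commenter's guess)
    21_000,  # grad student 1
    19_000,  # grad student 2
    32_000,  # postdoc
    20_000,  # tech at half time on this grant
]

left_over = remaining_for_research(225_000, salaries)
print(round(left_over / 1000))  # ~92, i.e. roughly $92k/year left for research
```

Which lands on the same “like 92k/year to do the research” figure the commenter arrived at.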
Determine, understand, discover … what do y’all think about Demonstrate?
csb- I think demonstrate can work in some contexts. I’m not a huge fan because it sounds more like a canned undergraduate lab than actual science but I can see where it might work.
as to 1), yeah that sounds like cut/paste on the Resources/Facilities. I tend not to pay much attention to Resources/Facilities or Equipment, either as a grant writer or as a reviewer, unless there are some unusual demands or the project goes beyond the expected, basic schtuffe for the subfield. Perhaps for an early career PI I might look at it harder but that’s usually only if it seems like other reviewers will make a big issue out of it.
2)- nice work. very strong. do note that the part about 2 months (i.e., calendar months, the new way to state effort) is a bit unclear. it may be 2 months total; he may have a 9mo full salary appointment, for example. But yes indeedy, this seems like a boatload of staff being supported on one $225K / yr R01. Naturally when a guy like this has multiple R01/equivalents from NIH and god knows what other support, the personnel line is just grantsmithing. He can shift numbers and people around across projects, have a few NRSAs turn up, get a training grant slot or two, etc. Established investigator, no biggie. But if this were a newbie PI with no other visible means of support? that personnel list might draw a little bit of fire from reviewers. So once again, it is not clear that this is an app to hold up to the light as a *general* prescription for excellent grantsmithing.
PP is right. Another version of this kind of thing is “mechanistic good — descriptive bad”.
descriptive is bad….unless it is some sort of gene array, -omics, deep seq bullshit. Then it is the most amazing science in the entire world even though it isn’t really all that “descriptive” of anything either.
Holding my tongue, trying not to say anything….
Why hold your tongue? This is a blog!