On coming up with multiple ideas for R01 proposals

A question to the blog raised the perennial concern that comes up every time I preach on about submitting a lot of proposals: how does one have enough ideas for that? My usual answer is a somewhat perplexed inability to understand how other scientists do not have more ideas in a given six-month interval than they could possibly complete in the next 20 years.

I reflected slightly more than usual today and thought of something.

There is one tendency of new grant writers that can be addressed here.

My experience is that early grant writers have a tendency to write a 10-year program of research into their initial R01s. It is perfectly understandable and I've done it myself. I probably still fall into this now and again. A Stock Critique of "too ambitious" is one clue that you may need to think about whether you are writing a 10-year research program rather than a 5-year, limited-dollar-figure research project that fits within a broader programmatic plan.

One of the key developments as a grant writer, IMNSHO, is to figure out how to write a streamlined, minimalist grant that really is focused on a coherent project.

When you are doing this properly, it leaves an amazing amount of additional room to write additional, highly-focused proposals on, roughly speaking, the same topic.

35 responses so far

  • Ola says:

    A to the fucking men.
    The danger I think younger grantspersons fall into is wanting to claim ideas for themselves (I say this as someone who did exactly that). They think that if they can just write all this stuff down, then it "belongs" to them and no one else can work on it. It only takes a few times being scooped to gain a deeper appreciation of who else is out there and to scale back on what your research program should really encompass.

  • meshugena313 says:

    Indeed. But even when I fell into the newbie trap and wrote my 20 years of work into an R01 proposal, I still had a lifetime of ideas left. It is truly bizarre that someone who has made it this far is short on ideas.

  • neuropolarbear says:

    I agree with the above. But also I think some people have tons of ideas and some people have only a few ideas. Some people are creative and other people are tinkerers.

  • SidVic says:

    Yeah, you're 100% correct. Your views on what it takes to succeed are, in my estimation, on target. Regarding coming up with 3-4 ideas for research: if you don't have that many ideas kicking around, I would submit that you might be in the wrong business. The real problem is having 3-4 unassailable, ironclad ideas with the attendant preliminary data and a pedigree in that area of work. A drug monkey just isn't going to get funded by the myocardial metabolism study section no matter what cool (crackpot) idea you present. At least in my experience, while in theory it should work (cross-pollination and all that crap), it doesn't.

    I agree with DM in the broad strokes. You must become very facile at taking the same preliminary data and using it with a new twist, to submit a new proposal. Of combining old grants in new ways, with new collaborators, to produce new plans.

    Problem is- I don't want to do it. A grant every cycle ... Cranking out mediocre papers to keep the numbers up ... Extensive strategic tailoring of the research to areas that are "hot and fundable". Crap, it shouldn't be this way. There should be time to reflect and actually do the research. Although, to be fair, I will admit to a lazy streak.

  • Noncoding Arenay says:

    "You must become very facile at taking the same preliminary data and using it with a new twist, to submit a new proposal."

    This is what I want to achieve at some point, hopefully in the near future. Many times when I look at preliminary data there's so much information in there that it would be almost criminal to just pursue one direction/idea with it.

  • Gumble says:

    "The real problem is having 3-4 unassailable, ironclad ideas with the attendant preliminary data and a pedigree in that area of work. "

    This. Ideas are plentiful, free and disposable. Good ideas are only slightly more rare and expensive. Good ideas that can be transformed into a grant? Your goddamn Mastercard is useless there.

    One part of the solution is to collaborate with others. Put your expertise together with someone else's, get some pilot data together, and there's one more grant you can write that you couldn't have on your own. The ideas that are in the grant don't actually matter, as long as they are defensible based on the literature, supported by preliminary data, tested by solid experiments using faddish techniques in a careful way, and likely to make reviewers say, "hm, interesting idea, I wonder if it's true."

  • @Mtomasson says:

    When I first read this quickly, I misunderstood your point. Not "have a shit ton of ideas, and you'll be fine," but rather: realize that if you do science well, a few very good ideas go a long way.

    Often, I agree, good advice is to focus the entire grant on what you first thought was Aim 1. That Aim 1 is actually three aims, and the other Aims deserve their own grants.

    Cheers.

  • kevin. says:

    I think putting it in terms of what projects and experiments the money will actually be spent on during the project period is very useful. When you have to propose specific experiments that take a certain amount of time, you tend to be more realistic in your R01 proposal.

    But what about these little starter faculty awards, 30-50K per year in directs for a couple/few years? It's not enough on its own to do anything, really. Do you explain how this funding will contribute to the larger, more expansive research program, probably supported by startup funds or (gasp/hopefully) an R01? Or do you only talk about the project in scope of the money being requested? Say I'm asking for four years of $60K: do you only put the project goals in terms of how much the money could actually achieve?

  • drugmonkey says:

    My limited experience with "starter faculty awards" and indeed more than one smaller research foundation focused narrowly on DiseaseCategory is that they are very much looking for how this will lead to future major funding from SomeoneElseButReallyNIH. So in those cases, heck yes one delineates what the money can actually pay for and then discusses how this will lead to strong NIH proposals.

  • Mikka says:

    +1 to SidVic.

    Follow-up question. If submitted to the same study section, how do reviewers receive seeing the same preliminary data supporting new avenues of research? Wouldn't they feel that you are throwing spaghetti at the wall to see what sticks (if not funded yet) or milking it excessively (if you already got an R01 on that prelim data)?

    And in the second case, would reviewers and/or POs complain about potential overlap, lowering the potential for funding?

  • MoBio says:

    As a wag in my lab frequently says: "Ideas are cheap--go into any bar and you can get a dozen for free"

  • drugmonkey says:

    Mikka- there is only one way to find out.

  • JD says:

    The problem is not ideas. It is preliminary data. As an asst prof. I can think of 100 R01 ideas. Probably about 25% of them are actually exciting enough to maybe pique the interest of 3 reviewers. But there's no way in hell I can generate enough prelim data for 25 competitive R01 grants. You need to convince the reviewers your ideas can get done... by you! That's the nearly impossible part.

    Starting out a lab in this funding environment means not many hands in the lab and very very few different projects going on at one time. I have enough data to put together maybe 2 distinct competitive R01s, and that's after 5 years on the job. Or I can write 25 R01s that have <1% chance of being funded, but what's the point? The idea that you can just "repackage" your 2 grantsworth of data into 5 simultaneous NIH applications is nonsense, especially if you find yourself in the same few study sections.

  • rxnm says:

    I've been told, and have noticed, that every time I think of what I think is a "grant idea," it turns out Aim 1 is a "grant idea" and Aim 2 and 3 are slightly worse grant ideas.

    However, there is a real problem for n00bs in that we are much more limited in what is viewed as an acceptable grant for us to write. I have some good grant ideas that I think would be strong grants for someone with a track record (and better projects for me), but as a new person you're fucked. New investigators seem to be jammed in this narrow crack where:

    1. You have to prove you can already do all the experiments, but it has to be something new. As a new lab, the time and personnel to generate sufficient preliminary data is scarce.
    2. It has to build on your previous expertise/work, but has to be totally independent from your previous PI.
    3. It has to be ambitious, but don't actually be ambitious because you're not "proven."

    I know walking these lines mostly requires the ability to puke out the expected, plausible bullshit using the artificial, arch, formulaic, and terrible structure and prose style that reviewers have come to expect (excuse me, "grantsmanship")--I mean, we wouldn't want to leave a crunchy piece of carrot in anyone's baby food. But it is also a vast target-rich environment for stock critiques that are proxies for "not this time, kid," and there's nothing you can do about that.

  • AcademicLurker says:

    It's definitely easier for established folks to put out multiple proposals with limited preliminary data. I know that my life became significantly easier when I could point to a long list of publications in order to make the point "Look, we can clearly do the experiments we're proposing to do," rather than having to include a pile of proof-of-principle data for every freaking thing we proposed.

  • drugmonkey says:

    I think people need to risk going with feasibility data and minimal direct data far more often, of course. Don't be paralyzed into a defensive crouch.

  • thorazine says:

    How do people regard including published work as preliminary data? I often include figures based on published work in proposals to my local (non-US) funding agencies. I've occasionally had reviewers try to ding me for this, but I always argue that this is idiocy, that one point of prelim data is to show that we have the tools and expertise to do the proposed experiments, and that our data don't go bad upon publication. So far the funding agencies have not penalized me for using our own published data in my applications. Would it be similar in an NIH context?

  • Dave says:

    I think people need to risk going with feasibility data and minimal direct data far more often, of course. Don't be paralyzed into a defensive crouch.

    I agree, but it's easier said than done for noobs, isn't it?

    For example, let's say you have a new transgenic mouse that you would like to study for your first R01. Most people I talk to say that you have to have a fair amount of actual phenotypic data of interest to stand a chance. But what if you only have strong in vitro data and only feasibility data for the animals (i.e., you can knock out the gene of interest in a tissue-specific way, and you have optimized all the surgeries and tests to study the phenotype of interest)? Do you just go for it and see what happens?

  • Joe says:

    Thorazine,
    As a reviewer, I don't care if the prelim data is published or not if it is building to a proposed set of experiments that are exciting and significant. However, one way I've seen this go wrong is when the applicant proposes a nice set of experiments with all the controls you would want and several different experimental trials, and then I discover that the applicant has already published the story using the one prelim data figure and none of the other controls or experimental samples. Am I really going to believe that they are going to try to publish the same story again with more variables?
    If it makes sense for getting the info across, it is nicer if published data are in the Significance section and unpublished data are in the Experimental Plan.

  • drugmonkey says:

    How do people regard including published work as preliminary data?

    I put published figures in if appropriate. Sure. I've even put in figures from publications from other authors. As recently as...my last submission. Picture, thousand words, etc.

    Do you just go for it and see what happens?

    Yes.

    the applicant has already published the story using the one prelim data figure and none of the other controls or experimental samples. Am I really going to believe that they are going to try to publish the same story again with more variables?

    Tricky, admittedly. We are in a situation that selects for conducting the Preliminary Data on the hottest, most critical aspect with the controls left off for the full-funding stage. Naturally, because you want bang for your buck with preliminary (unfunded) experiments. If it progresses along far enough, you even have a chance at a publication. so then where are you?

    I think the kindly-disposed reviewer chalks this up to process and argues about how it fits into the ongoing (flexible) program of research.

    The smart applicant tries to anticipate this scenario and writes the application accordingly. It is okay to say on a revision "we anticipate completing X, Y and Z that were in the original plan prior to funding, here's what we're going to do to follow up".

    see OP for comments about a streamlined, focused app.

  • MoBio says:

    @Drugmonkey:

    You said: "I've even put in figures from publications from other authors. As recently as...my last submission. Picture, thousand words, etc."

    I've seen this done but would not recommend it--my thoughts when reviewing this in a grant are:

    "Why don't they just cite the relevant literature--if it's crucial for the grant I can look it up"

    "Are they trying to fill up space with another person's work?"

    "This is really cool--oh drat this is not from the PI!"

    ....

  • Grumble says:

    rxnm: "the ability to puke out the expected, plausible bullshit using the artificial, arch, formulaic, and terrible structure and prose style that reviewers have come to expect (excuse me, "grantsmanship")"

    You mean, the "readable" prose style isn't good enough for you? Writing style, from a grantsmanship perspective, means exactly one thing: being very, very clear and very, very concise. Poor structure does not lend itself to either of those.

    "it is also a vast target-rich environment for stock critiques that are proxies for "not this time kid,""

    When I review a grant from a youngish investigator, I'm generally not thinking, "this kid needs to get in line." On the contrary, I usually start out rooting for him/her because chances are I've seen the person's work before in papers or conferences and it's all innovative and wonderful and better than the boring old crap that the old fogies fill their grants with year after year. But far more often than not, there is some big fail in the middle of a newbie's grant, such as proposing 25 years worth of experiments or not including evidence that some difficult new technique actually works in the PI's lab. You can call my reaction a "Stock Critique" all you want, but they are only stock because the flaws they describe are so common.

  • toto says:

    StockCritique, n.: A critique that has been applied to any non-singleton set of individuals that includes me.

  • e-rock says:

    The Grantwriters Workbook (for NIH R01s) says not to include others' data in your prelim data. As a NI/ESI, I'd feel too insecure to do that. It also goes against advice I've received from m'elders. If one has access to SS, perhaps this is a general question one could ask, "how does the SS respond if an applicant shows data from others' published work?" and gauge the answer (including non-verbal cues). It might be context specific.

    As far as ideas for multiple R01s.....this seems to be the problem of over-ambitiousness that creeps in. I have tried to write (basically) a grant app as if it were: "These are all the resources I would need and experiments that would be required for a CNS paper per year. Please give me 2.5 million dollars. Kthanks" It seems that I have to think more incremental than that.

    I think my instinct to do this comes from a few sources. One, I was on a training grant in grad school where the trainers had close ties with industry and for some reason, "incrementation is a barrier to innovation" was a SLOGAN of one of our retreats (this T is no longer active). So I was thinking "f this incrementation crap ... I'm gonna innovate!" and that's how I ended up with 25 years of experiments in Specific Aim 2. And the critique that "each aim could be an R01 in and of itself" in my first R01 Summary Statement. Add to that -- in our journal clubs in grad school, all we discussed were Cell papers. Seriously. I like the journal. I really do. But who can lead a project with 13 figures and an alphabet of panels as a shiny new Assistant Prof ... wtf good does this do us except show us what we cannot possibly accomplish on our own? ... unless, of course, we are given breathing room to spend 3 years on one pub instead of the requisite 3-4 per year. Which end up, of course, in "somewhat average" IF journals ... because we won't keep errjobs if we don't.

    I feel as though I have lifetime's worth of experiments in my brain. I sometimes wish I could ignore all my other responsibilities and write all the grant apps that I could think of for one SS and ask them to just "please pick one so I can get paid next year."

  • […] a recent comment on a post by @drugmonkeyblog, rxnm describes the difficulties of writing a R01 as an early stage […]

  • neuropolarbear says:

    I would not recommend writing an application with "artificial, arch, formulaic, and terrible" structure. That is sure to get you rejected! If someone has been telling you that is good grantsmanship, get a different mentor.

    Formulaic is the only one that might be good, but only because formulaic structure lets people ignore the structure and focus on the ideas. This is not a creative writing contest!

  • Jim Woodgett says:

    @Grumble, I think what rxnm hits on the head is that new applicants are often given completely opposite demands by reviewers when they get their grant reviews back. They are supposed to dance like Travolta and guard the door while writing prose as word-perfect as Graham Greene's. There can also be code in grant reviews, where not following a set of unwritten rules of the study section/panel can make you stand out. It's incredibly frustrating to see young investigators being asked to jump through hoops that they are somehow supposed to absorb through their skins. Biding your time (get in line) and other patronizing elements of reviews, such as "not explicit about start-up," are common practice on some panels.

    In Canada, we used to have a ridiculous limitation for new applicants that they could apply for a maximum of 3 years of funding (typically, the term is 5). This led to the first year of setting up and then having to apply mid-way through year 2 for a renewal. The renewal rates were abysmal but shot up when the 3 year limit was removed. This is an incredibly fragile period for investigators and when the panels are populated by more senior people who also feel threatened, there are often small but deterministic biases that work against the new people.

    Of course, some good reviewers take these aspects into account and are fair. But you should see some review feedback!

  • rxnm says:

    artificial, arch, etc:

    -I've seen a lot of successful grants people have shared with me and gotten good advice, I think, and there seem to be 3 main framing mechanisms: fill the gap, save the baby (metaphorically), and cure the thing. Rarely do any have anything to do with the actual point of the research from the PI's perspective, as far as I can ever tell. These kinds of introductory framing mechanisms are so conventional and worn as to be inherently pointless. Maybe we could just tick a box for which one we prefer to be: Healer, Innovator, or Solver.

    -Parallel use of semantically empty "action verb" versions of "do" to introduce aims/experiments. This is MBA grade awful.

    -Inventing one or two problems in advance and then solving them a sentence later. Riiighht.... THAT's what's going to go wrong. But at least now that you've written that sentence we know you can solve problems--oh wait, have you been a bench scientist for 12 years with a bunch of papers, and you convinced people to give you a TT job? Maybe you've solved a problem or dealt with an unexpected result before!

    -Structural nonsense: the standard R01 structure is not bad, but there is some crazy shit out there.

    -Expectation that non-hypothesis driven work be formulated as a set of hypotheses. It can be done, of course, but for fucks sake why? Barrier to clarity.

    I am all for concision and clarity; all of the above work against them. As a reviewer, I'd want one page each: 1. Background - state of field, no invented crises. 2. Previous or preliminary results. 3. Experimental approach and methods, no details. 4. Budget. 5. CV: Positions, publication list until you run out of space. No grant should be over 5 pages. I can tell you the 2-page outline of a collab grant I just finished is 10x easier to read and understand than the 20 pages of fluff about how both PIs are emerging leaders and how we'll change the future and shit like that.

    As for "get in line," the study section I have experience with practically has a stated queuing policy for ESIs. Makes those first couple submissions really feel like time well spent.

    We should have to justify why we want money and show that we used it to produce papers, but I don't think being good at writing grants has anything to do with being good at scientific research or running a lab. And, like going to the dentist or meetings, I don't really mind doing it, and I think I'm ok enough at it to get by. I just think it's tedious and phony, and I wish I didn't have to spend so much time on it.

  • Grumble says:

    rxnm, I agree 100% with every single thing you just said.

    But you also need to look at it from the point of view of the reviewer, who is given the near-impossible task of picking the one out of 10 applications that is going to get funded. I totally disagree with the "get in line" sentiment and I can assure you it's not practiced in every study section - however, given the ridiculously low funding rates, it's easy to see how that mindset could develop among reviewers.

    It's probably impossible to eliminate all Stock Critique bait from an application. But you need to come close, because if you don't, even a reviewer who likes everything else about your application is going to say to herself, "there's no way I can defend this in front of the committee because the other reviewers are going to bring up all these annoying flaws." So she won't even bother trying. She'll hold her fire and go all out to promote/defend the one application that hits on most cylinders and proposes something she's really excited about.

    Finally, it's good that you've "seen a lot of successful grants people have shared with" you, but here's a piece of advice that every new PI should follow: get more than one experienced grant writer to read your grant well before you even think about submitting it, and ask them to be as critical as they can. Here's a true story: a young colleague sent me a grant that was very exciting, but had some notable flaws that would totally have attracted Ye Olde Stockke Critiques. I told the PI what to do to improve it and it got reviewed by the same study section that reviewed one of my grants. The grant I helped with got a phenomenal score. Mine was triaged.

    The moral of the story? Some of the most successful grants are those that have all the excitement of a young investigator with a lot of promise proposing to use the innovative techniques s/he picked up during training, demonstrating obvious creativity - but, critically, avoiding basic n00b mistakes.

    (By the way, I'd be happy to provide the same service for your grant, for the nominal fee of $10,000.)

  • Joe says:

    rxnm,
    "-Expectation that non-hypothesis driven work be formulated as a set of hypotheses."

    I don't expect every part of a plan to be stated as a hypothesis. However, on the SA page there needs to be some overall statement of purpose that the reviewer can latch onto. You can hammer your plan into a single, contrived hypothesis, or you can just state your purpose, but don't expect that a paragraph on the topic will suffice. There needs to be a clear statement that one can easily find when they go back through the stack of proposals a second time.

  • rxnm says:

    Thanks for the perspective, Grumby.

  • […] comment on a recent post from Grumble is a bit of key advice for those seeking funding from the […]

  • […] analyses, and figuring out how to make the proposal better next time around. If you've got at least two proposals in you then you get them working like a pair card-sharks ruffing in a spades game. One […]
