Return of the A2 revision for NIH Grants?

Oct 16 2012 Published by under NIH Budgets and Economics, NIH funding, Peer Review

According to a blog entry at Nature the NIH is reconsidering their policy limiting applicants to a single revision of an unfunded grant proposal.

Good. They should do so. Unless, as I've said repeatedly, they can show that the increase in the percentage of grants funded on the first try is driven by genuine "first" tries, rather than by ideas that have been previously reviewed in a different guise.

46 responses so far

  • They should definitely keep things the way they are.

  • Dr Becca says:

    I like the idea for applications that, say, moved from a triaged A0 to a 25th %ile A1. Not sure I'd be into it if I had to review a grant I'd already canned twice.

  • Dave says:

    As a result of these considerations, we are urging you to return to the two revision system at least for the subset of applications that cross a certain threshold in scoring as A1s.

    This is from the original petition letter, and I wonder if the NIH will go this route. For example, if you are within x points of the payline, you get two more shots. If you are way out (i.e. triaged), then you only have one more go. Seems reasonable, but could also be fraught with problems.

  • As a result of these considerations, we are urging you to return to the two revision system at least for the subset of applications that cross a certain threshold in scoring as A1s.

    No. This is terrible, because it just reincarnates the horrible holding pattern.

  • drugmonkey says:

    On what basis do you assert the holding pattern has been meaningfully altered, PP?

  • FlyoverProf says:

    This might be a way of preparing for the possibility of sequestration, and the associated collapsing of the paylines.

    "This is a resubmission that scored at the 3rd percentile on the -A1 submission but was not funded ......"

  • On the basis of my extensive experience as an applicant and as a reviewer, I have observed that the holding pattern attitude is gone.

  • DrugMonkey says:

    The fact that you are a BSD of science now and no longer get put in the pattern is of no probative value. And unless you are magic at detecting recycled proposals, I am no more convinced by the reviewer experience.

  • GAATTC says:

    Just found out today that my latest R01 application was not discussed, so knowing I would have two more shots instead of one is appealing. However, once I get the summary statement back, it may be that I can improve the application as much as it can be improved for the next submission. So two attempts and no holding pattern may work out. At least the folks at NIH are thinking about something to help improve things...

  • Odyssey says:

    Unless, as I've said repeatedly, they can show that the increase in the percentage of grants funded on the first try is driven by genuine "first" tries, rather than by ideas that have been previously reviewed in a different guise.

    It's not clear to me that the current policy has been in effect long enough to be able to determine this.

  • Dave says:

    This might be a way of preparing for the possibility of sequestration, and the associated collapsing of the paylines.

    It is very unlikely the sequestration is going to happen but, even if it did, going back to a 3-strike system will not change the absolute number of new grants that are going to be funded.

  • Grumble says:

    GAATTC, I agree. I have a triaged grant with reviews that really aren't all that bad. I can fix it up and send it in, but resubmissions of triaged grants always start with a bit of a disadvantage. That's why it's really absurd that this is its last chance.

    I really don't think the elimination of A2 did anything positive at all.

  • A012 says:

    The A1-only rule does not make sense in my humble mind. As others have said, it is impossible to distinguish between 6% and 16% based on these reviews. I had an R01 as an ESI that got a 15% at the A0 and was not discussed at the A1, in spite of answering all questions - mainly because the reviewers did not believe what I referenced (thank god it made it anyway later as an A0). I had another one as a co-PI where we went from 30% to 8% and it was still not funded... So now the science has to be considerably changed even though it already was pretty good.
    A better way would maybe be to make sure that experts in the field are reviewing the grants, and not some idiots who have never been in the field they are judging.
    Just my 2c

  • Spiny Norman says:

    "It is very unlikely the sequestration is going to happen but, even if it did, going back to a 3-strike system will not change the absolute number of new grants that are going to be funded."

    My former senate staffer sibling who now works in the West Wing is not so sanguine, Dave.

  • kant says:

    Precisely

  • Genomic Repairman says:

    I agree with Odyssey: the tea leaves are too fresh to read with any accurate measure of whether the elimination of the A2 has abated the holding-pattern culture. It needs more time.

  • Dave says:

    My former senate staffer sibling who now works in the West Wing is not so sanguine, Dave.

    Guess I'm just hoping.

  • SewScientific says:

    I completely support bringing back the A2, given the dismal paylines. It seems like a ridiculous waste of time (which ultimately means a waste of taxpayer money) to revamp a grant to seem new when it is given a good score and simply cannot be funded due to historically terrible paylines. If the payline in each institute were closer to the 20th percentile, then I could see keeping the current two-chances-per-application policy.

  • drugmonkey says:

    Look, PP is right that the re-introduction of the A2 will have no practical effect. Where I differ with him is that he seems to think the traffic-pattern queuing has been reduced.

    I'd like to see the evidence.

    I'd like to see study sections focusing on the meat/bones of the application and not ticky-tack it for crap reasons. I see no evidence of that yet. Apparently PP does.

    My continued point is that when the NIH points to increased proportions of their funded grants without an amendment indicator (A1), this is highly flawed evidence. We cannot know how many of these are projects that have been reviewed in one guise or another on previous occasions. We cannot know how the various queuing influences might still be at play (for example, study sections seeing repeated applications from the same applicant and finally relenting and/or funding the *person* and the *program* rather than the specific project).

    There are alternative strategies available to fix the traffic pattern circling without bringing back the A2. Ones that I assert would lead to increased efficiency when it comes to investigators submitting endless numbers of applications.

    One such alternative would be to hold over off-payline, near-miss scores for funding in subsequent council rounds. Kind of like end-of-year pickups, enhanced. This might only be a short-term fix... or it might really address the churning problem. I'm not certain.

    I do know that nothing puts a damper on people shotgunning out the applications like a new R01 award though. The last time I had a grant picked up unexpectedly, it reduced my subsequent submissions by approximately 5-7 R01 applications. :-)

  • drugmonkey says:

    Followup question for the A2 fans....

    Where does it end, and by what justification do you draw your cutoff on the number of revisions? (http://projectreporter.nih.gov/project_info_details.cfm?aid=2182554&icde=6754780)

  • A012 says:

    To your follow-up question: why just one more? Here is my reasoning:
    If you are new to the game, you start with a somewhat good grant (not good enough in this environment); ideally it then gets reviewed and discussed, and it can improve. In a fair environment - without changing reviewers who find new flaws or ignore your introduction page - your grant gets better. In the current environment that won't cut it yet (if we still had a 20th-percentile funding rate then it might), but with the last tweaks in A2 you now make it over the wall - or not, if it IS crap, and then you'd better start anew. Now those PIs with good proposals but less experience in grant writing (or buddies on the study section) are shot down because, as has been pointed out, the difference between funding or not can be one dude on the panel marking a 3 instead of a 2.

  • Grumble says:

    "One such alternative would be to hold over off-payline, near-miss scores for funding in subsequent council rounds. Kind of like end-of-year pickups, enhanced. This might only be a short-term fix... or it might really address the churning problem. I'm not certain."

    I know for a fact that this actually happens. Someone I know got funded three whole council rounds after the original review round. The score was in the high teens. (And this wasn't an "end of year pick-up" because the start date wasn't just before the end of the FY.)

    "We cannot know how the various queuing influences might still be at play (for example, study sections seeing repeated applications from the same applicant and finally relenting and/or funding the *person* and the *program* rather than the specific project)."

    The solution to the problem is to actually fund persons and programs, not grant applications. In grant applications, you are forced to spell out in extravagant detail exactly what experiments you will do for the next 5 years, how you will do them, and how you will interpret them. You know full well that the predictions you make to fill the grant pages are completely inaccurate: you have no idea what the ideal experiment is going to be next year (when you finally get the money), much less in year 5. You know full well that new data is going to force you to shift the focus in (slightly or not-so-slightly) different directions.

    So: WHY are you evaluated on how well you can bullshit about the next five years? It's like being given $1,000,000 (plus indirects) to build a house based on how well you are able to color in a picture of a house.

    The standard answer to my WHY question is: "there isn't a better system." But I don't buy it. The better system is to evaluate scientists based on their records - to fund people and programs, rather than handing out prizes for the best fairy tale. Unlike the stock market, for scientists, past performance DOES predict future results. And no, this does not automatically mean it will be impossible for young scientists to break in: a portion of funding can be specifically allocated to them, based on their own records as students and post-docs (and perhaps also standard grant applications).

  • DrugMonkey says:

    One thing that needs reminding is that the system already works as a hybrid of program based and project based funding even if it is formally supposed to only be the latter.

    So it is a matter of moving the dial over a bit rather than putting radical change in place.

    How would you feel about more liberal use of the R37 mechanism to extend the non competing interval to 10 years?

  • Grumble says:

    Every bit helps, but I think it's going to take a wholesale change in NIH's attitude. The goal here is to face the reality that reduced funding results in more wheel-spinning by scientists: we write endless grant applications to get just one or two funded. This is a waste of time and extremely unproductive.

    To fix it, NIH needs to allocate a very large portion of its budget to track record-based grants. A shift of a few million dollars to expand funding under the R37 mechanism is a start, but unless funding levels begin to increase again, ultimately NIH needs to completely restructure the system so that the majority of scientists are supported based on their records, not just a tiny minority.

  • DrugMonkey says:

    How would you set the base target for the number of scientists to support as "programs", and how many non-program scientists to allow in on a rotating/occasional basis through project competition?

    What would be the expectation for the "program" scientists' tenure as protected labs? 10yrs? For life?

  • Dave says:

    To fix it, NIH needs to allocate a very large portion of its budget to track record-based grants.

    In general, I agree with your ideas on this, and I think some of the training grants essentially do this already, while they also consider future "potential". But I would have some concerns about how one's track record is evaluated, especially for younger researchers. It seems like this would further perpetuate the impact-factor game-playing when funds are really tight.

  • DrugMonkey says:

    And pedigree, Dave. Don't forget that part.

    Look, for those who like Grumble's proposals: have you never been on the outside knocking to get in the NIH funding door? Everything about sinecure funding is great if you assume you'd be one of the anointed few. It doesn't look so great if you assume, as I do, that you wouldn't be (or wouldn't have been) one of the easy picks for "program" funding.

  • Grumble says:

    "How would you set the base target for the number of scientists to support as "programs", and how many non-program scientists to allow in on a rotating/occasional basis through project competition?"

    Honestly, I don't know. If the NIH ever gets serious about a shift in this direction, they would hopefully base the answer to your question on demographic and economic data: how much does it cost to run a lab that meets the minimum productivity requirement? How many labs will meet the criteria and ask for funding? What is the past and expected future turnover rate (retiring/quitting PIs and new PIs)? Of course there is variability everywhere (some very productive labs don't cost very much, and vice versa; this varies by field and subfield; etc.), which makes these decisions more difficult (but not, I think, impossible).

    "What would be the expectation for the "program" scientists' tenure as protected labs? 10yrs? For life?"

    I wouldn't expect tenure or protection at all. The idea would be to look back 5 to 10 years to examine productivity. If it meets criterion, award $X. If it far exceeds criterion, award $2X or $3X. If it doesn't meet criterion, award $0.5X or, worst case, $0X.

    "But I would have some concerns about how one's track record is evaluated,"

    The details of evaluation are of course a major issue. Some PIs have 200 papers in the last 10 years, each of them LPUs. Some have 5 or 6, but they're all in glamour mags. One way to evaluate productivity is to have a fairly large group of fellow scientists in roughly similar fields review the applicant's record. Reviewers would be mandated to read the applicant's papers (or perhaps a selection chosen by the applicant) and a statement from the applicant regarding what she contributed to each paper. Then the reviewer would score the applicant based on the reviewer's perception of the applicant's impact on the field.

    Such a review process would probably be far less onerous than typical grant review, which means that more people than just the current 3 could be responsible for in-depth evaluation of each application. Having 8-10 people evaluate applications would probably render the review process far less susceptible to individual reviewers' biases, quirks and incompetences than it is now.

    For younger applicants, it would make sense to include letters from PIs and collaborators, just like the current system for F and K awards.

  • Grumble says:

    "Look, for those who like Grumble's proposals, have you never been on the outside knocking to get in the NIH funding door?"

    Grumble himself has spent a LOT of time knocking on NIH's door.

    Your mistake is in calling this a "sinecure". It's not. Once you get the money, you need to produce, or you're out on your ass. That's in a way no different from the current system - except that once you get the money, you can stop writing grants and instead do some actual science.

  • DrugMonkey says:

    Would having been viewed as unproductive then be counted as a demerit when trying to restart with another project? Or is it your intent that people be chucked out of the system forevermore after one negative evaluation?

    (because your budget cut idea will be a death spiral )

  • DrugMonkey says:

    I wouldn't expect tenure or protection at all. The idea would be to look back 5 to 10 years to examine productivity. If it meets criterion, award $X. If it far exceeds criterion, award $2X or $3X. If it doesn't meet criterion, award $0.5X or, worst case, $0X.

    I am not really seeing how your proposal differs from competing renewal....except you have a black box inserted where somehow the review is supposed to go back to the 80s when renewals were a 45% proposition. I guess this could be done just by creating higher renewal targets at the program level.

  • Grumble says:

    No, people shouldn't be chucked out of the system after one negative evaluation. But the next budget should be scaled according to past productivity. If your budget gets cut, you can't do the grand project you'd hoped to do - but at least you can do something, and hopefully convince reviewers in the next round that what you accomplished was productive given the budget you had.

    "I am not really seeing how your proposal differs from competing renewal....except you have a black box inserted where somehow the review is supposed to go back to the 80s when renewals were a 45% proposition."

    It differs GREATLY from the current competing renewal system: the applicant doesn't have to write 13 pages of bullshit describing what he is going to do, all the time knowing he isn't going to do much if any of it! Instead, the applicant can spend that time writing some really good review articles, which, when read by the reviewers along with his/her data papers, will convince the reviewers that the applicant is a provocative, insightful thinker who is making a real contribution to science. What a concept - rewarding scientific effort that actually advances the field, rather than handing out prizes for the best scientific wet dream, as in the current system.

    As for the 45% chance of renewal: again, you're thinking about this in terms of yes-you're-funded vs. no-you-aren't-funded. Why not fund everyone (or 75-90% of everyone), but decide how much money they get based on past impact on the field? Such a system could end up with a very similar distribution of grant wealth to what we have today: a few BSD labs get lots of money, there's a fat middle with enough to get by on, and some at the bottom struggle. Except, of course, there would be much less writing of grant-application-length fantasy novels.

  • odyssey says:

    the applicant doesn't have to write 13 pages of bullshit describing what he is going to do, all the time knowing he isn't going to do much if any of it!

    I'd be interested to know just how many people you think do this. I don't. My proposals are on what I think I'll be doing during the grant period. Of course science, at least worthwhile science, isn't completely predictable, so I'm well aware what I propose and what actually happens could be different. But to include in a proposal plans you have no intention of pursuing is... what's the word I'm looking for?... oh yeah, fraud.

    Instead, the applicant can spend that time writing some really good review articles, which, when read by the reviewers along with his/her data papers, will convince the reviewers that the applicant is a provocative, insightful thinker who is making a real contribution to science.

    I don't consider flooding the literature with reviews good science.

  • Grumble says:

    My grant proposals are also on what, to the best of my ability to predict, "I think I'll be doing," and I don't think anyone (who is honest) proposes to do what they have no intention of doing. My point is just that I know that "what I think I'll be doing" and what I actually end up doing are going to be different - and you just said you know this too. So, why should scientists be judged on how well they can propose things? All I'm saying is that we should be judged based on how well we do things.

    "Flooding the literature with reviews" isn't necessarily good science, but don't dismiss reviews so easily. Reviews are as essential to science as ideas. Influential ideas become influential not just because of experimental evidence, but because the idea-monger is able to lay out an engaging and convincing argument that summarizes the evidence not just from one experiment, but from across the literature. And that is what the best reviews do.

    What really bugs me is that because I spend so much time writing grants, I have precious little time to try to get my ideas across in reviews. That means my work has less impact than I would like - which does neither me nor the NIH nor the taxpaying public any good.

  • I find it difficult to imagine a less productive use of a scientist's time than "writing some really good review articles".

  • Grumble says:

    Really, CPP?

    Let's take one of the most influential ideas in the drug addiction literature: Berridge/Robinson's incentive sensitization theory. You might love the idea or hate it or don't care about it, but you can't deny that it's been extremely influential. It was proposed in a review. Was that a waste of Berridge and Robinson's time?

    Yes, there are a lot of crap reviews out there. But there are also plenty of exceptions.

  • drugmonkey says:

    but decide how much money they get based on past impact on the field?

    Because, among other things, it puts the feed-forward, rich-get-richer problem on steroids.

    Also it could, potentially, be a way to close off certain kinds of science that are expensive (like human subjects research) and proliferate other kinds that are cheap (chemistry). The degree to which Program would be forced to re-balance your system to keep this from getting out of hand would be the degree to which you protest your program has not "really" been followed.

    My proposals are on what I think I'll be doing during the grant period

    Exactly. Although there is a great deal of variability: in some subareas the reviewers expect greater adherence to the plan, and some just want to see a list of CNS papers and don't give a hoot what Aims have been put on paper.

    I find it difficult to imagine a less productive use of a scientist's time than "writing some really good review articles".

    I interpreted Grumble's remark as saying that effort which currently goes into writing the background/significance/innovation part of the grant proposal would be better exchanged for writing a review. In the past 25-page versions, s/he might have a point, since one already had a damn good start on a review article at that point....

    So, why should scientists be judged on how well they can propose things?

    On this we agree and I would be far happier to see the review of competing continuations focus more on the papers published and less on the specifics of the Aims previously proposed. This may, however, be a highly variable relationship across study sections at present so it is hard to say how wholesale change might be best accomplished.

    Influential ideas become influential not just because of experimental evidence, but because the idea-monger is able to lay out an engaging and convincing argument that summarizes the evidence not just from one experiment, but from across the literature.

    This bothers me. I really don't agree that scientists who are engaged in the subfield need to be led by the hand to synthesize the emerging data. I mean, sure, reviews are a convenient reminder of the scope of the evidence. And there may be a convenient, professionally constructed figure or two to put in a set of slides. But the idea that someone in the field only puts it all together for herself once you've written your review is nonsense.

    I have precious little time to try to get my ideas across in reviews

    Now this is just starting to sound egotistical and/or driven by an attempt to assert priority for ideas which any number of people are, and have been, discussing as the literature emerges.

  • drugmonkey says:

    You might love the idea or hate it or don't care about it, but you can't deny that it's been extremely influential.

    GrandeTheoryes of Science have the immense danger of devolving a subfield into arguing over how many angels can fit on the head of a pin. Of sucking up all the oxygen (and grant money) in a sort of internal, self-referential fight over these theories or models and the degree to which they generalize, predict, etc.

    When it may be the case that resources would be much better devoted to alternate approaches or just plain generating useful data without any regard for said GrandeTheorye.

  • Dave says:

    "I find it difficult to imagine a less productive use of a scientist's time than 'writing some really good review articles'."

    Couldn't agree more with that. Bloody hate review articles. Writing and editing them is a pain in the nuts and is only marginally worse than writing a book chapter.

    Why not fund everyone (or 75-90% of everyone), but decide how much money they get based on past impact on the field?

    Mittens Romney is not happy with this proposal at all. You are proposing a sort of middle-out approach to science funding which upsets the meritocracy that currently exists. A strong middle-class and all that caper. Apart from the obvious questions about how much this would actually cost, there will be significant opposition to such an effort. Plus, past impact is and will be almost impossible to evaluate objectively.

    I mean, I do like your idea and I know where you are coming from, but it's a tough one............

  • Grumble says:

    I'm really surprised by all the hating on review articles.

    First, as for "leading scientists by the hand" to certain conclusions: yes, that's exactly what many review articles do - and I don't have a problem with that, because reading such articles is a good way to keep up with fields that are not my own. I can think of any number of sub-genres of neuroscience that I'm quite interested in, but I just don't have time to read the lengthy, highly detailed and very frequent papers that come out on those topics. It's like drinking from a fire hose. So I read reviews.

    Second, not all reviews are simple summaries of the literature. The Robinson/Berridge review is just one example of a review that synthesizes the literature to come up with an entirely new idea. There are many others. Some of them lead to "GrandeTheoryes of Science", some just cause a minor paradigm shift within their fields.

    Third, what is soooo wrong with a "GrandeTheorye of Science"? Are we or are we not better off with the Grande Theory of Quantum Mechanics? Is it really too bad that quantum physics sucks all the grant dollars away from the poor classical physicists??? This is a really, REALLY bad argument against reviews: both beside the point and dead wrong.

  • drugmonkey says:

    How's the funding in physics these days compared with biomedical science?

  • rs says:

    Don't know, but I guess it is more sane compared to biomedical research.

  • Dave says:

    I'm really surprised by all the hating on review articles.

    CPP's point was that they are not the most productive thing for a PI to be doing. That I agree with. Are review articles important and necessary? Yes.

  • Dave says:

    http://www.nature.com/news/the-secrets-of-my-prizewinning-research-1.11606

    All these Nobel prize winners come out saying something very similar almost every fucking year. I think the GFP dudes did the experiments with no official NIH funding and the guy that actually discovered GFP applied to the NIH to extend his studies, got rejected, did not get tenure, and ended up working at a fucking car stealership shuttling customers to and from the shop. The system really worked there didn't it.

    The point is that despite all this, nothing changes.
