The R01 Doesn't Even Pay for Revisions

Sep 11 2018 Published by DrugMonkey under Academics, Careerism, NIH, NIH Careerism, NIH funding

Hard-charging early career Glam neuroscientist Kay Tye had an interesting claim on the twitters recently.

The message she was replying to indicated that a recent request for manuscript revisions was going to amount to $1,000, making Kay's costs anywhere from $100,000 to $10,000,000. Big range. Luckily she got more specific.

One Million Dollars.

For manuscript revisions.

Let us recap.

The bog standard NIH "major award" is the R01, offered most generically in the 5-year, $250,000 direct cost per year version. $1,250,000 for a five year major (whoa, congrats dude, you got an R01! You have it made!) award.

Dr. Tye has just informed us that it is routine for reviewers to ask for manuscript (one. single. manuscript.) revisions that amount to $1,000,000 in cost.

Ex-NIGMS Director Jeremy Berg cheer-led (and possibly initiated) a series of NIH analyses and data dumps showing that something on the order of 7 (+/- 2) published papers were expected from each R01 award's full interval of funding. This launched a thousand ships of opinionating on "efficiency" of NIH grant award and how it proves that one grant for everyone is the best use of NIH money. It isn't.

I have frequently hit the productivity zone identified in NIGMS data...and had my competing renewals criticized severely for lack of productivity. I have tripled this on at least one interval of R01 funding and received essentially no extra kudos for good productivity. I would be highly curious to hear from anyone who has had a 5 year interval of R01 support described as even reasonably productive with one paper published.

Because even if Dr. Tye is describing a situation in which you barely invest in the original submission (doubtful), it has to be at least $250,000, right? That plus $1,000,000 in revisions and you end up with at best 1 paper per interval of R01 funding. And it takes you five years to do it.
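To make the back-of-envelope explicit, here is a minimal sketch of that arithmetic in Python. The dollar figures are the ones quoted above; the $250,000 floor for the original submission is my assumption, and this is illustration rather than real accounting.

```python
# Back-of-envelope: one Glam paper vs. one full R01 interval.
# Dollar figures are the ones quoted in the post; purely illustrative.
r01_direct_per_year = 250_000      # bog standard modular R01, direct costs
r01_years = 5
r01_total = r01_direct_per_year * r01_years        # $1,250,000

original_submission = 250_000      # assumed floor for the initial study
revision_demands = 1_000_000       # the reported cost of requested revisions
one_glam_paper = original_submission + revision_demands

print(f"Full R01 interval: ${r01_total:,}")
print(f"One Glam paper:    ${one_glam_paper:,}")
print(f"Papers per R01:    {r01_total / one_glam_paper:.1f}")
# -> 1.0 paper per five-year award, vs the ~7 the NIGMS data lead you to expect.
```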

The Office of Extramural Research showed that the vast majority of NIH-funded PIs hold 1 (>70%) or at most 2 (cumulative >90%) major awards at a time.

NIGMS (and some of my fellow NIH-watchers) have been exceptionally dishonest about interpreting the efficiency data they produce and slippery as otters about resulting policy on per-PI dollar limitations. Nevertheless, one interpretation of their data is that $750,000 in direct costs per year is maximally efficient. Merely mentioning that an honest interpretation of their data ends up here (and reminding that the NIGMS policy for greybeard insiders was in fact about $750,000 per year) usually results in the sound of sharpening stone on steel farm implements and the smell of burning pitch.

Even that level of grant largesse ("largesse") does not pay for the single manuscript revisions that Professor Tye describes within a single year.

I have zero reason to doubt Professor Tye's characterization, I will note. I am familiar with how Glam labs operate. I am familiar with the circle jerk of escalating high-cost "necessary" experimental demands they gratify each other with in manuscript review. I am familiar with the way extremely well funded labs use this bullshit as a gatekeeping function to eliminate the intellectual competition. I am perhaps overly familiar with Glam science labs in which postdocs blowing $40,000 on single fucked up experiments (because they don't bother to think things through, are sloppy or are plain wasteful) is entirely routine.

The R01 does not pay for itself. It does not pay for the expected productivity necessary to look merely minimally productive, particularly when "high impact publications" are the standard.

But even that isn't the point.

We have this exact same problem, albeit at less cost, all down the biomedical NIH-funded research ranks.

I have noted more than once on this blog that I experience a complete disconnect between what is demanded in peer review of manuscripts at a very pedestrian level of journal, the costs involved and the way R01s that pay for those experiments are perceived come time for competitive renewal. Actually, we can generalize this to any new grant as well, because very often grant reviewers are looking at the productivity on entirely unrelated awards to determine the PI's fitness for the next proposal. There is a growing disconnect, I claim, between what is proposed in the average R01 these days and what it can actually pay to accomplish.

And this situation is being created by the exact same super-group of peers. The people who review my grants also review my papers. And each others'. And I review their grants and their manuscripts.

And we are being ridiculous.

We need to restore normalcy and decency in the conduct of this profession. We need to hold the NIH accountable for its fantasy policy that has reduced the spending capability of the normal average grant award to half of what it was a mere twenty years ago. And for policies that seek to limit productive labs so that we can have more and more funded labs who are crippled in what they can accomplish.

We need to hold each other accountable for fantasy thinking about how much science costs. R01 review should return to the days when "overambitious" meant something and was used to keep proposed scope of work minimally related to the necessary costs and the available funds. And we need to stop demanding an amount of work in each and every manuscript that is incompatible with the way the resulting productivity will be viewed in subsequent grant review.

We cannot do anything about the Glam folks, they are lost to all decency. But we can save the core of the NIH-funded biomedical research enterprise.

If you will only join me in a retreat from the abyss.

31 responses so far

  • MoBio says:

    FWIW I've rarely seen a grant being lauded for exceptional productivity in the prior 5 years. I have piped in on occasion to alert the others that there was, indeed, 'exceptional' productivity.

    Lack of productivity certainly will be a flaw, but there is no certainty that being productive will be a significant factor at the review level (certainly Program Officers are aware of this, though).

  • drugmonkey says:

    I think it is time for clear guidance from NIH on this issue and some enforcement by way of mutual reinforcement of the review culture.

    NIGMS data are a decent place to start, but I'm always happy to see NIH expand that on a per-IC or per-study-section basis if necessary. X number of pubs per Y direct costs per year. Use the interquartile range, refer to it frequently. "This study section has, in the past 5 years, given fundable scores to competing renewals that average Z publications, and anything lower than N pubs has not been renewed." etc.
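    Something like the following toy sketch is all I mean (every number here is invented, purely to show the shape of the benchmark):

    ```python
    # Toy benchmark: pubs per modular ($250k direct) budget-year across the
    # competing renewals one study section has seen. All numbers invented.
    import statistics

    # (publications produced, total direct costs in $) per renewal
    renewals = [(7, 1_250_000), (12, 1_250_000), (4, 1_250_000),
                (9, 2_500_000), (15, 1_875_000), (6, 1_250_000)]

    rates = sorted(pubs / (direct / 250_000) for pubs, direct in renewals)
    q1, median, q3 = statistics.quantiles(rates, n=4)
    print(f"pubs per budget-year: median {median:.2f}, IQR {q1:.2f}-{q3:.2f}")
    ```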

    LOL. as if.....

  • Ola says:

    NIH RePORTER shows Tye had at least $2.4m Direct Costs as PI for 2018. Her career funding total is >$6.3m DC since becoming independent in 2012. That's over 25 modular annual budgets ($250k ea.) or around 6 whole regular R01s.

    Using Berg's metric of 7 papers per grant (or 1.75 papers per $250k modular annual budget), Tye should have ~44 papers originating from this funding. PubMed shows a total of 47 career papers, with around 20 of these as last author, not counting reviews and editorials since 2012. In other words, 0.8 papers per $250k modular annual budget.

    For comparison my own rate is ~2.8 papers per $250k modular annual budget sustained over the past 15 years. And you're damn right this number is going in my next departmental performance review!
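    For anyone who wants to redo the arithmetic, here's a minimal sketch with the figures quoted above. I'm assuming Berg's 7 papers per grant spread over a 4-year modular award, which is where the 1.75 figure comes from.

    ```python
    # Re-running the back-of-envelope with the figures quoted above.
    total_direct = 6_300_000               # career DC since 2012, per RePORTER
    budget_years = total_direct / 250_000  # ~25.2 modular budget-years

    berg_rate = 7 / 4                      # 7 papers/grant over a 4-yr modular award
    print(f"Expected at Berg's rate: {budget_years * berg_rate:.0f} papers")  # ~44

    last_author = 20                       # last-author papers since 2012
    print(f"Observed: {last_author / budget_years:.1f} papers/budget-year")   # ~0.8
    ```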

  • drugmonkey says:

    Ola, you are identifying where I get extra mad at Berg and other fans of this supposed efficiency analysis because they make me, of all people, have to defend Glam. There is just no comparing apples to oranges on paper count when you get into the heady atmosphere of a Glam lab. Perhaps lots of projects could potentially find a diamond in the rough now and again. But these day-in-day-out operations with a goal of "publish in Glam" first, and anything to do with specific science questions second, COST MONEY. And a lot of it. For relatively few citable works because six papers are buried in the supplement. And Glam labs aren't only funding this stuff on NIH RPG-equivalent direct costs. There are T32 and F32 and HHMI and endowed chairs and foundation grants and god knows what other resources being poured in as well.

  • mH says:

    Is there even a pretense of honesty in how PIs with funding from multiple sources allocate credit for pubs to grants when demonstrating productivity?

  • A Salty Scientist says:

    There is just no comparing apples to oranges on paper count when you get into the heady atmosphere of a Glam lab.

    I know that you've brought up the negative consequences of Glam too many times to count. Should reviewers be critiquing productivity per grant (or funding level), and if so, would citations per grant be a better metric? And if this is not something reviewers should consider, is this something the Program should?

  • drugmonkey says:

    Is there even a pretense of honesty in how PIs with funding from multiple sources allocate credit for pubs to grants when demonstrating productivity?

    should there be? I mean if the erosion in purchasing power of one grant means the PI has to have two grants now for the same purpose, what's the difference?

    say, remember in the old days when Acknowledgements would just thank "NIH for funding" without specifying the grant award number?

  • mH says:

    Of course. If reviewers are assessing productivity relative to resources and I produce a list of 25 papers "resulting from" my R01 without acknowledging that my HHMI award paid for half the reagents, an endowed chair (or hard salary) paid me, glam fellowships paid my postdocs, and a program or training grant paid my students, it's a lie. Isn't this your whole beef with universities?

    The only thing that makes sense is all pubs / all support.

  • A Salty Scientist says:

    If reviewers are assessing productivity relative to resources...

    Or NIH could just chuck the Investigator criterion and have reviewers judge the rest of the grant at hand like NSF does (where Investigator is largely +/- are they qualified to do the work). Of course, that would require walking back from the recent people-not-projects approach.

  • drugmonkey says:

    it's a lie.

    Absolutely.

    Isn't this your whole beef with universities?

    Um, no? But yes, refusing to account for all the costs that go into a product attributed to NIH support and using that flawed analysis to support policy is a pet peeve of mine.

    The only thing that makes sense is all pubs / all support.

    Agreed.

    where Investigator is largely +/- are they qualified to do the work

    This is a potentially confusing conflation of Investigator criterion with the assessment of productivity. Or maybe I'm not being clear about the use of productivity in the specific case of competing renewals (where a list of pubs produced is included and therefore becomes highly focal) vs generally when the PI is assessed?

  • drugmonkey says:

    would citations per grant be a better metric?

    I don't see how it could be. Citations depend on subfield size and are also related to Glam and PI rep. If we were awarding NIH funds on citations, we'd all be working on cancer in the end.

  • Neil Harrison says:

    There is one very specific thing I would like to add to this discussion. I am a bit old school, but let me know what you think of my argument. Productivity should be about ACTUALLY ANSWERING THE QUESTION THAT YOU SET OUT TO SOLVE. Hardly anyone does this.

    We always used to say "the Dean can't read, but he can count" to explain promotions in the universities. What we meant was because the Dean is very busy, primarily doing Evil, he/she appoints a legion of Evil Minions to the committees, who can't think critically about science.

    This approach then crept into study sections. First, people just counted papers. Later on (and I was guilty of this as a younger reviewer), we counted papers in Glam journals as being more important than, say, the house journal of a sub-field, and we gave the Glams extra weighting, so that was good for Kay Tye and her ilk (I am, I should say, a great admirer of her work!).

    So it was about Impact. Unfortunately this got a bit out of hand over the years to the point where people (especially Glam Labs) could propose anything in the grant, then do more or less whatever they felt like with the latest Trendy Technical Toy, publish in a Hot Journal and get the grant renewed - independent of whether they solved or even addressed the question.

    Three separate times in my career I have proposed a grant that was funded, posing a very specific but quite difficult question, and then we were lucky enough to be able to actually solve it, and write it up in ONE or at most TWO papers that were very comprehensive and (I think) elegant. Of course, these have become highly cited, even though they were in The Journal of Neuroscience and not in a Super Hot Journal. Renewals of the grant/s were unsuccessful because of "lack of productivity", and so I simply moved on, had another idea and repeated the process. I didn't emote about it. You just have to accept it and do something else.

    I think you can see the absurdity of this situation though and the problems it has created. Framing a distinct hypothesis and then actually testing it has become a lost art in the days of "omics", "opto" and sundry other fads. Hypercompetition and over-complexity are rampant, especially within the neurosciences, and as a result many technologically driven scientists really don't know their arse from their elbow when it comes to actual problem solving skills.

    What I propose is that study sections need to engage their brains to evaluate productivity. The frontal parts, I mean, not the lizard bits that you generally need to keep a lab running and to sequester or steal resources from others. The question has to be: "did they solve the problem?" and "was it actually important and interesting?" and "is it being cited already?" (this is where the modern metrics come in). A certain amount of experience and mature judgement goes into this process but it beats counting.

    I am happy to discuss this with anyone. No need for anonymity here. I am a complete nobody of course - but many of my papers have been cited hundreds or thousands of times, which means that what I have written was important to someone, independent of how many tweets or likes I don't get. I do stand by my opinions and I am happy to defend this position. It may be Old School, but it is also fundamentally the most sound way to evaluate productivity.

  • A Salty Scientist says:

    This is a potentially confusing conflation of Investigator criterion with the assessment of productivity. Or maybe I'm not being clear about the use of productivity in the specific case of competing renewals (where a list of pubs produced is included and therefore becomes highly focal) vs generally when the PI is assessed?

    No, this is just me being a junior noob who has not yet submitted a renewal or served on an NIH panel.

  • drugmonkey says:

    I didn't emote about it. ...No need for anonymity here.....independent of how many tweets or likes I don't get.

    Are those supposed to be insults, Neil?

    Productivity should be about ACTUALLY ANSWERING THE QUESTION THAT YOU SET OUT TO SOLVE. Hardly anyone does this.

    Yes and no. I think there are indeed lots of people who try to fit setting out to solve a question that interests them into the realities of modern grant funded science careers. With varying results.

    And IME on grant review, this aspect of productivity does indeed get mentioned and I've taken up this issue in prior posts.

    Your Grant in Review: Competing Continuation, aka Renewal, Apps
    and
    Your Grant in Review: Productivity on Prior Awards

    I think I have seen just about every way you can imagine productivity used to assess a continuation/renewal grant. Naturally we are all very frustrated when the aspects on which we have done well are not sufficient to carry the day. Naturally we are all very frustrated when those irrelevant metrics favored by those idiot bean counters appear to cost us and benefit those other, clearly inferior applications.

    it is also fundamentally the most sound way to evaluate productivity.

    Wrong. Citations are a function of the size of the population of researchers working in a given area of research, which follows all sorts of trendiness at times. Some of this is the sort of technological bling you seem to abhor. Citations can also be about fame and power more than real advance. There are issues in my field that are totally important but are seemingly of current interest to only a single lab or two. So in point of fact they might be expected to receive few citations but each paper is a damn gem of advance....because nobody else is doing it. Conversely, there are papers in well populated areas that get plenty of citations due to the mass of jack russells humping the same leg...but ultimately their advance is incremental and entirely replaceable.

    There is no such thing as "the most sound way" to evaluate productivity.

  • AH! says:

    A Nature paper revision doesn't cost the same as a J Neurophysiology revision, and it doesn't count the same in perceived productivity.

    Published JIF points per modular budget year is a more realistic metric of perceived productivity than papers per budget year.

    Perceived.
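    A minimal sketch of that metric, for concreteness (the impact factor values are rough placeholders, not real numbers):

    ```python
    # "Perceived productivity" as published JIF points per modular budget-year.
    # JIF values below are placeholders for illustration only.
    papers = [("Nature", 40.0), ("J Neurophysiol", 2.5), ("J Neurosci", 6.0)]

    direct_costs = 1_250_000                # one 5-year modular R01
    budget_years = direct_costs / 250_000   # 5 budget-years

    jif_points = sum(jif for _, jif in papers)
    print(f"{jif_points / budget_years:.1f} JIF points per budget-year")
    # The single Nature paper carries ~80% of the "perceived" score.
    ```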

  • Neil Harrison says:

    I never insult anyone, although I am opinionated and straightforward in expressing my thoughts, so people do take offense at times. I am sure you can relate 🙂

    Your excellent point about citations and size/trendiness of the field is a fair one, but you miss the fact that within the context of a study section that represents a sub-field, the playing field is usually level, so it works.

    Your point about "niche" papers in areas that are sparsely populated is a very good one. We can all think of a few of those, that are "sleepers" but eventually they are massively cited.

    Saying flat out that I am "wrong" is a bit emotional, isn't it? My idea is quite reasonable. I do find it is a bit tiresome to be always shouted down by Prof., Dr., Mr., Ms. or Mrs. Shouty-Person, but I have had a lifetime of it, so one gets used to it, although it is why I avoid media.

    My proposal is largely the correct way to think about this issue, but there are always some additional criteria to consider, as you correctly point out. Ultimately reviewers have to gauge "impact" in the same way that the famous British judge once defined Obscenity. "I don't know how to define it, but I know it when I see it".

    😉

    Thanks for letting me contribute to this in a small way. I am going to vanish again now.

  • A Salty Scientist says:

    Conversely, there are papers in well populated areas that get plenty of citations due to the mass of jack russells humping the same leg...but ultimately their advance is incremental and entirely replaceable.

    Are you back to bashing glam again?

    Seriously though, it sounds like you are suggesting that reviewers follow a Potter Stewart test ("I know it when I see it") on what constitutes adequate productivity. Maybe a more charitable way to put it is 3 blind persons measuring an elephant. I fear that subjective criteria favor the "haves" and invite implicit biases against the "have nots."

  • Anonymous says:

    Potter Stewart! U.S. Supreme Court. Thanks for the correction....

    N.H.

  • drugmonkey says:

    but you miss the fact that within the context of a study section that represents a sub-field the playing field is usually level there

    not really. grants that use humans, NHPs, rats and mice are all reviewed together in many of the study sections on which I have participated and the citations differ tremendously on this alone. rodent researchers tend to ignore human and NHP data that are inconvenient to their attempts to claim their models are all that is needed. NHP researchers have their own little weirdnesses, not least of which is a chip on the shoulder about the aforementioned ignoring by rodent folks. human research avoids citing animal research where possible.

    Saying flat out that I am "wrong" is a bit emotional, isn't it?

    No.

    My idea is quite reasonable. I do find it is a bit tiresome to be always shouted down

    I said why I thought you were wrong and you agreed with me, at least in part. And yet you still feel compelled to take this de-legitimization strategy of claiming I am being emotional or shouting you down. You should probably think about why you take this approach to disagreement.

    the famous British judge
    Potter Stewart was a famous SCOTUS Justice.

    Thanks for letting me contribute to this in a small way.
    The true strength of this blog is always in the comments. Thanks for stopping by.

    Are you back to bashing glam again?

    That implies that I stopped so...? but no, the tendency for scientists to crowd around the same problem using the same techniques and approaches is not unique to Glam.

    Seriously though, it sounds like you are suggesting that reviewers follow a Potter Stewart test ("I know it when I see it") on what constitutes adequate productivity.

    I think we can all see where this leads to bias and subjectivity and resentment. Oh right. That IS what we have when it comes to comments on productivity in grant review.

    I fear that subjective criteria favor the "haves" and invite implicit biases against the "have nots."

    The self-reinforcing nature of what constitutes "productivity" is a danger and this is, imo, an argument for not defining only one single measure of productivity as the be-all. I would, however, like more explicit definition of the different aspects, if you see what I mean.

    I'm okay with people arguing over whether one Science paper equals X J Neuro papers. I'm okay with people arguing over whether the scientific advance in one paper from Professor Markov equals the six papers from Professor Li. And I'm okay with people arguing that the papers cover every dang thing proposed in the original application so it is more awesome than the one from the PI who only completed about half. But I'm not okay with panel review that says five papers from this grant are great productivity and 10 papers of equivalent scope, importance and impact are terrible productivity for this other grant. Consistency equals fairness. And for that, I believe, we need to be talking the same language and from the same space of consideration.

  • Ola says:

    Regardless of JIF or whatever other glam metric you want to use, can we all just agree that >$6m direct costs in 6 years to a junior investigator is too fucking much? An average PI with a continuously funded modular R01 would survive for 25 years on that money! FFS NIH, spread the love!

  • Grumpy says:

    Wait, 1 million dollars to do a revision?! Wtf is wrong with her field?!

    In my mind:
    If a paper is that deeply flawed then that is embarrassing. Hey, we all make mistakes, but definitely time to scrap it.

    If the 1M in requests is superfluous crap that some reviewer wants then tell them no.

    If editor is insisting, take it to the next glamor mag down the street.

    This level of kowtowing to referees/editors is absurd...I am not in a position to pass judgment on a $2.4M dc/yr scientist, but at some point we need to have the integrity to say no and use that 1M for the trainees' next paper(s).

  • drugmonkey says:

    can we all just agree that >$6m direct costs in 6 years to a junior investigator is too fucking much?

    No. Has it been used in a way that most people acknowledge is hot stuff science? If so, good for the junior investigator. Such a person is certainly no less deserving than anyone else.

  • Morgan Price says:

    The root of the problem of unrealistic expectations is the oversupply of proposals?

    PS Citation metrics are useless as indicators of impact for papers that are less than 3 years old. So, I don’t see how they can work for assessing renewals.

  • mH says:

    "There is just no comparing apples to oranges on paper count when you get into the heady atmosphere of a Glam lab."

    Part of the point of RCR and other attempts to measure productivity and impact is to query whether huge, expensive, glam papers and labs really provide additional benefit proportional to their much higher cost (from the funders' point of view) or if it's just manufacturing prestige for star-belly sneetches. These are non-mutually exclusive. Funders also care about prestige and having stars to point at (all the better if they are nice young people and not middle aged sex creeps under investigation). I am sympathetic to the argument that there are things that don't come out in any metrics (and they don't) that might make this worthwhile. What I can't stand is the narrative that inevitably goes with it that vertical ascenders are carrying scientific discovery forward while the rest of NIH-land trails in their wake, doing the controls they forgot to do and otherwise "filling in the details."

    So maybe it's worth it for a funder to have a small stable of pedigree show ponies and a much larger population of draft horses who do most of the work. But you can't rail against glam if that's the case, because it's the same damn thing. Glam is based on incentives that in large part are created by funders, and I think funders are correct and justified in considering whether it's an incentive they should provide given where the money is from and what it is supposed to be for.

  • anonymous says:

    DM: This is obviously a difficult topic and as you point out there is no ideal way to do this. Thanks for your points, I think the ensuing discussion has been illuminating.

    At the end of the day there is always going to be some degree of balancing "quantity" against "quality". I agree completely that this has to be done consistently or not at all.

    The main issue I wanted to air is that many fields have senior scientists (Boomer generation) who have been funded repeatedly over many cycles, yet a really careful examination of the publication record reveals that they have NEVER actually solved the problem proposed. Although they publish, the papers are not really answering the question. Advances are mainly a question of incremental methodological advances that don't really move the field. These people suck up a lot of resources and just obstruct the process of genuine problem solving.

  • […] There is a growing disconnect between what is proposed in the average US National Institute of Health grant application and what it can actually pay to accomplish, argues pseudonymous biomedical researcher DrugMonkey. (DrugMonkey blog) […]

  • drugmonkey says:

    The main issue I wanted to air is that many fields have senior scientists (Boomer generation) who have been funded repeatedly over many cycles, yet a really careful examination of the publication record reveals that they have NEVER actually solved the problem proposed. Although they publish, the papers are not really answering the question. Advances are mainly a question of incremental methodological advances that don't really move the field. These people suck up a lot of resources and just obstruct the process of genuine problem solving.

    This, I hope you realize, is entirely in the eyes of the beholder. All of us can point to those guys over there who have never really done anything useful and suck up a lot of NIH resources. All of us who have been PIs (and I hope this is not news to you) have detractors who would point at our body of work and insist that we are tremendous wastes of resources who never do anything to solve anything really important.

    One person's shit mountain is another person's gold.

    The demonstrated strength of the NIH system is its breadth and diversity, its investigator initiated approach and the resulting tolerance it has for hits and misses.

    I may have my issues with Boomers but there are plenty of people in younger generations who will arrive at the end of their careers and have the *next* generation whining just as you are. This is a feature, not a bug. Top-down, heavily controlled science that is dictated by the few is bound to be less effective. Because the few can be (and are) wrong, blinded and biased.

  • anonymous reviewer says:

    "Wait, 1 million dollars to do a revision?! Wtf is wrong with her field?!"

    I can tell you how it happened in one instance for a glam journal where I happened to be the reviewer who demanded all the extra work. The initial submission was from a glam lab with a nicely crafted story that was certain to excite an editor, but the science was riddled with so many fatal methodological flaws that it was a completely unconvincing piece of garbage. I stated exactly what set of experiments would need to be done to convince me, which were so extensive that I expected the editors to reject it and the authors to move on somewhere else. Instead, I got the revised manuscript back over a year later (and I would guess well over $1M was spent in the process). I'm shocked it wasn't done well the first time.

    I feel this is a common glam lab strategy. After your first try the editor will always give in to an appeal since you have such a potentially awesome and exciting story. Then it isn't too hard to beat your reviewers into submission since you now have a list of exactly what experiments and issues need to be taken care of to satisfy the eager editor. And you also need a million bucks.

  • ScienceNoob says:

    Relatively recent grad here (got my BS in biology in 2016). Bounced around between various ecological jobs, recently started in biomedical, intending on grad school very soon to study evolution of cancer suppression (Peto's paradox kind of stuff). This piece and all of the comments (while some of it is a bit lost on me) have caused me some anxiety about beginning my career. I've known for years now that there's a lot of politics and other b.s. in science, but wow this is nauseating. I just want to make decent money, answer some interesting questions, and contribute to the bettering of society. Is that too much to ask? Should I get out now and find another career path before it's too late?

  • drugmonkey says:

    I just want to make decent money, answer some interesting questions, and contribute to the bettering of society. Is that too much to ask? Should I get out now and find another career path before its too late?

    Without knowing your baseline understanding and your expectations it is hard to say. I would say that if anything you read on this blog about the careerist aspects of grant-funded academic science in the US (assuming this limitation generically applies to your plans) shocks and/or nauseates you, you may have some deeper thinking to do.

    "answer some interesting questions and contribute to bettering of society" is the easy part - it is hard to be even minimally viable in academic science and not have this be true, imo.

    "decent money"- most grad students get paid above minimum wage and postdocs upwards from there. some of us thought that was decent money during training. others complain vociferously that they are basically indentured servants. ymmv.

    "career path"- do you want to be an old skoole hard money supported Prof with a continuously and generously funded lab for a decades long career? This may not be high-probability. Do you want to be paid for your whole working life to do something atleast vaguely related to science? better odds.
