Archive for the 'NIH Careerism' category

Repost- Your Grant in Review: Competing Continuation, aka Renewal, Apps

May 11 2018 Published under Fixing the NIH, NIH, NIH Careerism

Two recent posts discuss the topic of stabilizing NIH funding within a PI's career, triggered by a blog post from Mike Lauer and Francis Collins. In that post, the two NIH honchos claim to be losing sleep over the uncertainty of funding in the NIH extramural granting system, specifically as it applies to those PIs who received funding as an ESI and are now trying to secure the next round of funding.

One key part of this, in my view, is how they (the NIH) and we (extramural researchers, particularly those reviewing applications for the NIH) think about the proper review of Renewal (formerly known as competing continuation) applications. I'm reposting some thoughts I had on this topic for your consideration.

This post originally appeared Jan 28, 2016.
___
In the NIH extramural grant funding world, the maximum duration for a project is 5 years. It is possible at the end of a 5 year interval of support to apply to continue that project for another interval. The application for the next interval is, in general, competitively reviewed alongside new project proposals in the relevant study sections.

Comradde PhysioProffe addressed the continuation application at his Ftb joint. NIAID has a FAQ page.

The NIH Success Rate data shows that RPG success rates were 16.8% in 2013 and 18.1% in 2014. Comparable rates for competing continuation RPG applications were 35% in 2013 and 39% in 2014. So you can see why this is important.

I visited these themes before in a prior post. I think I covered most of the issues but in a slightly different way.

Today I want to try to get you folks to talk about prescriptions. How should a competing continuation / renewal NIH grant application be reviewed?

Now in my experience, the continuation application hinges on past productivity in a way that a new application does not. Reviewers are explicitly considering the work that has been conducted under the support of the prior award. The application is supposed to include a list of publications that have resulted from the prior award. It is also supposed to include a Progress Report that overviews what has been accomplished. So today I will be focusing on review mostly as it pertains to productivity. For reference, Berg's old post on the number of papers per grant dollar is here and shows an average output of 6 papers (IQR about 4-11) per $250K full modular award*.

Quoted bits are from my prior post.

Did you knock our socks off? This could be amazing ELEVENTY type findings, GlamourPub record (whether “expected” for your lab or not), unbelievably revolutionary advances, etc. If you have a record of this, nobody is going to think twice about what your Aims may have been. Probably won’t even give a hoot whether your work is a close match to the funding IC, for that matter.

We should probably separate these for discussion because, after all, how often is a panel going to recognize that a Nobel Prize type of publication has been supported by the award in the past 5 years? So maybe we should consider Glamour publications and amazing advances as two different scenarios. Are these going to push any renewal application over the hurdle for you even if the remaining items below are lacking? Does GlamMag substitute for direct attention to the experiments that were proposed or the Aims that guided the plan? In the extreme case, should we care if the work bears very little on the mission of the IC that has funded it?

Were you productive? Even if you didn’t WOW the world, if you’ve pumped out a respectable number of papers that have some discernible impact on a scientific field, you are in good shape. The more, the merrier. If you look “fabulously productive” and have contributed all kinds of interesting new science on the strength of your award(s), this is going to go down like gangbusters with the review panels. At this level of accomplishment you’d probably be safest at least doing stuff that is vaguely in line with the IC that has funded your work.

Assuming that Glam may not be in the control of most PIs but that pedestrian, workaday scientific output is, should this be a major credit for the continuation application? We don't necessarily have to turn this into a LPU sausage-slicing discussion. Let's assume a quality of paper commensurate with the kind of work that most PIs with competitive applications in that particular study section publish. Meets the subfield standard. How important should raw productivity be?

Were you productive in addressing your overall goals? This is an important distinction from the Specific Aims. It is not necessary, in my view, that you hew closely to Aims first dreamed up 7 years prior to the conclusion of the actual study. But if you have moderate, or disappointing, productivity it is probably next most-helpful that you have published work related to the overall theme of the project. What was the big idea? What was mentioned in the first three sentences of your Specific Aims page? If you have published work related to this broad picture, that’s good.

This one is tricky. The reviewers do not have the prior grant application in front of them. They have the prior Summary Statement and the Abstract as published on RePORTER. It is a decent bet the prior Aims can be determined, but broader themes may or may not come across. So for the most part, if the applicant expects the reviewers to see that productivity has aligned with overarching programmatic goals, she has to tell them what those were, presumably in the Progress Report part of the continuation application. How would you approach this as a reviewer? Suppose the project wasn't overwhelmingly productive and didn't obviously address all of the Aims, but at least generated some solid work along the general themes. Are you going to be satisfied? Or are you going to downgrade the failure to address each Aim? What if the project had to can an entire Aim or two? Would it matter? Is getting "stuck" in a single Aim a death knell when it comes time to review the next interval of support? As a related question, what if the same exact Aim returns with the argument "We didn't get to this in the past five years but it is still a good idea"? Neutral? Negative? AYFK?

Did you address your original Specific Aims? ...this can be a big obsession of certain reviewers. Not saying it isn’t a good idea to have papers that you can connect clearly to your prior Aims. ... A grant is not a contract. It is quite natural in the course of actual science that you will change your approaches and priorities for experiments. Maybe you’ve been beaten to the punch. Maybe your ongoing studies tell you that your original predictions were bad and you need to go in a whole new direction. Maybe the field as a whole has moved on. ... You might want to squeeze a drop out of a dry well to meet the “addressed Aims” criterion but maybe that money, effort and time would be better spent on a new direction which will lead to three pubs instead of one?

My original formulation of this isn't quite right for today's discussion. The last part is actually more relevant to the preceding point. For today, expand this to a continuation application that shows that the prior work essentially covers exactly what the application proposed, with data either published or included as ready-to-submit Preliminary Data in the renewal. Maybe this was accomplished with only a few papers in pedestrian journals (Lord knows just about every one of my manuscript reviews these days gets at least one critique that calls for anywhere from 2 to 5 Specific Aims worth of data) so we're not talking about Glam or fabulous productivity. But should addressing all of the Aims and most if not all of the proposed experiments be enough? Is this a credit to a competing continuation application?

It will be unsurprising to you that by this point of my career, I've had competing continuation applications to which just about all of these scenarios apply, save Glam. We've had projects where we absolutely nailed everything we proposed to do. We've had projects get distracted/sidelined off onto a subsection of the proposal that nevertheless generated about the same number and quality of publications that would have otherwise resulted. We've had low productivity intervals of support that addressed all the Aims and ones that merely covered a subset of key themes. We've had projects with reasonably high productivity that have....wandered....from the specifics of the awarded proposal due to things that are happening in the subfield (including getting scooped). We've never been completely blanked on a project with zero related publications to my recollection, but we've had some very low productivity ones (albeit with excellent excuses).

I doubt we've ever had a perfect storm of sky-high productivity, all Aims addressed and the overarching themes satisfied. Certainly I have the review comments to suggest this**.

I have also been present during review panel discussions of continuation applications where reviewers have argued bitterly over the various productivity attributes of a prior interval of support. The "hugely productive" arguments frequently concern an application from a PI who has more than one award and tends to acknowledge more than one of them on each paper. This can also involve debates about so-called "real scientific progress" versus papers published. These fights can be about the Aims, the overall theme or just about the sneer of "they don't really do any interesting science".

I have for sure heard from people who are obsessed during review with whether each proposed experiment has been conducted (this was back in the days when summary statements could be fairly exhaustive and revealed what was in the prior application to a broader extent). More generally from reviewers who want to match publications up to the scope of the general scientific terrain described by the prior application.

I've also seen arguments about suggested controls or key additional experiments which were mentioned in the summary statement of the prior review, never addressed in the resulting publications and may still be a criticism of the renewal application.

Final question: Since the reviewers of the competing continuation see the prior summary statement, they see the score and percentile. Does this affect you as a reviewer? Should it? Especially if, in your view, this particular application should never have been funded at that score and is likely a Programmatic pickup? Do you start steaming under the collar about special ESI paylines or bluehair/graybeard insider PO backslapping?

DISCLAIMER: As per usual, I may have competing continuation applications under current or near-future review by NIH study sections. I am an interested party in how they are reviewed.
__
*This probably speaks to my point about how multi-award PIs acknowledge more than one grant on each paper. My experience has not been that people in my field view 5 papers published per interval of support (and remember the renewal application is submitted with the final year of funded support yet to go, if the project is to continue uninterrupted) as expected value. It is certainly not viewed as the kind of fabulous productivity that of course would justify continuing the project. It is more in line with the bare minimum***. Berg's data are per-grant-dollar of course and are not exactly the same as per-grant, but it is a close estimate. This blog post estimates "between 0.6 and 5 published papers per $100k in funding," which is one to 12 per year of a full-modular NIH R01. Big range, and that high number seems nigh on impossible to me without other funding (like free trainee labor or data parasitism).
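For anyone who wants to check the conversion, here is a quick back-of-envelope sketch. The $250K-per-year figure for a full-modular R01 comes from the post above; the function name and rounding are my own.

```python
# Convert a papers-per-$100K estimate into papers per year of a
# full-modular NIH R01. Assumption (per the post): a "full modular"
# R01 is taken as $250K in direct costs per year.

def papers_per_grant_year(papers_per_100k: float,
                          annual_directs: float = 250_000) -> float:
    """Scale a papers-per-$100K rate to papers per grant-year."""
    return papers_per_100k * (annual_directs / 100_000)

low = papers_per_grant_year(0.6)   # lower bound of the linked estimate
high = papers_per_grant_year(5.0)  # upper bound of the linked estimate
print(f"{low:.1f} to {high:.1f} papers per year")  # about 1.5 to 12.5
```

With rounding, that is the "one to 12 per year" range quoted above.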

**and also a pronounced lack of success renewing projects to go with it.

***I do not personally agree. At the point of submitting a competing continuation in year 4 a brand new research program (whether b/c noob PI or very new lab direction) may have really only been rocking for 2 years. And large integrated projects like a big human subjects effort may not even have enrolled all the subjects yet. Breeding, longitudinal development studies, etc - there are many models that can all take a long time to get to the point of publishing data. These considerations play....let us say variably, with reviewers. IME.


Stability of funding versus the project-based funding model of the NIH

May 09 2018 Published under Fixing the NIH, NIH, NIH Careerism, NIH funding

In response to a prior post, Morgan Price wonders about the apparent contrast of NIH's recent goal to stabilize research funding and the supposed "project-based" model.

I don't see how stability based funding is consistent with project-based funding and "funding the best science". It would be a radical change...?

NIH grants are supposed to be selected and awarded on the basis of the specific project that is proposed. That is why there is such extensive detailing of a very specific area of science, well specified Specific (not General!) Aims and a listing of specific experiments.

They are not awarded on the basis of a general program of research that seems to be promising for continued funding.

Note that there are indeed mechanisms of funding that operate on the program level to a much greater extent, HHMI being one of the more famous of these. In a program-based award, the emphasis is on what the investigating team (and generally this means specifically the PI) has accomplished and published in recent years. There may be some hints about what the person plans to work on next, but generally the emphasis is on past performance rather than the specific nature of the future plan.

In their recent handwringing, the NIH honchos fret about how to sustain the investigators that they have launched with special consideration for their newcomer status (e.g., Early Stage Investigator PI applications can be funded at lower priority scores / percentile ranks than would be needed by an established investigator):

if we are going to nurture meritorious, productive mid-career investigators by stabilizing their funding streams, monies will have to come from somewhere.

"Stabilizing," Morgan Price assumes, would be a radical change. It is not.

Here's the trick:

The NIH funding system has always been a hybrid which pays lip service to "project based funding" as a model while blithely using substantial, but variable, input from the "program based" logic. First off, the "Investigator" criterion of proposal review is one of 5 supposedly co-equal major criteria. The Biosketch (which details the past accomplishments and skills of the PI) is prominent in the application. This Biosketch lists both papers and prior research grant support*, which inevitably leads to some degree of assessment of how productive the PI was with her prior awards. This then is used to judge the merit of the proposal that is under current review. Sounds just a bit like HHMI, doesn't it?

The competing continuation application (called a Renewal application now) is another NIH beast that reveals the hybrid nature of the selection system. You are allowed to ask for no more than 5 years of support for a given project, but you can then ask for successive five year extensions via competitive application review. This type of proposal has a "Progress Report" and a list of papers resulting from the project required within the application. This, quite obviously, focuses the review in large part on the past accomplishment. Now, sure, the application also has to have a detailed proposal for the next interval. Specific Aims. Experiments listed. But it also has all of the prior accomplishments pushed into the center of the review.

So what is the problem? Why are Collins and Lauer proposing to make the NIH grant selection even more based on the research program? Well, times have changed. The figure here is a bit dated by now but I like to keep refreshing your view of it because NIH has this nasty tendency to truncate their graphs to only the past decade or so. The NIH does this to obscure just how good investigators had things in the 80s. That was when established investigators enjoyed success rates north of 40%. For all applications, not just for competing renewals. Many of the people who started their careers in those wonderful days are still very much with us, by the way. This graph shows that within a few years of the end of the doubling, the success rates for established investigators had dropped to about where the new investigators were in the 1980s. Success rates have only continued to get worse but thanks to policies enacted by Zerhouni, the established and new investigator success rates have been almost identical since 2007.
Interestingly, one of the things Zerhouni had to do was to insist that Program change their exception pay behavior. (This graph was recreated from a GAO report [PDF], page down to Page 56, PDF page 60.) It is relevant because it points to yet another way that the NIH system used to prioritize program qualities over the project qualities. POs historically were much more interested in "saving" previously funded, now unfunded, labs than they were in saving not-yet-funded labs.

Now we get to Morgan Price's point about "the best science". Should the NIH system be purely project-based? Can we get the best science one 5 year plan at a time?

I say no. Five years is not enough time to spool up a project of any heft into a well-honed and highly productive gig. Successful intervals of 5 year grants depend on what has come before to a very large extent. Oftentimes, adding the next 5 years of funding via Renewal leads to an even more productive time because it leverages what has come before. Stepping back a little bit, gaps in funding can be deadly for a project. A project that has been killed off just as it is getting good is not only not the "best" science, it is hindered science. A lack of stability across the NIH system has the effect of making all of its work even more expensive, because something killed off in Lab 1 (due to gaps in funding) can only be started up in Lab 2 at a handicap. Sure, Lab 2 can leverage the published results of Lab 1, but not the unpublished stuff and not all of the various forms of expertise locked up in the Lab 1 staff's heads.

Of course if too much of the NIH allocation goes to sinecure program-based funding to continue long-running research programs, this leads to another kind of inefficiency. The inefficiencies of opportunity cost, stagnation, inflexibility and dead-woodery.

So there is a balance. Which no doubt fails to satisfy most everyone's preferences.

Collins and Lauer propose to do a bit of re-balancing of the program-based versus project-based relationship, particularly when it comes to younger investigators. This is not radical change. It might even be viewed in part as a selective restoration of past realities of grant funded science careers.

__
*In theory the PI's grants are listed on the Biosketch merely to show the PI is capable of leading a project something like the one under review. Correspondingly, it would in theory be okay to just list the most successful ones and leave out the grant awards with under-impressive outcomes. After all, do you have to put in every paper? no. Do you have to put every bit of bad data that you thought might be preliminary data into the app? no. So why do you have to** list all of your grants? This is the program-based aspect of the system at work.

**dude, you have to. this is one of those culture of review things. You will be looked up on RePORTER and woe be to you if you try to hide some project, successful or not, that has active funding within the past three years.


Addressing the Insomnia of Francis Collins and Mike Lauer

The Director of the NIH and the Deputy Director in charge of the Office of Extramural Research have posted a blog post about The Issue that Keeps Us Awake at Night. Judging from what they have written, it is the plight of the young investigator.


The Working Group is also wrestling with the issue that keeps us awake at night – considering how to make well-informed strategic investment decisions to nurture and further diversify the biomedical research workforce in an environment filled with high-stakes opportunity costs. If we are going to support more promising early career investigators, and if we are going to nurture meritorious, productive mid-career investigators by stabilizing their funding streams, monies will have to come from somewhere. That will likely mean some belt-tightening in other quarters, which is rarely welcomed by those whose belts are being taken in by a notch or two.

They plan to address this by relying on data and reports that are currently being generated. I suspect this will not be enough to address their goal.

I recently posted a link to the NIH summary of their history of trying to address the smooth transition of newly minted PIs into NIH-grant funded laboratories, without much comment. Most of my Readers are probably aware by now that handwringing from the NIH about the fate of new investigators has been an occasional feature since at least the Johnson Administration. The historical website details the most well known attempts to fix the problem. From the R23 to the R29 FIRST to the New Investigator check box, to the "sudden realization"* they needed to invent a true Noob New Investigator (ESI) category, to the latest designation of the aforementioned ESIs as Early Established Investigators for continued breaks and affirmative action. It should be obvious from the ongoing reinvention of the wheel that the NIH periodically recognizes that the most recent fix isn't working (and may have unintended detrimental consequences).

One of the reasons these attempts never truly work and have to be adjusted or scrapped and replaced by the next fun new attempt was identified by Zerhouni (a prior NIH Director) in about 2007. This was right after the "sudden realization" and the invention of the ESI. Zerhouni was quoted in a Science news bit as saying that study sections were responding to the ESI special payline boost by handing out ever worsening scores to the ESI applications.

Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni.

Now, I would argue that viewing this trend of worsening scores as "punishing" is at best only partially correct. We can broaden this to incorporate a simple appreciation that study sections adapt their biases, preferences and evolved cultural ideas about grant review to the extant rules. One way to view worsening ESI scores may have to do with the pronounced tendency reviewers have to think in terms of fund it / don't fund it, despite the fact that SROs regularly exhort them not to do this. When I was on study section regularly, the scores tended to pile up around the perceived payline. I've seen the data for one section across multiple rounds. Reviewers were pretty sensitive to the scuttlebutt about what sort of score was going to be a fundable one. So it would be no surprise whatsoever to me if there was a bias driven by this tendency, once it was announced that ESI applications would get a special (higher) payline for funding.

This tendency might also be driven in part by a "Get in line, youngun, don't get too big for your britches" phenomenon. I've written about this tendency a time or two. I came up as a postdoc towards the end of the R29 / FIRST award era and got a very explicit understanding that some established PIs thought that newbies had to get the R29 award as their first award. Presumably there was a worsening bias against giving out an R01 to a newly minted assistant professor as their first award**, because hey, the R29 was literally the FIRST award, amirite?

sigh.

Then we come to hazing, which is the even nastier relative of "Don't get too big for your britches". Oh, nobody will admit that it is hazing, but there is definitely a subcurrent of this in the review behavior of some people who think that noob PIs have to prove their worth by battling the system. If they sustain the effort to keep coming back with improved versions, then hey, join the club kiddo! (Here's an ice pack for the bruising). If the PI can't sustain the effort to submit a bunch of revisions and new attempts, hey, she doesn't really have what it takes, right? Ugh.

Scientific gate-keeping. This tends to cover a multitude of sins of various severity but there are definitely reviewers that want newcomers to their field to prove that they belong. Is this person really an alcohol researcher? Or is she just going to take our*** money and run away to do whatever basic science amazeballs sounded super innovative to the panel?

Career gate-keeping. We've gone many rounds on this one within the science blog- and twittospheres. Who "deserves" a grant? Well, reviewers have opinions and biases and despite their best intentions and wounded protestations...these attitudes affect review. In no particular order we can run down the favorite targets of the "Do it to Julia, not me, JULIA!" sentiment. Soft money job categories. High overhead Universities. Well funded labs. Translational research taking all the money away from good honest basic researchers***. Elite coastal Universities. Big Universities. R1s. The post-normative-retirement crowd. Riff-raff plodders.

Layered over the top of this is favoritism. It interacts with all of the above, of course. If some category of PI is to be discriminated against, there is very likely someone getting the benefit. The category of which people approve. Our club. Our kind. People who we like who must be allowed to keep their funding first, before we let some newbie get any sniff of a grant.

This, btw, is a place where the focus must land squarely on Program Officers as well. The POs have all the same biases mentioned above, of course. And their versions of the biases have meaningful impact. But when it comes to the thought of "we must save our long term investigators" they have a very special role to play in this debacle. If they are not on board with the ESI worries that keep Collins and Lauer awake at night, well, they are ideally situated to sabotage the effort. Consciously or not.

So, Director Collins and Deputy Director Lauer, you have to fix study section and you have to fix Program if you expect to have any sort of lasting change.

I have only a few suggestions and none of this is a silver bullet.

I remain convinced that the only tried and true method to minimize the effects of biases (covert and overt) is the competition of opposing biases. I've remarked frequently that study sections would be improved and fairer if less-experienced investigators had more power. I think the purge of Assistant Professors effected by the last head of the CSR (Scarpa) was a mistake. I note that CSR is charged with balancing study sections on geography, sex, ethnicity, university type and even scientific subdomains...while explicitly discriminating against younger investigators. Is it any wonder if there is a problem getting the newcomers funded?

I suggest you also pay attention to fairness. I know you won't, because administrators invariably respond to a situation of perceived past injustice with "ok, that was the past and we can't do anything about it, moving forward please!". But this is going to limit your ability to shift the needle. People may not agree on what represents fair treatment but they sure as heck are motivated by fairness. Their perception of whether a new initiative is fair or unfair will tend to shape their behavior when reviewing. This can get in the way of NIH's new agenda if reviewers perceive themselves as being mistreated by it.

Many of the above mentioned reviewer quirks are hardened by acculturation. PIs who are asked to serve on study section have been through the study section wringer as newbies. They are susceptible to the idea that it is fair if the next generation has it just about as hard as they did and that it is unfair if newbies these days are given a cake walk. Particularly, if said established investigators feel like they are still struggling. Ahem. It may not seem logical but it is simple psychology. I anticipate that the "Early Established Investigator" category is going to suffer the same fate as the ESI category. Scores will worsen, compared to pre-EEI days. Some of this will be the previously mentioned tracking of scores to the perceived payline. But some of this will be people**** who missed out on the ESI assistance and feel that it is unfair that the generation behind them gets yet another handout to go along with the K99/R00 and ESI plums. The intent to stabilize the careers of established investigators is a good one. But limiting this to "early" established investigators, i.e., those who already enjoyed the ESI era, is a serious mistake.

I think Lauer is either aware, or verging on awareness, of something that I've mentioned repeatedly on this blog: that a lot of the pressure on the grant system (increasing numbers of applications, PIs seemingly applying greedily for grants when already well funded, the revision queuing traffic pattern hold) comes from a vicious cycle of the attempt to maintain stable funding. When, as a VeryEstablished colleague put it to me surprisingly recently, "I just put in a grant when I need another one and it gets funded" is the expected value, PIs can be efficient with their grant behavior. If they need to put in eight proposals to have a decent chance of one landing, they do that. And if they need to start submitting apps 2 years before they "need" one, the randomness is going to mean they seem overfunded now and again. This applies to everyone all across the NIH system. Thinking that it is only those on their second round of funding who have this stability problem is a huge mistake for Lauer and Collins to be making. And if you stabilize some at the expense of others, this will not be viewed as fair. It will not be viewed as shared pain.

If you can't get more people on board with a mission of shared sacrifice, or unshared sacrifice for that matter, then I believe NIH will continue to wring its hands about the fate of new investigators for another forty years. There are too many applicants for too few funds. It amps up the desperation and amps up the biases for and against. It decreases the resistance of peer reviewers to do anything to Julia that they expect might give a tiny boost to the applications of them and theirs. You cannot say "do better" and expect reviewers to change, when the power of the grant game contingencies is so overwhelming for most of us. You cannot expect program officers who still to this day appear entirely clueless about the way things really work in extramural grant-funded careers to suddenly do better because you are losing sleep. You need to delve into these psychologies and biases and cultures and actually address them.

I'll leave you with an exhortation to walk the earth, like Caine. I've had the opportunity to watch some administrative frustration, inability and nervousness verging on panic in the past couple of years that has brought me to a realization. Management needs to talk to the humblest of their workforce instead of the upper crust. In the case of the NIH, you need to stop convening preening symposia from the usual suspects, taking the calls of your GlamHound buddies and responding only to reps of learn-ed societies. Walk the earth. Talk to real applicants. Get CSR to identify some of your most frustrated applicants and see what is making them fail. Find out which of the apparently well-funded applicants have to work their tails off to maintain funding. Compare and contrast to prior eras. Ask everyone what it would take to Fix the NIH.

Of course this will make things harder for you in the short term. Everyone perceives the RealProblem as that guy, over there. And the solutions that will FixTheNIH are whatever makes their own situation easier.

But I think you need to hear this. You need to hear the desperation and the desire most of us have simply to do our jobs. You need to hear just how deeply broken the NIH award system is for everyone, not just the ESI and EEI category.

PS. How's it going solving the problem identified by Ginther? We haven't seen any data lately but at last check everything was as bad as ever so...

PPS. Are you just not approving comments on your blog? Or is this a third rail issue nobody wants to comment on?
__
*I make fun of the "sudden realization" because it took me about 2 h of my very first study section meeting ever to realize that "New Investigator" checkbox applicants from genuine newbies did very poorly and all of these were being scooped up by very well established and accomplished investigators who simply hadn't been NIH funded. Perhaps they were from foreign institutions, now hired in the US. Or perhaps lived on NSF or CDC or DOD awards. The idea that it took NIH something like 8-10 years to realize this is difficult to stomach.

**The R29 was crippled in terms of budget, btw, and had other interesting features.

***lolsob

****Yep, that would be my demographic.

12 responses so far

NIH's long sordid history of failing to launch new investigators fairly and cleanly

May 03 2018 Published by under Fixing the NIH, NIH, NIH Careerism

Actually, they call it "A History of Commitment"

It starts with the launch of the R23 in 1977, covers the invention and elimination of the R29 FIRST and goes all the way to the 2017 announcement that prior ESI still need help, this time for their second and third rounds of funding as "Early Established Investigators".

pssst, guys. FIX STUDY SECTIONS and PO BEHAVIOR.

Updated to add:
Mike Lauer is wringing his hands on the blog about The Issue that (allegedly) keeps us (NIH officialdom) awake at night [needs citation].

We pledge to do everything we can to incorporate those recommendations, along with those of the NASEM panel, in our ongoing efforts to design, test, implement, and evaluate policies that will assure the success of the next generation of talented biomedical researchers.

5 responses so far

NIH reminds Universities not to keep paying harasser PIs from grant funds while suspended

On May 1, 2018, the NIH issued NOT-OD-18-172 to clarify that:

NIH seeks to remind the extramural community that prior approval is required anytime there is a change in status of the PD/PI or other senior/key personnel where that change will impact his/her ability to carry out the approved research at the location of, and on behalf of, the recipient institution. In particular, changes in status of the PI or other senior/key personnel requiring prior approval would include restrictions that the institution imposes on such individuals after the time of award, including but not limited to any restrictions on access to the institution or to the institution’s resources, or changes in their (employment or leave) status at the institution. These changes may impact the ability of the PD/PI or other senior/key personnel to effectively contribute to the project as described in the application; therefore, NIH prior approval is necessary to ensure that the changes are acceptable.

Hard on the heels of the news breaking about long-term and very well-funded NIH grant Principal Investigators Thomas Jessell and Inder Verma being suspended from duties at Columbia University and The Salk Institute for Biological Studies, respectively, one cannot help but draw the obvious conclusion.

I don't know what prompted this Notice but I welcome it.

Now, I realize that many of us would prefer to see some harsher stuff here. Changing the PI of a grant still keeps the sweet sweet indirects flowing into the University or Institute. So there is really no punishment when an applicant institution is proven to have looked the other way for years (decades) when their well-funded PIs are accused repeatedly of sexual harassment, gender-based discrimination, retaliation against whistleblowers and the like.

But this Notice is still welcome. It indicates that perhaps someone is actually paying a tiny little bit of attention now in this post-Weinstein era.

3 responses so far

Question of the Day

How do you assess whether you are too biased about a professional colleague and/or their work?

In the sense that you would self-elect out of reviewing either their manuscripts for publication or their grant applications.

Does your threshold differ for papers versus grants?

Do you distinguish between antipathy bias and sympathy bias?

8 responses so far

Delay, delay, delay

I'm not in favor of policies that extend the training intervals. Pub requirements for grad students are a prime example. So is the "need" to do two 3-5 year postdocs to be competitive. These are mostly problems made by the Professortariat directly.

But NIH has slipped into this game. Postdocs "have" to get evidence of funding, with F32 NRSAs and above all else the K99 featuring as top plums.

Unsurprisingly the competition has become fierce for these awards. And as with R-mechs this turns into the traffic pattern queue of revision rounds. Eighteen months from first submission to award if you are lucky.

Then we have the occasional NIH Institute which adds additional delaying tactics. "Well, we might fund your training award next round, kid. Give it another six months of fingernail biting."

We had a recent case on the twttrs where a hugely promising young researcher gave up on this waiting game and took a job in their home country, only to get notice that the K99 would fund. Too late! We (MAGA) lost them.

I want NIH to adopt a "one and done" policy for all training mechanisms. If you get out-competed for one, move along to the next stage.

This will decrease the inhumane waiting game. It will hopefully open up other opportunities (transition to quasi-faculty positions that allow R-mech or foundation applications) faster. And overall speed progress through the stages, yes even to the realization that an alternate path is the right path.

29 responses so far

Variability in NIH Grant review is a good feature, not a bug, not a terrible indictment of the system

Mar 13 2018 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding, Peer Review

You may see more dead horse flogging than usual, folks. The Commentariat is not as vigorous as I might like yet.

This emphasizes something I had to say about the Pier monstrosity purporting to study the reliability of NIH grant review.
Terry McGlynn says:

Absolutely. We do not want 100% fidelity in the evaluation of grant "merit". If we did that, and review was approximately statistically representative of the funded population, we would all end up working on cancer in the end.

Instead, we have 28 Institutes or Centers. These are broken into Divisions that have fairly distinct missions. There are Branches within the Divisions and multiple POs who may have differing viewpoints. CSR fields a plethora of study sections, many of which have partially overlapping missions, meaning a given grant could be reviewed in any of several different sections. A standing section might easily have 20-30 reviewers per meeting and your grant might reasonably be assigned to several different permutations of three for primary assessment. Add to this the fact that reviewers change over time within a study section, even across rounds to which you are submitting approximately the same proposal. There should be no wonder whatsoever that review outcome for a given grant might vary a bit under differing review panels.
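The combinatorics alone make identical outcomes unlikely. A rough back-of-the-envelope sketch (the 25-reviewer panel size and the 3 plausible study sections here are illustrative assumptions, not NIH data):

```python
from math import comb

# A standing study section of 25 reviewers admits comb(25, 3)
# distinct three-reviewer combinations for primary assessment.
panels_per_section = comb(25, 3)
print(panels_per_section)  # 2300

# If an application could plausibly land in any of 3 partially
# overlapping study sections, the number of distinct primary-review
# configurations multiplies accordingly.
print(3 * panels_per_section)  # 6900
```

And that is before counting reviewer turnover across rounds or differences between POs, so two submissions of "the same" grant are essentially never evaluated by the same configuration of people.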

Do you really want perfect fidelity?

Do you really want that 50% triage and another 30-40% scored-outside-the-payline to be your unchangeable fate?

Of course not.

You want the variability in NIH Grant review to work in your favor.

If a set of reviewers finds your proposal unmeritorious, do you give up* and start a whole 'nother research program? Eventually to quit your job and do something else when you don't get funded after the first 5 or 10 tries?

Of course not. You conclude that the variability in the system went against you this time, and come back for another try. Hoping that the variability in the system swings your way.

Anyway, I'd like to see more chit chat on the implicit question from the last post.

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process.

Well? How much reliability in the system do you want, Dear Reader?

__
*ok, maybe sometimes. but always?

13 responses so far

What does it mean if a miserly PI won't pay for prospective postdoc visits?

Feb 20 2018 Published by under Careerism, NIH Careerism

It is indubitably better for the postdoctoral training stint if the prospective candidate visits the laboratory before either side commits. The prospective gets a chance to see the physical resources, gets a chance for very specific and focused time with the PI and above all else, gets a chance to chat with the lab's members.

The PI gets a better opportunity to suss out strengths and weaknesses of the candidate, as do the existing lab members. Sometimes the latter can sniff things out that the prospective candidate does not express in the presence of the PI.

These are all good things and if you prospective trainees are able to visit a prospective training lab it is wise to take advantage.

If memory serves the triggering twittscussion for this post started with the issue of delayed reimbursement of travel and the difficulty some trainees have in floating expenses of such travel until the University manages to cut a reimbursement check. This is absolutely an important issue, but it is not my topic for today.

The discussion quickly went in another direction, i.e. if it is meaningful to the trainee if the PI "won't pay for the prospective to visit". The implication being that if a PI "won't" fly you out for a visit to the laboratory, this is a bad sign for the future training experience and of course all prospectives should strike that PI off their list.

This perspective was expressed by both established faculty and apparent trainees so it has currency in many stages of the training process from trainee to trainer.

It is underinformed.

I put "won't" in quotes above for a reason.

In many situations the PI simply cannot pay for travel visits for recruiting postdocs.

They may appear to be heavily larded with NIH research grants and still do not have the ability to pay for visits. This is, in the experience of me and others chiming in on the Twitts, because our institutional grants management folks tell us it is against the NIH rules. There emerged some debate about whether this is true or whether said bean counters are making an excuse for their own internal rulemaking. But for the main issue today, this is beside the point.

Some PIs cannot pay for recruitment travel from their NIH R01(s).

Not "won't". Cannot. Now as to whether this is meaningful for the training environment, the prospective candidate will have to decide for herself. But this is some fourth level stuff, IMO. PIs who have grants management which works at every turn to free them from rules are probably happier than those that have local institutional policies that frustrate them. And as I said at the top, it is better, all else equal, when postdocs can be consistently recruited with laboratory visits. But is the nature of the institutional interpretation of NIH spending rules a large factor against the offerings of the scientific training in that lab? I would think it is a very minor part of the puzzle.

There is another category of "cannot" which applies semi-independently of the NIH rule interpretation: the PI may simply not have the cash. Due to lack of a grant or lack of a non-Federal pot of funds, the PI may be unable to spend in the recruiting category even if other PIs at the institution can do so. Are these meaningful to the prospective? Well, the lack of a grant should be. I think most prospectives who seek advice about finding a lab will be told to check into the research funding. It is kind of critical that there be enough for whatever the trainee wants to accomplish. The issue of slush funds is a bit more subtle but sure, it matters. A PI with grants and copious slush funds may offer a better resourced training environment. Trouble is that this comes with other correlated factors of importance. Bigger lab, more important jet-setting PI... these are going to be more likely to have extra resources. So it comes back to the usual trade-offs and considerations. In the face of that it is unclear that the ability to pay for recruiting is a deciding factor. It is already correlated with other considerations the prospective is wrestling with.

Finally we get to actual "will not". There are going to be situations where the PI has the ability to pay for the visit but chooses not to. Perhaps she has a policy never to do so. Perhaps he only pays for the top candidates because they are so desired. Perhaps she does this for candidates when there are no postdocs in the lab but not when there are three already on board. Or perhaps he doesn't do it anymore because the last three visitors failed to join the lab*.

Are those bad reasons? Are they reasons that tell the prospective postdoc anything about the quality of the future training interaction?

__
*Extra credit: Is it meaningful if the prospective postdoc realizes that she is fourth in line, only having been invited to join the lab after three other people passed on the opportunity?

4 responses so far

NIH encourages pre-prints

In March of 2017 the NIH issued a notice on Reporting Preprints and Other Interim Research Products (NOT-OD-17-050): "The NIH encourages investigators to use interim research products, such as preprints, to speed the dissemination and enhance the rigor of their work."

The key bits:

Interim Research Products are complete, public research products that are not final.

A common form is the preprint, which is a complete and public draft of a scientific document. Preprints are typically unreviewed manuscripts written in the style of a peer-reviewed journal article. Scientists issue preprints to speed dissemination, establish priority, obtain feedback, and offset publication bias.

Another common type of interim product is a preregistered protocol, where a scientist publicly declares key elements of their research protocol in advance. Preregistration can help scientists enhance the rigor of their work.

I am still not happy about the reason this happened (i.e., Glam hounds trying to assert scientific priority in the face of the Glam Chase disaster they themselves created) but this is now totally beside the point.

The NIH policy (see OpenMike blog entry for more) has several implications for grant seekers and grant holders which are what form the critical information for your consideration, Dear Reader.

I will limit myself here to materials that are related to standard paper publishing. There are also implications for materials that would never be published (computer code?) but that is beyond the scope for today's discussion.

At this point I will direct you to bioRxiv and PsyRxiv if you are unfamiliar with some of the more popular approaches for pre-print publication of research manuscripts.

The advantages to depositing your manuscripts in pre-print form are all about priority and productivity, in my totally not humble opinion. The former is why the Glamour folks are all a-lather, but priority and scooping affect each of us a little differently. As most of you know, scooping and priority are not a huge part of my professional life, but all things equal, it's better to get your priority on record. In some areas of science it is career making/breaking and grant getting/rejecting to establish scientific priority. So if this is a thing for your life, this new policy allows and encourages you to take advantage.

I'm more focused on productivity. First, this is an advantage for trainees. We've discussed the tendency of new scientists to list manuscripts "in preparation" on their CV or Biosketch (for fellowship applications, say, despite it being technically illegal). This designation is hard to evaluate. A nearing-defense grad student who has three "in prep" manuscripts listed on the CV can appear to be bullshitting you. I always caution people that if they list such things they had better be prepared to send a prospective post-doc supervisor a mostly-complete draft. Well, now the pre-print allows anyone to post "in preparation" drafts so that anyone can verify the status. Very helpful for graduate students who have a short timeline versus the all too typical cycle of submission/rejection/resubmission/revision, etc. More importantly, the NIH previously frowned on listing "in preparation" or "in review" items on the Biosketch. This was never going to result in an application being returned unreviewed but it could sour the reviewers. And of course any rule followers out there would simply not list any such items, even if there was a minor revision being considered. With pre-print deposition and the ability to list on a NIH biosketch and cite in the Research Plan there is no longer any vaporware type of situation. The reviewer can look at the pre-print and judge the science for herself.

This applies to junior PIs as well. Most likely, junior PIs will have fewer publications, particularly from their brand new startup labs. The ability of the PI to generate data from her new independent lab can be a key issue in grant review. As with the trainee, the cycle of manuscript review and acceptance is lengthy compared with the typical tenure clock. And of course many junior PIs are trying to balance JIF/Glam against this evidence of independent productivity. So pre-print deposition helps here.

A very similar situation can apply to us not-so-junior PIs who are proposing research in a new direction. Sure, there is room for preliminary data in a grant application but the ability to submit data in manuscript format to the bioRxiv or some such is unlimited! Awesome, right?

15 responses so far
