Archive for the 'NIH Careerism' category

Reminder: The purpose of NIH grant review is not to help out the applicant with kindness

Oct 06 2016 Published by under Grant Review, NIH, NIH Careerism

The reviewers of NIH grant applications are charged with helping the Program staff of the relevant NIH Institute or Center decide on the relative merits of applications as they, the Program staff, consider which ones to select for funding.

Period.

They are not charged with trying to help the PI improve his or her grantspersonship*. They are not charged with helping the PI get this particular grant funded on revision. They are not charged with being kind or nice to the PI. They are not charged with saving someone's career.

They are not charged with deciding what grants to fund!

If they can also be kind, help the PI improve her grant for next time, help her improve her grantsmithing in general and/or in passing save someone's career, hey great. Bonus. Perfectly acceptable outcome of the process.

But if the desire to accomplish any of these things compromises the assessment of merit** that serves the needs of the Program staff, that reviewer is screwing up.

__
*Maybe start a blog if this is your compulsion? I've heard that works for some people who have such urges.

**"merit" in this context is not necessarily what any given reviewer happens to think it is a priori, either. For example, there could be a highly targeted funding opportunity with stated goals that a given reviewer doesn't really agree with. IMV, that reviewer is screwing up if she substitutes her goals for the goals expressed by the I or C in the funding opportunity announcement.

14 responses so far

NIH always jukes the stats in their favor

Oct 04 2016 Published by under Gender, NIH, NIH Careerism, NIH funding

DataHound requested information on submissions and awards for the baby MIRA program from NIGMS. His first post noted what he considered to be a surprising number of applications rejected prior to review. The second post identified what appears to be a disparity in success for applicants who identify as Asian* compared with those who identify as white.

The differences between the White and Asian results are striking. The difference between the success rates (33.8% versus 18.4%) is statistically significant with a p value of 0.006. The difference between the all-applications success rates (29.4% versus 13.2%) is also statistically significant with a p value of 0.0008. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is statistically significant with p = 0.007.
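If you want to run this sort of check yourself on FOIA'd numbers, the comparison is just a 2x2 table of funded versus reviewed-but-not-funded counts. Here is a minimal sketch in Python using Fisher's exact test; the counts are made-up placeholders, not the actual MIRA data, and I can't say from the excerpt whether this is exactly the test DataHound ran.

```python
# Sketch of a 2x2 success-rate comparison (hypothetical counts, NOT the MIRA data).
from scipy.stats import fisher_exact

# rows = White, Asian; columns = funded, reviewed-but-not-funded
table = [[50, 98],
         [12, 53]]

odds_ratio, p_value = fisher_exact(table)
for label, (funded, rejected) in zip(("White", "Asian"), table):
    print(f"{label}: success rate {funded / (funded + rejected):.1%}")
print(f"Fisher's exact p = {p_value:.3f}")
```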

There was also a potential sign of a disparity for applicants who identify as female versus male.

Male: Success rate = 28.9%, Probability of administrative rejection = 21.0%, All applications success rate = 22.8%

Female: Success rate = 23.2%, Probability of administrative rejection = 21.1%, All applications success rate = 18.3%

Although these results are not statistically significant, the first two parameters trend in favor of males over females. If these percentages persisted in larger sample sizes, they could become significant.

Same old, same old. Right? No matter what aspect of the NIH grant award we are talking about, men and white people always do better than women and non-white people.

The man-bites-dog part of the tale involves what NIGMS published on their blog about this.

Basson, Preuss and Lorsch report, in a Feedback Loop blog entry dated 9/30/2016, that:

One step in this effort is to make sure that existing skews in the system are not exacerbated during the MIRA selection process. To assess this, we compared the gender, race/ethnicity and age of those MIRA applicants who received an award with those of the applicants who did not receive an award
...
We did not observe any significant differences in the gender or race/ethnicity distributions of the MIRA grantees as compared to the MIRA applicants who did not receive an award. Both groups were roughly 25% female and included ≤10% of underrepresented racial/ethnic groups. These proportions were also not significantly different from those of the new and early stage R01 grantees. Thus although the MIRA selection process did not yet enhance these aspects of the diversity of the awardee pool relative to the other groups of grantees, it also did not exacerbate the existing skewed distribution.

Hard to reconcile with DataHound's report, which comes from data requested under FOIA, so I presume it is accurate. Oh, and despite small numbers of "Others"*, DataHound also noted:

The differences between the White and Other category results are less pronounced but also favored White applicants. The difference between the success rates (33.8% versus 21.1%) is not statistically significant, although it is close, with a p value of 0.066. The difference between the all-applications success rates (29.4% versus 16.2%) is statistically significant with a p value of 0.004. Finally, the difference between the probabilities of administrative rejection (15.4% versus 28.1%) is not statistically significant with p = 0.14, although the trend favors White applicants.

Not sure how NIGMS will choose to weasel out of being caught in a functional falsehood. Perhaps "did not observe" means "we took a cursory look and decided it was close enough for government work". Perhaps they are relying on the fact that the gender effects were not statistically significant, as DataHound noted. Women PIs were 19 out of 82 (23.2%) of the funded apps and 63 out of 218 (28.9%) of the reviewed-but-rejected apps. This is not the way DataHound calculated success rate, I believe, but because by chance there were 63 female apps reviewed-but-rejected and 63 male apps awarded funding, the math works out to the same percentages.
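If that coincidence is hard to follow in prose, here is a minimal sketch of the arithmetic as I read it, using only the counts given above (19 of 82 funded apps with women PIs; 63 of 218 reviewed-but-rejected apps with women PIs). It is my illustration, not DataHound's or NIGMS's actual calculation.

```python
# Counts from the post: 19 of 82 funded apps had women PIs;
# 63 of 218 reviewed-but-rejected apps had women PIs.
funded_f, funded_total = 19, 82
rejected_f, rejected_total = 63, 218
funded_m = funded_total - funded_f        # 63
rejected_m = rejected_total - rejected_f  # 155

# DataHound-style success rates: funded / (funded + reviewed-but-rejected)
success_f = funded_f / (funded_f + rejected_f)   # 19/82  -> 23.2%
success_m = funded_m / (funded_m + rejected_m)   # 63/218 -> 28.9%

# Share-of-pool percentages: women as a fraction of each outcome group
share_of_funded = funded_f / funded_total        # 19/82  -> 23.2%
share_of_rejected = rejected_f / rejected_total  # 63/218 -> 28.9%

# Because funded men (63) happens to equal rejected women (63), the two
# calculations end up with identical numerators and denominators.
print(f"{success_f:.1%} == {share_of_funded:.1%}; {success_m:.1%} == {share_of_rejected:.1%}")
```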

There appears to be no excuse whatever for the NIGMS team missing the disparity for Asian PIs.

The disparity in the probability of administrative rejection really requires some investigation on the part of NIGMS, because this would appear to be a huge miscommunication, even if we do not know where to place the blame for the breakdown. If I were NIGMS honchodom, I'd be moving mountains to make sure that POs were communicating the goals of the various FOAs fairly and equivalently to every PI who contacted them.

Related Reading.
__
*The small number of applications for this program (403 were submitted, per DataHound's first post) means that there were insufficient numbers of applicants from other racial/ethnic categories to get much in the way of specific numbers. The NIH has rules (or possibly these are general FOIA rules) about reporting on cells that contain too few PIs...something about being able to identify them too directly.

19 responses so far

Responding to Targeted NIH Grant Funding Opportunity Announcements

Sep 29 2016 Published by under Careerism, Grant Review, NIH, NIH Careerism

The NIH FOAs come in many flavors of specificity. Some, usually Program Announcements, are very broad and appear to permit a wide range of applications to fit within them. My favorite example of this is NIDA's "Neuroscience Research on Drug Abuse" PA.

They also come in highly specific varieties, generally as RFAs.

Targeted FOAs are my topic for the day because they can be frustrating in the extreme. No matter how finely described for the type, these FOAs are inevitably too broad to let each and every interested PI know exactly how to craft her application. Or, more importantly, whether to bother. There is always a scientific contact, a Program Officer, listed, so the first thing to do is email or call this person. This can also be frustrating. Sometimes one gets great advice, sometimes it is perplexing.

As always, I can only offer up the way I look at these things.

As an applicant PI facing an FOA that seems vaguely of interest to me, I have several variables at play. First, the fact that Program wrote the FOA in a particular way doesn't mean that they really know what they want. The FOA language may be the product of a committee, or nobody may have thought that a highly specific type of proposal was needed to satisfy whatever goals and motivations existed.

Second, even if they do know what they want in Programville, peer review is always the primary driver. If you can't escape triage, it is highly unlikely that Program will fund your application, even if it fits their intent to a T. So as the applicant PI, I have to consider how peers are likely to interpret the FOA and how they are likely to apply it to my application. It is not impossible that the advice and perspective given to the prospective PI by the contact PO flies rather severely in the face of that PI's best estimate of what is likely to occur during peer review. This leaves a conundrum.

How to best navigate peer review and also serve up a proposal that is attractive to Program, in case they are looking to reach down out of the order of review for a proposal that matches what they want.

Finally, as I mention now and again, there is an advocacy role for the PI when applying for NIH funding. It is part and parcel of the job of the PI to tell Program what they should be funding. By, of course, serving up such a brilliantly argued application that they see that your take on their FOA is the best take. Even if this may not have been their intent in the first place. This also, btw, applies to the study section members. Your job is in part to convince them, not to meet whatever their preconceptions or reading of the FOA might be.

Somehow, the PI has to stew all of these considerations together and come up with a plan for the best possible proposal. Unfortunately, you can miss the mark. Not because your application is necessarily weak or your work doesn't fit the FOA in some objective sense. Merely because you have decided to make choices, gambles and interpretations that have led you in a particular direction, which may very well be the "wrong" direction.

Most severely, you might be rejected without review. This can happen. If you do not meet the PO's idea of being within the necessary scope of what they would ever plan to fund, no matter the score, you could have your application prevented from being routed to the study section.

Alternately, you might get triaged by a panel that just doesn't see it your way. That wonders if you, the idiot PI, were reading the same FOA that they are. It happens.

Finally, you might get a good score and Program may decide to skip over it for lack of responsiveness to their intent. Or you may be in the grey zone and fail to get a pickup because other grants scoring below yours are deemed closer to what they want to fund.

My point for today is that I think this is a necessary error in the system. It is not evidence of a wholesale problem with the NIH FOA approach if you shoot wide to the left. If you fail to really understand the intent of the FOA as written. Or if you come away from your initial chat with the PO with a misguided understanding. Or even if you run into the buzzsaw of a review panel that rebels against the FOA.

Personally, I think you just have to take your chances. Arrive at your best understanding of what the FOA intends and how the POs are going to interpret various proposals. Sure. And craft your application accordingly. But you have to realize that you may be missing the point entirely. You may fail to convince anyone of your brilliant take on the FOA's stated goals. This doesn't mean the system is broken.

So take your shots. Offer up your best interpretation of how to address the goals. And then bear down and find the next FOA and work on that. In case your first shot sails over the crossbar.

__
It always fascinates me how fairly wide-flung experiences with NIH funding sometimes coalesce around the same issue. This particular post was motivated by no fewer than three situations being brought to my attention in the past week. Different ICs, different FOAs, different mechanisms and vastly different topics and IC intentions. But to me, the answers are the same.

12 responses so far

More evidence of the generational screw job in NIH grant award

Sep 02 2016 Published by under NIH, NIH Budgets and Economics, NIH Careerism

ScienceHound has posted a new analysis related to the NIH budget and award policy. He's been beavering away with mathematical models lately that are generally going to be beyond my ability to understand. In a tweet however, he made it pretty clear.

As expanded in his blog post:

The largest difference between the curves occurs at the beginning of the doubling period (1998-2003) where the model predicts a large increase in the number of grants that was not observed. This is due to the fact that NIH initiated a number of larger non–RPG-based programs when substantial new funding was available rather than simply funding more RPGs (although they did this to some extent). For example, in 1998, NIH invested $17 million through the Specialized Center–Cooperative Agreements (U54) mechanism. This grew to $146 million in 1999, $188 million in 2000, $298 million in 2001, $336 million in 2002, and $396 million in 2003. Note that the change each year matters for the number of new and competing grants that can be made because, for a given year, it does not matter whether funds have been previously committed to RPGs or to other mechanisms.

This interval of time, in my view, is right around when the first of the GenXers were getting (or should have been getting) appointed Assistant Professor. Certainly, YHN was appointed in this interval.

Let us recall a couple of graphs. First, this one:

The red trace depicts success rates from 1962 to 2008 for R01 equivalents (R01, R23, R29, R37). Note that they are not broken down by experienced/new investigator status, nor are new applications distinguished from competing continuation applications. The blue line shows the total number of applications reviewed...which may or may not be of interest to you. [update 7/12/12: I forgot to mention that the data in the 60s are listed as "estimated" success rates.]

Ok, ok, not much to see here, right? The 30% success rate was about the same in the doubling period as it was in the 80s. Now view this broken down by noobs and experienced investigators.
[Figure: RPG success rates by fiscal year, broken down by experienced vs. first-time investigators (RPGsuccessbyYear.png; source linked in original post)]

As we know from prior posts, career-stage differences matter a LOT. In the 80s, when the overall success rate was 30%, you can see that newcomers were at about 20% and established investigators were enjoying at least a 17 percentage point advantage. (I think these data also conflate competing continuations with new applications, so there's another important factor buried in the "Experienced" trace.) Nevertheless, since the Experienced/New gap was similar from 1980 to 2006, we can probably assume it held true prior to that interval as well.

Again, first-time applicants had about the same lack of success in the 80s as they did in the early stages of the doubling (ok, actually a few points higher in the 80s). About 20%. Things didn't go severely into the tank for the noobs until the end of the doubling, around 2004. But think of the career arc. A person who started in the 80s with their first grant jumped up to enjoy 30% success rates and a climbing trend. Someone who managed to land a five-year R01 in 2000, conversely, faced steeply declining success rates just when they were ready to get their next grant 4-5 years later.

This is for Research Project Grants (R01, R03, R15, R21, R22, R23, R29, R33, R34, R35, R36, R37, R55, R56, RC1, P01, P42, PN1, U01, U19, UC1) and does not refer to the Centers or U54 that ScienceHound discussed. Putting his analysis and insider explanation (if you don't know, ScienceHound was NIGMS Director from 2003-2010) to work, we can assume that these RPG or R01-equiv success rates would have been much higher during the doubling, save for the choice of NIH not to devote the full largesse to RPGs.

So. Instead of restoring experienced-investigator success rates to where they had been during the early 80s, and instead of finally (finally) doing something about noob-investigator success rates that had resulted in handwringing since literally the start of the NIH (ok, the 60s anyway), the NIH decided to spend money on boondoggles.

The NIH decided to assign a disproportionate share of the doubling to the very best-funded institutions and scientists, using mechanisms that were mostly peer reviewed by...the best-funded scientists from the best-funded institutions. One of the CSR rules, after all, is that apps for a given mechanism should be reviewed mostly by those who have obtained such a mechanism. You have to have an R01 to be on a regular R01-reviewing panel, and P50/P60/P01 are reviewed mostly by those who have been funded by such mechanisms.

One way to look at this is that a lot of the doubling was sequestered from the riff-raff by design.

This is part of the reason that Gen X will never live up to its scientific potential. The full benefit of the doubling was never made available to us in a competitive manner. Large-mech projects under the elite, older generation kept us shadowed. Maybe a couple of us* shared in the Big-Mechanism wealth in minor form, but we were by no means ready to make a play to lead them and get the full benefit. Meantime, our measly R01 applications were being beaten up mercilessly by the established and compared unfavorably to Senior PI apps supported by their multi-R01 and BigMech labs.

The story is not over.

Given that I grew up as a scientist in this era, and given that like most of us I was pretty ignorant of longitudinal funding trends, etc, my perception was that a Big Mech was...expected. As in eventually, we were supposed to get to the point where not just the very tippy-top best of us, but basically anyone with maybe top-25% verve and energy could land a BigMech. Maybe a P01 Program Project, maybe a Center. The Late-Boomers felt it too. I saw several of the late Boomers get into this mode right as the badness struck. They were semi-outraged, let me tell you, when the nearly universal Program Officer response was "We're not funding P01s anymore. We suggest you don't submit one.".

AYFK? For people who were used to hearing POs say "We advise you to revise and resubmit" at the drop of a hat, and who had never been told by a PO not to try (with a half-decent idea), this was quite surprising. Especially when they looked at the lucky ducks who had put their Big Mechs together just a few years before...well, there was a lot of screaming about bias and unfairness at first.

P01s are relatively easy for Program to shut down. As always, YMMV when it comes to NIH matters. But in general, I'd say that P01s tended to be a lot more fluid** than Centers (P50/P60). Once a Big Hitter group got ahold of a Center award, they tended to stay funded. For decades. IME, anyway. Or in my perception, more accurately.

Take a look at the history of Program Projects versus Centers in your field / favorite ICs, Dear Reader, and report back, eh?

Don't get me wrong. There is much to like about Program Projects and Centers. Done right, they can be very good at shepherding the careers of transitioning / new scientists. But they are profoundly undemocratic and tend to consolidate NIH funding in the hands of the few elite of the IC in question. Oftentimes they appear to be less productive than those of us not directly in them would calculate "should" happen for the same expenditure on R01s. Such complaints are both right and wrong, often simultaneously, when it comes to the same Center award. It is something that depends on your perspective and what you value and/or predict as the outcome.

I can think of precisely one GenX Center Director in the stable of my favorite ICs at the moment. No doubt there are more because I don't do exhaustive review and I don't recognize every name to put to a face right off if I were to go RePORTERing. But still. I can rattle off tons of Boomer and pre-Boomer Center Directors.

It goes back to a point I made in a prior post. Gen X scientists were not just severely filtered. Even the ones who managed to transition to faculty appointments were delayed at every step. Funding came harder and later. Real purchasing power was reduced. Publication expectations went up. We were not ready and able to take up the reins of larger efforts to anywhere near the same extent when we approached mid-career. We could not rely upon clockwork schedules of grant renewal. We could not expect that a high percentage of our new proposals would be funded. We did not have as extensive a run of successful individual productivity on which to base a stretch for BigMech science.

And this comes back to a phenomenon ScienceHound identifies. The NIH decided*** to put a disproportionate share of the doubling monies into Centers rather than R01s for the struggling new PIs. This had a very long tail of lasting effects.

__
*I certainly did.

**Note: The P01 is considered an RPG with the R01s, etc, but Centers are not. There is some floofraw about these being "different pots of money" from an appropriation standpoint. They are not directly substitutable in immediate priority, the way I hear it.

***Any NIH insiders that start in on how Congress tied their hands can stop before starting. Appropriations language involved back and forth with NIH, believe me.

18 responses so far

NIH sued for promotion bias against women in the Intramural Research Program

Aug 29 2016 Published by under Intramural Research Programs, NIH, NIH Careerism

via Lenny Bernstein at the Washington Post:

What Bielekova doesn’t have, at age 47, is tenure, the coveted guarantee of recognition, job security and freedom to pursue controversial ideas that is critical to long-term success in an academic career. She was not put forward as a candidate for the second time last year, despite a positive recommendation from a panel of outside experts who reviewed her qualifications.

To me the kicker is this part. NIH intramural is weird in the way they have big-deal lab heads and a lot of career scientists under them who would be standard tenure-rank folks elsewhere. So when the big-deal head dies or retires it is always a little awkward. Do they hand the lab to one of the folks already there? And boot the rest? Or do they spawn off a couple of new jobs? Or find homes for people in other big-deal groups?

Bielekova alleges retaliation and discrimination based on gender after what she describes as a “power struggle” following the retirement of her mentor, who was chief of the neuro-immunology branch. She said male scientists were provided numerous advantages in the aftermath and that she has been harmed by groundless accusations from male colleagues of unprofessional conduct. A male colleague from her branch, she said, was nominated for tenure at the same time that she was held back.

Yep.

Amazingly, Story Landis, former NINDS Director, gave the full-reveal quote:

While tenure awards are supposed to be based largely on merit, it is widely acknowledged that personality conflicts, budget constraints, internal politics and other factors affect them.

“Tenure decisions are complicated, and not just about what you’ve published,” Landis said.

In this, the NIH IRPs are no different than anywhere else, eh? It isn't about objective merits but about the subjective views of your colleagues, when it comes right down to it.

And lets in a whole lot o' bias.

Bielekova ... has filed an Equal Employment Opportunity complaint against her institute’s director and two others ...

'Twill be interesting to watch this play out.

13 responses so far

Great lens to use on your own grants

Aug 26 2016 Published by under Grant Review, NIH, NIH Careerism

If your NIH grant proposal reads like this, it is not going to do well.

9 responses so far

Your Grant in Review: Throwing yourself on the mercy of the study section court

Aug 24 2016 Published by under Careerism, Grant Review, NIH, NIH Careerism, Uncategorized

A question and complaint from commenter musclestumbler on a prior thread introduces the issue.

So much oxygen is sucked up by the R01s, the med schools, etc. that it tends to screw over reviews for the other mechanisms. I look at these rosters, then look at the comments on my proposals, and it's obvious that the idea of doing work without a stable of postdocs and a pool of exploitable Ph.D. students is completely alien and foreign to them.

and extends:

I personally go after R15 and R03 mechanisms because that's all that can be reasonably obtained at my university. ... Postdocs are few and far between. So we run labs with undergrads and Masters students. Given the workload expectations that we have in the classroom as well as the laboratory, the R15 and R03 mechanisms support research at my school. Competing for an R01 is simply not in the cards for the productivity level that we can reasonably pursue...

This isn't simply fatalism, this is actual advice given by multiple program officers and at workshops. These mechanisms are in place to facilitate and foster our research. Unfortunately, these are considered and reviewed by the same panels that review R01s. We are not asking that they create an SEP for these mechanisms - a "little kids table" if you will - but that the panels have people with these similar institutions on them. I consider it a point of pride that my R15 is considered by the same reviewers that see the R01s, and successfully funded as well.

The point is that the overwhelming perception and unfortunate reality is that many, many, many of the panelists have zero concept of the type of workload model under which I am employed. And the SROs have a demonstrably poor track record of encouraging institutional diversity. Sure, my panel is diverse - they have people from a medical school, an Ivy League school, and an endowed research institution on the West Coast. They have Country, and Western!

I noted the CSR webpage on study section selection says:

Unique characteristics of study sections must be factored into selection of members. The breadth of science, the multidisciplinary or interdisciplinary nature of the applications, and the types of applications or grant mechanisms being reviewed play a large role in the selection of appropriate members.

It seems very much the case to me that if R15s are habitually being reviewed in sections without participation of any reviewers from R15-eligible institutions, this is a violation of the spirit of this clause.

I suggested that this person should bring this up with their favorite SROs and see what they have to say. I note that there is now a form for requesting "appropriate expertise" when you submit your NIH grant; it may also be useful to use this to say something about R15-eligible reviewers.

But ultimately we come to the "mercy of the court" aspect of this issue. It is my belief that while yes, the study section is under very serious constraints these days, it is still a human process that occasionally lets real humans make rational decisions. Sometimes, reviewers may go for something that is outside of the norm. Outside of the stereotype of what "has" to be in a proposal of this type. Sometimes, reviewers may be convinced by the peculiarities of a given situation to, gasp, give you a break. So I suggested the following for this person, who had just indicated that his/her R15s do perfectly well in a study section that they think would laugh off their R01 application.

I think this person should try a trimmed-down R01 in this situation. Remember, the R01 is the most flexible in terms of scope - there is no reason you cannot match it to the budget size of any of the other awards. The upside is that it is for up to five years, better than the AREA/R15 (3 y) or R03 (2 y). It is competitively renewable, which may offer advantages. It is an R01, which, as we are discussing in that other thread, may be the key to getting treated like a big kid when it comes to study section empanelment.

The comments from musclestumbler make it sound as if the panels can actually understand the institutional situation, just so long as they are focused on it by the mechanism (R15). The R15 is $100K direct for three years, no? So why not propose an R01 for $100K direct for five years? Or if you, Dear Reader, are operating at an R03 level, ask for $50K direct or $75K direct. And I would suggest that you don't just leave this hidden in the budget; sprinkle wording throughout that refers to this being a go-slow but very inexpensive (compared to full mod) project.

Be very clear about your time commitment (summers only? fine, just make it clear) and the use of undergrads (predict the timeline and research pace) in much the same way you do for an R15, but make the argument for a longer-term, renewable R01. Explain why you need it for the project, why it is justified and why a funded version will be productive, albeit at a reduced pace. See if any reviewers buy it. I would.

Sometimes you have to experiment a little with the NIH system. You'd be surprised how many times it works in ways that are not exactly the stereotypical and formal way things are supposed to work.

27 responses so far

Projected NRSA salary scale for FY2017

NOT-OD-16-131 indicates the projected salary changes for postdoctoral fellows supported under NRSA awards.

Being the visual person that I am...
[Chart: NRSA postdoctoral stipend scale, FY2016 vs. projected FY2017, by years of experience (NRSAFY16-17chart)]

As anticipated, the first two years were elevated to meet the third year of the prior scale (plus a bit) with a much flatter line across the first three years of postdoctoral experience.

What think you, O postdocs and PIs? Is this a fair* response to the Obama overtime rules?

Will we see** institutions (or PIs) where they just extend that shallow slope out for Years 3-7+?

h/t Odyssey and correction of my initial misread from @neuroecology
__
*As a reminder, $47,484 in 2016 dollars equals $39,715 in 2006 dollars, $30,909 in 1996 dollars and $21,590 in 1986 dollars. Also, the NRSA Yr 0 for postdocs was $20,292 for FY1997 and $36,996 for FY2006.
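If you want to redo the constant-dollar comparison in that first footnote, a minimal sketch is below. The CPI-U annual averages I plugged in are approximate and may not be the exact deflator series behind the figures quoted above, so expect the outputs to land close to, but not exactly on, those numbers.

```python
# Rough constant-dollar conversion of the new Year 0 stipend ($47,484).
# CPI-U annual averages are approximate (BLS) and may differ slightly from
# whatever deflator produced the footnote's figures.
cpi = {2016: 240.0, 2006: 201.6, 1996: 156.9, 1986: 109.6}

stipend_2016 = 47484
for year in (2006, 1996, 1986):
    equivalent = stipend_2016 * cpi[year] / cpi[2016]
    print(f"${stipend_2016:,} in 2016 dollars ~ ${equivalent:,.0f} in {year} dollars")
```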

**I bet yes***.

***Will this be the same old jerks who already flatlined postdoc salaries? Or will PIs who used to apply yearly bumps now be in a position where they just flatline, since year 1 has increased so much?

38 responses so far

The R01 still doesn't pay for itself and reviewers are getting worse

Jul 11 2016 Published by under NIH, NIH Careerism, NIH funding

I pointed out some time ago that the full modular R01 grant from the NIH doesn't actually pay for itself.

In the sense that there is a certain expectation of productivity, progress, etc. on the part of study sections and Program that requires more contribution than can be afforded (especially when you put it in terms of 40-hour work weeks) within the budget. Trainees on individual fellowships or training grants, undergrads working for free or at a work-study discount, cross-pollination with other grants in the lab (which often leads to whinging like your comment), pilot awards for small bits, faculty hard-money time...all of these sources of extra effort are frequently poured into a one-R01 project. I think they are, in essence, necessary.

I had some additional thoughts on this recently.

It's getting worse.

Look, it has always been the case that reviewers want to see more in a grant proposal. More controls, usually. Extra groups to really nail down the full breadth of...whatever it is that you are studying. This really cool other line of converging evidence... anything is possible.

All I can reflect is my own experience in getting my proposals reviewed and in reviewing proposals that are somewhat in the same subfields.

What I see is a continuing spiral of both PI offerings and of reviewer demands.

It's inevitable, really. If you see a proposal chock full of nuts that maybe doesn't quite get over the funding line for whatever reason, how can you give a fundable score to a very awesome and tight proposal that is more limited?

Conversely, in the effort to put your best foot forward you, as applicant, are increasingly motivated to throw every possible tool at your disposal into the proposal, hoping to wow the reviewers into submission.

I have reviewed multiple proposals recently that cannot be done. Literally. They cannot be accomplished for the price of the budget proposed. Nobody blinks an eye about this. They might talk about "feasibility" in the sense of scientific outcomes or preliminary data or, occasionally, some perceived deficit of the investigators/environment. But I have not heard a reviewer say "nice, but there is no way this can be accomplished for $250K direct". Years ago people used to crab about "overambitious" proposals, but I can't say I've heard that in forever. In this day and age of tight NIH paylines, the promises of doing it all in one R01 full-modular 5-year interval are escalating.

These grants set a tone, btw. I'm here to tell you that I've seen subfield-related proposals that do seem feasible, money-wise, get nailed because they are too limited in scope. In some cases there is enough study-section continuity involved for me to be certain that this is due to reviewer contamination from the aforementioned chock-full-o-nuts impossible proposals. Yes, some of this is due to SABV, but not all of it. It ranges from "why you no include more co-investigators?" (a subtle spread-the-wealth knock on big labs? maybe) to "You really need to add X, Y and Z to be convincing" (mkay, but... $250K, dude) to "waaah, I just want to see more" (even though they don't really have a reason to list).

Maybe this is just me being stuck in the rut I was trained in. In my formative years, grant review seemed to expect you would propose a set of studies that you could actually accomplish within the time frame and budget proposed. I seem to remember study section members curbing each other with "Dude, the PI can't fit all that stuff into one proposal, back off." I used to see revisions get improved scores when the PI stripped a bloated proposal down to a minimalist streamlined version.

Maybe we are just experiencing a meaningless sea change in grant review to where we propose the sky and nobody cares on competing renewal if we managed to accomplish all of that stuff.

38 responses so far

Power in the NIH review trenches

Jun 18 2016 Published by under Fixing the NIH, NIH, NIH Careerism

That extensive quote from a black PI who had participated in the ECR program is sticking with me.

Insider status isn't binary, of course. It is very fluid within the grant-funded science game. There are various spectra along multiple dimensions.

But make no mistake, it is real. And Insider status is advantageous. It can be make-or-break crucial to a career at many stages.

I'm thinking about the benefits of being a full reviewer with occasional/repeated ad hoc status or full membership.

One of those benefits is that other reviewers in SEPs or closely related panels are less likely to mess with you.

Less likely.

It isn't any sort of quid pro quo guarantee. Of course not. But I guarantee that a reviewer who thinks this PI might be reviewing her own proposal in the near future has a bias. A review cant. An alerting response. Whatever.

It is different. And, I would submit, generally to the favor of the applicant that possesses this Mutually Assured Destruction power.

The Ginther finding arose from a thousand cuts, I argue. This is possibly one of them.

3 responses so far
