Archive for the 'Grant Review' category

Nakamura reports on the ECR program

Jun 17 2016 Published by under Fixing the NIH, Grant Review, NIH, NIH Careerism

If I stroke out today it is all the fault of MorganPhD.

Jeffrey Mervis continues with coverage of the NIH review situation as it pertains to the disparity for African-American PIs identified in 2011 (that's five years and fifteen funding rounds ago, folks) by the Ginther report.

The main focus for this week is on the Early Career Reviewer program. As you will recall, this blog has advocated continually and consistently for the participation of more junior PIs on grant review panels.

The ECR program was created explicitly to address underrepresentation. However, there was immediate opposition which insisted that the ECR program be open to all junior faculty/applicants, regardless of representation in the NIH game.

One-quarter of researchers in ECR's first cohort were from minority groups, he notes. “But as we've gone along, there are fewer underrepresented minorities coming into the pool.”
...
Minorities comprise only 13% of the roughly 5100 researchers accepted into the program (6% African-American and 7% Hispanic), a percentage that roughly matches their current representation on study sections.

Ok, but how have the ECR participants fared?

[Nakamura] said ECR alumni have been more than twice as successful as the typical new investigator in winning an R01 grant.

NIIIIIICE. Except they didn't flog the data as hard as one might hope. This is against the entire NI (or ESI?) population.

The pool of successful ECR alumni includes those who revised their application, sometimes more than once, after getting feedback on a declined proposal. That extra step greatly improves the odds of winning a grant. In contrast, the researchers in the comparison group hadn't gone through the resubmission process.

Not sure if this really means "hadn't" or "hadn't necessarily". The latter makes more sense if they are just comparing to aggregate stats. CSR data miners would have had to work harder to get this isolated to those who hadn't revised yet, and I suspect if they had gone to that effort, they could have presented the ESIs who had at least one revision under their belt. But what about the underrepresented group of PIs that are the focus of all this effort?

It's also hard to interpret the fact that 18% of the successful ECRs were underrepresented minorities because NIH did not report the fraction of minorities among ECR alumni applicants. So it is not clear whether African-Americans participating in the program did any better than the cohort as a whole—suggesting that the program might begin to close the racial gap—or better than a comparable group of minority scientists who were not ECR alumni.

SERIOUSLY Richard Nakamura? You just didn't happen to request your data miners do the most important analysis? How is this even possible?

How on earth can you not be keeping track of applicants to ECR, direct requests from SROs, response rate and subsequent grant and reviewing behavior? It is almost as if you want to look like you are doing something but have no interest in it being informative or in generating actionable intelligence.

Moving along, we get a further insight into Richard Nakamura and his position in this situation.

Nakamura worries that asking minority scientists to play a bigger role in NIH's grantsmaking process could distract them from building up their lab, finding stable funding, and earning tenure. Serving on a study section, he says, means that “those individuals will have less time to write applications. So we need to strike the right balance.”

Paternalistic nonsense. The same thing that Scarpa tried to use to justify his purge of Assistant Professors from study sections. My answer is the same. Let them decide. For themselves. Assistant Professors and underrepresented PIs can decide for themselves if they are ready and able to take up a review opportunity when asked. Don't decide, paternalistically, that you know best and will refrain from asking for their own good, Director Nakamura!

Fascinatingly, Mervis secured an opinion that echoes this. So Nakamura will surely be reading it:

Riggs, the only African-American in his department, thinks the program is too brief to help minority scientists truly become part of the mainstream, and may even exacerbate their sense of being marginalized.

“After I sat on the panel, I realized there was a real network that exists, and I wasn't part of that network,” he says. “My comments as a reviewer weren't taken as seriously. And the people who serve on these panels get really nervous about having people … that they don't know, or who they think are not qualified, or who are not part of the establishment.”

If NIH “wants this to be real,” Riggs suggests having early-career researchers “serve as an ECR and then call them back in 2 years and have them serve a full cycle. I would have loved to do that.”

The person in the best position to decide what is good or bad for their career is the investigator themselves.

This comment also speaks to my objection to the ECR as a baby-intro version of peer review. It isn't necessary. I first participated on study section in my Asst Prof years as a regular ad hoc with a load of about six grants, iirc. Might have been two fewer than the experienced folks had, but it was not a baby-trainee experience in the least. I was treated as a new reviewer, but that was about the extent of it. I thought I was taken seriously and did not feel patronized.

__
Related Reading:
Toni Scarpa to leave CSR

More on one Scientific Society’s Response to the Scarpa Solicitation

Your Grant In Review: Junior Reviewers Are Too Focused on Details

The problem is not with review…

Peer Review: Opinions from our Elders

23 responses so far

Your Grant in Review: Power analysis and the Vertebrate Animals Section

Feb 11 2016 Published by under Grant Review, Grantsmanship, NIH funding

As a reminder, the NIH issued a warning about the upcoming Simplification of the Vertebrate Animals Section of NIH Grant Applications and Contract Proposals.

Simplification! Cool, right?

There's a landmine here.

For years the statistical power analysis was something that I included in the General Methods at the end of my Research Strategy section. More recently, a growing insistence on the part of OLAW that a proper Vertebrate Animals Section include the power analysis influenced me to drop the power analysis from the Research Strategy. It became a word-for-word duplication, so it seemed worth the risk to regain the page space.

The notice says:

Summary of Changes
The VAS criteria are simplified by the following changes:

  • A description of veterinary care is no longer required.

  • Justification for the number of animals has been eliminated.

  • A description of the method of euthanasia is required only if the method is not consistent with AVMA guidelines.

 

This means that if I continue with my current strategy, I'm going to start seeing complaints about "where is the power analysis" and "hey buddy, stop trying to evade page limits by putting it in the VAS".

So back to the old way we must go. Leave space for your power analysis, folks.
__
If you don't know much about doing a power analysis, this website is helpful: http://homepage.stat.uiowa.edu/~rlenth/Power/
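For the uninitiated, here is a minimal sketch of what the standard a priori calculation looks like, using Python's statsmodels. The effect size, alpha and power targets below are illustrative assumptions on my part, not anything NIH or OLAW mandates; plug in values you can justify from your pilot data.

```python
# Minimal a priori power analysis for a two-group comparison.
# All target values here are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group n to detect a large effect (Cohen's d = 0.8) at
# alpha = 0.05 with 80% power, two-tailed independent-samples t-test.
n_per_group = analysis.solve_power(
    effect_size=0.8,
    alpha=0.05,
    power=0.8,
    alternative='two-sided',
)
print(f"animals per group: {n_per_group:.1f}")  # ~25.5, so plan on 26
```

The effect size is the number reviewers will actually poke at, so justify it; the rest is boilerplate arithmetic.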

17 responses so far

Your Grant in Review: Competing Continuation, aka Renewal, Apps

Jan 28 2016 Published by under Grant Review, NIH, NIH Careerism

In the NIH extramural grant funding world, the maximum duration for a project is 5 years. It is possible at the end of a 5-year interval of support to apply to continue that project for another interval. The application for the next interval is, in general, competitively reviewed alongside new project proposals in the relevant study sections.

Comradde PhysioProffe addressed the continuation application at his Ftb joint. NIAID has a FAQ page.

The NIH Success Rate data shows that RPG success rates were 16.8% in 2013 and 18.1% in 2014. Comparable rates for competing continuation RPG applications were 35% in 2013 and 39% in 2014. So you can see why this is important.

I visited these themes before in a prior post. I think I covered most of the issues but in a slightly different way.

Today I want to try to get you folks to talk about prescriptives. How should a competing continuation / renewal NIH grant application be reviewed?

Now in my experience, the continuation application hinges on past productivity in a way that a new application does not. Reviewers are explicitly considering the work that has been conducted under the support of the prior award. The application is supposed to include a list of publications that have resulted from the prior award. The application is supposed to detail a Progress Report that overviews what has been accomplished. So today I will be focusing on review mostly as it pertains to productivity. For reference, Berg's old post on the number of papers per grant dollar is here and shows an average output of 6 papers (IQR about 4-11) per $250K full modular award*.

Quoted bits are from my prior post.

Did you knock our socks off? This could be amazing ELEVENTY type findings, GlamourPub record (whether “expected” for your lab or not), unbelievably revolutionary advances, etc. If you have a record of this, nobody is going to think twice about what your Aims may have been. Probably won’t even give a hoot whether your work is a close match to the funding IC, for that matter.

We should probably separate these for discussion because, after all, how often is a panel going to recognize that a Nobel Prize type of publication has been supported by the award in the past 5 years? So maybe we should consider Glamour publications and amazing advances as two different scenarios. Are these going to push any renewal application over the hurdle for you even if the remaining items below are lacking? Does GlamMag substitute for direct attention to the experiments that were proposed or the Aims that guided the plan? In the extreme case, should we care if the work bears very little on the mission of the IC that has funded it?

Were you productive? Even if you didn’t WOW the world, if you’ve pumped out a respectable number of papers that have some discernible impact on a scientific field, you are in good shape. The more, the merrier. If you look “fabulously productive” and have contributed all kinds of interesting new science on the strength of your award(s), this is going to go down like gangbusters with the review panels. At this level of accomplishment you’d probably be safest at least doing stuff that is vaguely in line with the IC that has funded your work.

Assuming that Glam may not be in the control of most PIs but that pedestrian, workaday scientific output is, should this be a major credit for the continuation application? We don't necessarily have to turn this into an LPU sausage-slicing discussion. Let's assume a quality of paper commensurate with the kind of work that most PIs with competitive applications in that particular study section publish. Meets the subfield standard. How important should raw productivity be?

Were you productive in addressing your overall goals? This is an important distinction from the Specific Aims. It is not necessary, in my view, that you hew closely to Aims first dreamed up 7 years prior to the conclusion of the actual study. But if you have moderate, or disappointing, productivity it is probably next most-helpful that you have published work related to the overall theme of the project. What was the big idea? What was mentioned in the first three sentences of your Specific Aims page? If you have published work related to this broad picture, that’s good.

This one is tricky. The reviewers do not have the prior grant application in front of them. They have the prior Summary Statement and the Abstract as published on RePORTER. It is a decent bet the prior Aims can be determined, but broader themes may or may not come across. So for the most part, if the applicant expects the reviewers to see that productivity has aligned with overarching programmatic goals, she has to tell them what those were. Presumably in the Progress Report part of the continuation application. How would you approach this as a reviewer? Suppose the project wasn't overwhelmingly productive and didn't obviously address all of the Aims, but at least generated some solid work along the general themes. Are you going to be satisfied? Or are you going to downgrade the failure to address each Aim? What if the project had to can an entire Aim or two? Would it matter? Is getting "stuck" in a single Aim a death knell when it comes time to review the next interval of support? And as a related question, what if the same exact Aim has returned with the argument of "We didn't get to this in the past five years but it is still a good idea"? Neutral? Negative? AYFK?

Did you address your original Specific Aims? ...this can be a big obsession of certain reviewers. Not saying it isn’t a good idea to have papers that you can connect clearly to your prior Aims. ... A grant is not a contract. It is quite natural in the course of actual science that you will change your approaches and priorities for experiments. Maybe you’ve been beaten to the punch. Maybe your ongoing studies tell you that your original predictions were bad and you need to go in a whole new direction. Maybe the field as a whole has moved on. ... You might want to squeeze a drop out of a dry well to meet the “addressed Aims” criterion but maybe that money, effort and time would be better spent on a new direction which will lead to three pubs instead of one?

My original formulation of this isn't quite right for today's discussion. The last part is actually more relevant to the preceding point. For today, expand this to a continuation application that shows that the prior work essentially covers exactly what the application proposed. With data either published or included as ready-to-submit Preliminary Data in the renewal. Maybe this was accomplished with only a few papers in pedestrian journals (Lord knows just about every one of my manuscript reviews these days gets at least one critique that calls for anywhere from 2 to 5 Specific Aims' worth of data), so we're not talking about Glam or fabulous productivity. But should addressing all of the Aims and most if not all of the proposed experiments be enough? Is this a credit to a competing continuation application?

It will be unsurprising to you that by this point of my career, I've had competing continuation applications to which just about all of these scenarios apply, save Glam. We've had projects where we absolutely nailed everything we proposed to do. We've had projects get distracted/sidelined off onto a subsection of the proposal that nevertheless generated about the same number and quality of publications that would have otherwise resulted. We've had low productivity intervals of support that addressed all the Aims and ones that merely covered a subset of key themes. We've had projects with reasonably high productivity that have....wandered....from the specifics of the awarded proposal due to things that are happening in the subfield (including getting scooped). We've never been completely blanked on a project with zero related publications to my recollection, but we've had some very low productivity ones (albeit with excellent excuses).

I doubt we've ever had a perfect storm of sky-high productivity, all Aims addressed and the overarching themes satisfied. Certainly I have the review comments to suggest this**.

I have also been present during review panel discussions of continuation applications where reviewers have argued bitterly over the various productivity attributes of a prior interval of support. The "hugely productive" arguments are frequently over an application from a PI who has more than one award and tends to acknowledge more than one of them on each paper. This can also involve debates about so called "real scientific progress" versus papers published. This can be the Aims, the overall theme or just about the sneer of "they don't really do any interesting science".

I have for sure heard from people who are obsessed during review with whether each proposed experiment has been conducted (this was back in the days when summary statements could be fairly exhaustive and revealed what was in the prior application to a broader extent). More generally from reviewers who want to match publications up to the scope of the general scientific terrain described by the prior application.

I've also seen arguments about suggested controls or key additional experiments which were mentioned in the summary statement of the prior review, never addressed in the resulting publications and may still be a criticism of the renewal application.

Final question: Since the reviewers of the competing continuation see the prior summary statement, they see the score and percentile. Does this affect you as a reviewer? Should it? Especially if in your view this particular application should never have been funded at that score and is likely a Programmatic pickup? Do you start steaming under the collar about special ESI paylines or bluehair/graybeard insider PO backslapping?

DISCLAIMER: As per usual, I may have competing continuation applications under current or near-future review by NIH study sections. I am an interested party in how they are reviewed.
__
*This probably speaks to my point about how multi-award PIs attribute more than one grant on each paper. My experience has not been that people in my field view 5 papers published per interval of support (and remember, the renewal application is submitted with the final year of funded support yet to go, if the project is to continue uninterrupted) as expected value. It is certainly not viewed as the kind of fabulous productivity that of course would justify continuing the project. It is more in line with the bare minimum***. Berg's data are per-grant-dollar of course and are not exactly the same as per-grant. But it is a close estimate. This blog post estimates "between 0.6 and 5 published papers per $100k in funding," which works out to roughly 1.5 to 12.5 papers per year of a full-modular ($250K direct costs per year) NIH R01. Big range, and that high number seems nigh on impossible to me without other funding (like free trainee labor or data parasitism).

**and also a pronounced lack of success renewing projects to go with it.

***I do not personally agree. At the point of submitting a competing continuation in year 4 a brand new research program (whether b/c noob PI or very new lab direction) may have really only been rocking for 2 years. And large integrated projects like a big human subjects effort may not even have enrolled all the subjects yet. Breeding, longitudinal development studies, etc - there are many models that can all take a long time to get to the point of publishing data. These considerations play....let us say variably, with reviewers. IME.

15 responses so far

Thought of the Day

Oct 16 2015 Published by under Grant Review, Grantsmanship, NIH, NIH Careerism

I know this NIH grant game sucks.

I do.

And I feel really pained each time I get email or Twitter messages from one of my Readers (and there are many of you, so this isn't as personal as it may seem to any given Reader) who is desperate to find the sekrit button that will make the grant dollars fall out of the hopper.

I spend soooooo much of my discussion on this blog trying to explain that NOBODY CAN TELL YOU WHERE THE SEKRIT BUTTON IS BECAUSE IT DOESN'T EXIST!!!!!!!!!!!!

Really. I believe this down to the core of my professional being.

Sometimes I think that the problem here is the just-world fallacy at work. It is just so dang difficult to give up on the notion that if you just do your job, the world will be fair. If you do good work, you will eventually get the grant funding to support it. That's what all the people you trained around seemed to experience and you are at least as good as them, better in many cases, so obviously the world owes you the same sort of outcome.

I mean yeah, we all recognize things are terrible with the budget and we expect it to be harder but.....maybe not quite this hard?

I feel it too.

Belief in a just world is really hard to shed.

69 responses so far

Grantsmack: The logic of hypothesis testing

Aug 26 2015 Published by under Grant Review, NIH, NIH Careerism

NIH grant review obsesses over testing hypotheses. Everyone knows this.

If there is a Stock Critique that is a more reliable way to kill a grant's chances than "There is no discernible hypothesis under investigation in this fishing expedition", I'd like to know what it is.

The trouble, of course, is that once you've been lured into committing to a hypothesis then your grant can be attacked for whether your hypothesis is likely to be valid or not.

A special case of this is when some aspect of the preliminary data that you have included even dares to suggest that perhaps your hypothesis is wrong.

Here's what bothers me. It is one thing if you have Preliminary Data suggesting some major methodological approach won't work. That is, that your planned experiment cannot result in anything like interpretable data that bears on the ability to falsify the hypothesis. This I would agree is a serious problem for funding a grant.

But any decent research plan will have experiments that converge to provide different levels and aspects of testing for the hypothesis. It shouldn't rest on one single experiment or it is a prediction, not a real hypothesis. Some data may tend to support and some other data may tend to falsify the hypothesis. Generally speaking, in science you are not going to get really clean answers every time for every single experiment. If you do.....well, let's just say those Golden Scientist types have a disproportionate rate of being busted for faking data.

So.

If you have one little bit of Preliminary Data in your NIH Grant application that maybe, perhaps is tending to reject your hypothesis, why is this of any different value than if it had happened to support your hypothesis?

What influence should this have on whether it is a good idea to do the experiments to fully test the hypothesis that has been advanced?

Because that is what grant review should be deciding, correct? Whether it is a good idea to do the experiments. Not whether or not the outcome is likely to be A or B. Because we cannot predict that.

If we could, it wouldn't be science.

45 responses so far

Grantsmack: Overambitious

Aug 25 2015 Published by under Grant Review, NIH, NIH Careerism, NIH funding

If we are entering a period of enthusiasm for "person, not project" style review of NIH grants, then it is time to retire the criticism of "the research plan is overambitious".

Updated:
There was a comment on the Twitters to the effect that this Stock Critique of "overambitious" is a lazy dismissal of an application. This deserves some breakdown, because simply dismissing stock criticisms as "lazy" review will fail to address the real problem at hand.

First, it is always better to think of Stock Critique statements as shorthand rather than lazy.

Using the term "lazy" seems to imply that the applicant thinks that his or her grant application deserves a full and meticulous point-by-point review no matter if the reviewer is inclined to award it a clearly-triagable or a clearly-borderline or clearly-fundable score. Not so.

The primary job of the NIH Grant panel reviewer is most emphatically not to help the PI to funding, nor to improve the science. The reviewer's job is to assist the Program staff of the I or C to which the application has been assigned in deciding whether or not to fund this particular application. Consequently, if the reviewer is able to succinctly communicate the strengths and weaknesses of the application to the other reviewers, and eventually Program staff, this is efficiency, not laziness.

The applicant is not owed a meticulous review.

With this understood, we move on to my second point. The use of a Stock Criticism is an efficient communicative tool when the majority of the review panel agrees that the substance underlying this review consideration is valid. That is, that the notion of a grant application being overambitious is relevant and, most typically, a deficiency in the application. This is, to my understanding, a point of substantial agreement on NIH review panels.

Note: This is entirely orthogonal to whether or not "overambitious" is being applied fairly to a given application. So you need to be clear about what you see as the real problem at hand that needs to be addressed.

Is it the notion of over-ambition being any sort of demerit? Or is your complaint about the idea that your specific plan is in fact over-ambitious?

Or are you concerned that it is unfair if the exact same plan is considered "over-ambitious" for you and "amazingly comprehensive vertically ascending and exciting" when someone else's name is in the PI slot?

Relatedly, are you concerned that this Stock Critique is being applied unjustifiably to certain suspect classes of PI?

Personally, I think "over-ambitious" is a valid critique, given my pronounced affection for the NIH system as project-based, not person-based. In this I am less concerned about whether everything the applicant has poured into this application will actually get done. I trust PIs (and more importantly, I trust the contingencies at work upon a PI) of any stage/age to do interesting science and publish some results. If you like all of it, and would give a favorable score to a subset that does not trigger the Stock Critique, who cares that only a subset will be accomplished*?

The concerning issue is that a reviewer cannot easily tell what is going to get done. And, circling back to the project-based idea, if you cannot determine what will be done as a subset of the overambitious plan, you can't really determine what the project is about. And in my experience, for any given application, there are going to usually be parts that really enthuse you as a reviewer and parts that leave you cold.

So what does that mean in terms of my review being influenced by these considerations? Well, I suppose the more a plan creates an impression of priority and choice points, the less concern I will have. Similarly, the more I am excited by the vast majority of the experiments, the less concern I will have: if only 50% of this is actually going to happen, odds are good it will be stuff I like when I am fired up about 90% of what has been described.

*Now, what about those grants where the whole thing needs to be accomplished or the entire point is lost? Yes, I recognize those exist. Human patient studies where you need to get enough subjects in all the groups to have any shot at any result would be one example. If you just can't collect and run that many subjects within the scope of time/$$ requested, well.....sorry. But these are only a small subset of the applications that trigger the "overambitious" criticism.

42 responses so far

Repost: Don't tense up

Aug 07 2015 Published by under Careerism, Grant Review, Grantsmanship, NIH, NIH Careerism

I've been in need of this reminder myself in the past year or so. This originally went up on the blog 25 September, 2011.


If you've been going through a run of disappointing grant reviews punctuated by nasty Third Reviewer comments, you tend to tense up.

Your next proposals are stiff...and jam-packed with what is supposed to be ammunition to ward off the criticisms you've been receiving lately. Excessive citation of the lit to defend your hypotheses...and buffer concentrations. Review-paper-level exposition of your logical chain. Kitchen sink of preliminary data. Exhaustive detail of your alternate approaches.

The trouble is, then your grant is wall-to-wall text and nearly unreadable.

Also, all that nitpicky stuff? Sometimes it is just post hoc justification by reviewers who don't like the whole thing for reasons only tangentially related to the nits they are picking.

So your defensive crouch isn't actually helping. If you hook the reviewer hard with your big picture stuff they will often put up with a lot of seeming StockCritique bait.

25 responses so far

Thought of the Day

Jun 22 2015 Published by under BlogBlather, Grant Review, Grantsmanship

I cannot tell you how comforting it is to know that no matter the depths and pedantry of my grant geekery, there is always a certain person to be found digging away furiously below me.

9 responses so far

Gender smog in grant review

Jun 19 2015 Published by under Gender, Grant Review, NIH, NIH Careerism

I noticed something really weird and totally unnecessary.

When you are asked to review grants for the NIH you are frequently sent a Word document review template that has the Five Criteria nicely outlined and a box for you to start writing your bullet points. At the header to each section it sometimes includes some of the wording about how you are supposed to approach each criterion.

A recent template I received says under Investigator that one is to describe how the

..investigator’s experience and qualifications make him particularly well-suited for his roles in the project?

Grrr.

12 responses so far

Re-Repost: The funding is the science II, "Why do they always drop the females?"

The NIH has recently issued the first round of guidance on inclusion of Sex as a Biological Variable in future NIH research grants. I am completely behind the spirit of the initiative but I have concerns about how well this is going to work in practice. I wrote a post in 2008 that detailed some of the reasons that have brought us to the situation where the Director of the NIH felt he had to coauthor an OpEd on this topic. I believe these issues are still present, will not be magically removed with new instructions to reviewers and need to be faced head-on if the NIH is to make any actual progress on ensuring SABV is considered appropriately going forward.

The post originally appeared December 2, 2008.


The title quote came from one of my early, and highly formative, experiences on study section. In the course of discussing a revised application it emerged that the prior version of the application had included a sex comparison. The PI had chosen to delete that part of the design in the revised application, prompting one of the experienced members of the panel to ask, quite rhetorically, "Why do they always drop the females?"

I was reminded of this when reading over Dr. Isis' excellent post [Update: Original Sb post lost, I think the repost can be found here] on the, shall we say less pernicious, ways that the course of science is slanted toward doing male-based research. Really, go read that post before you continue here, it is a fantastic description.

What really motivated me, however, was a comment from the always insightful Stephanie Z:

Thank you. That's the first time I've seen someone address the reasons behind ongoing gender disparities in health research. I still can't say as it thrills me (or you, obviously), but I understand a bit better now.

Did somebody ring?

As I pointed out explicitly at least once ([Update: Original 2007 post]), research funding has a huge role in what science actually gets conducted. Huge. In my book this means that if one feels that an area of science is being systematically overlooked or minimized, one might want to take a close look at the manner by which science is funded and the way by which science careers are sustained as potential avenues for systematic remedy.

Funding

There are a couple of ways in which the generalized problems with NIH grant review lead to the rhetorical comment with which I opened the post. One very common StockCritique of NIH grant review is that of an "over ambitious" research plan. As nicely detailed in Isis' post, the inclusion of a sex comparison doubles the groups right off the bat but even more to the point, it requires the inclusion of various hormonal cycling considerations. This can be as simple as requiring female subjects to be assessed at multiple points of an estrous cycle. It can be considerably more complicated, often requiring gonadectomy (at various developmental timepoints) and hormonal replacement (with dose-response designs, please) including all of the appropriate control groups / observations. Novel hormonal antagonists? Whoops, the model is not "well established" and needs to be "compared to the standard gonadectomy models", LOL >sigh<.

[Image: manWomanControlPanel.jpg. Caption: Grant reviewers prefer simplicity.]
Keep in mind, if you will, that there is always a more fundamental comparison or question at the root of the project, such as "does this drug compound ameliorate cocaine addiction?" So all the gender comparisons, designs and groups need to be multiplied against the cocaine addiction/treatment conditions. Suppose it is one of those cocaine models that requires a month or more of training per group? Who is going to run all those animals? How many operant boxes / hours are available, and at what cost? Trust me, the grant proposal is going to take fire for "scope of the project".
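To put toy numbers on that multiplication (all of the condition counts here are my own illustrative assumptions, not anything from a real design or from Isis' post):

```python
# Back-of-the-envelope group-count arithmetic for adding a sex
# comparison to a drug study. All condition counts are assumed.
treatment_conditions = 3   # e.g., vehicle plus two doses of the compound
estrous_phases = 4         # females assessed across the estrous cycle
hormone_arms = 3           # intact, gonadectomy, gonadectomy + replacement

males_only = treatment_conditions
with_sex_comparison = treatment_conditions * (1 + estrous_phases * hormone_arms)

print(males_only, with_sex_comparison)  # 3 groups vs. 39 groups
```

However you tweak the assumed counts, the point stands: the group number explodes multiplicatively, and every one of those groups costs animals, boxes and person-hours.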

Another StockCritique to blame is "feasibility". Two points here really. First is the question of Preliminary Data: of course, if you have to run more experimental conditions to establish that you might have a meritorious hypothesis, you are less likely to do it with a fixed amount of pilot/startup/leftover money. Better to work on preliminary data for two or three distinct applications than just one if you have the funds. The second aspect has to do with a given PI's experience with the models in question. More opportunity to say "The PI has no idea what s/he is doing methodologically" if s/he has no prior background with the experimental conditions, which are almost always the female-related ones. As we all know, it matters little that the hormonal assays or gonadectomy or whatever procedures have been published endlessly if you don't have direct evidence that you can do it. Of course, more latitude is extended to the more-experienced investigator....but then s/he is less likely to jump into gender-comparisons in a sustained way, in contrast to a newly minted PI.

Then there are the various things under grantspersonship. You have limited space in a given type of grant application. The more groups and comparisons, the more you have to squeeze in with respect to basic designs, methods and the interpretation/alternative approaches part. So of course you leave big windows for critiques of "hasn't fully considered...." and "it is not entirely clear how the PI will do..." and "how the hypothesis will be evaluated has not been sufficiently detailed...".

Career

Although research funding plays a huge role in career success, it is only part of the puzzle. Another critical factor is what we consider to be "great" or "exciting" science in our respective fields.

The little people can fill in the details. This is basically the approach of GlamourMagz science. (This is a paraphrase of something the most successful GlamourMagz PI I know actually says.) Cool, fast and hot is not compatible with the metastasizing of experimental conditions that is an inevitable feature of gender-comparison science. Trouble is, this approach tends to trickle down in various guises. Lower (than GlamourMag) impact factor journals sometimes try to upgrade by becoming more NS-like (Hi, J Neuro!). Meticulous science and exacting experimental designs are only respected (if at all) after the fact. Late(r) in someone's career they start getting props on their grant reviews for this. Early? Well the person hasn't yet shown the necessity and profit for the exhaustive designs and instead they just look...unproductive. Like they haven't really shown anything yet.

As we all know, splashy CNS pubs on the CV trump a sustained area of contribution in lower journals six ways to Sunday. This is not to say that nobody will appreciate the meticulous approach; they will. Just to say that high IF journal pubs will trump. Always.

So the smart young PI is going to stay away from those messy sex-differences studies. Everything tells her she should. If she does dip a toe, she's more likely to pay a nasty career price.
This is why NIH efforts to promote sex-comparison studies are necessary. Special funding opportunities are the only way to tip the equation even slightly in favor of the sex-differences side. The lure of the RFA is enough to persuade the experienced PI to write in the female groups. To convince the new PI that she might just risk it this one time.

My suspicion is that it is not enough. Beyond the simple need to take a stepwise approach to the science as detailed by Isis, the career and funding pressures are irresistible forces.

9 responses so far
