Nobody who is younger than me in the scientific generation sense should ever be manually entering references in manuscripts or grant applications.
We're several rounds of grant submission/review past the NIH's demand that applications consider Sex As a Biological Variable (SABV). I have reviewed grants from the first round of this obligation through just recently, and a few things are coming into focus. There's still a lot of wiggle and uncertainty, but some patterns are emerging in my domains of grants that include vertebrate animals (mostly rodent models).
1) It is unwise to ignore SABV.
2) Inclusion of both sexes has to be done judiciously. If you put a sex comparison in an Aim, or too prominently as a point of hypothesis testing, you are going to get the full blast of sex-comparisons review. You want to avoid that because you will get killed on the usual: power, estrous effects that "must" be there, various caveats about why male and female rats aren't the same (behaviorally, pharmacokinetically, etc.), regardless of what your preliminary data show.
3) The key is to include both sexes and say you will look at the data to see if there appears to be any difference. Then say the full examination will be a future direction or slightly modify the subsequent experiments.
4) Nobody seems to be fully embracing the SABV concept as laid out in the formal pronouncements, i.e., that you run sample sizes that are half males and half females in perpetuity if you don't see a difference. I am not surprised. This is the hardest thing for me to accept personally, and I know for certain sure manuscript reviewers won't go for it either.
Then there comes the biggest categorical split in approach that I have noticed so far.
5a) Some people appear to use a few targeted female-including (yes, the vast majority still propose males as default and females as the SABV-satisfying extra) experiments to check main findings.
5b) The other take is just to basically double everything up and say "we'll run full groups of males and females". This is where it gets entertaining.
I have been talking about the fact that the R01 doesn't pay for itself for some time now.
A full modular, $250K per year NIH grant doesn't actually pay for itself, in the sense that study sections and Program hold a certain expectation of productivity and progress that requires more contribution than the budget can afford (especially when you put it in terms of 40 hr work weeks).
I have reviewed multiple proposals recently that cannot be done. Literally. They cannot be accomplished for the price of the budget proposed. Nobody blinks an eye about this. They might talk about "feasibility" in the sense of scientific outcomes or preliminary data or, occasionally, some perceived deficit of the investigators/environment. But I have not heard a reviewer say "nice but there is no way this can be accomplished for $250K direct".
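To see why "cannot be accomplished for $250K direct" is arithmetic rather than opinion, here is a back-of-the-envelope sketch. Every salary, effort level, and fringe rate below is an illustrative assumption for the sake of the calculation, not a figure from any real budget:

```python
# Hedged sketch: rough personnel math against a full-modular budget.
# All dollar amounts and rates are illustrative assumptions.
direct_costs = 250_000        # full modular direct costs, per year

postdoc = 55_000              # assumed postdoc salary
tech = 45_000                 # assumed technician salary
pi_effort = 0.20 * 150_000    # 20% effort on an assumed PI salary
fringe_rate = 0.30            # assumed fringe benefit rate

# Total personnel cost, salaries plus fringe benefits
personnel = (postdoc + tech + pi_effort) * (1 + fringe_rate)
remaining = direct_costs - personnel

print(f"Personnel: ${personnel:,.0f}")   # → Personnel: $169,000
print(f"Remaining: ${remaining:,.0f}")   # → Remaining: $81,000
```

Under these assumptions, a modest three-person effort already consumes two-thirds of the budget before a single animal, reagent, or piece of equipment is purchased.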
Well, "we're going to duplicate everything in females" as a response to the SABV initiative just administered the equivalent of HGH to this trend. There is approximately zero real-world accounting for this in the majority of grants that slap in the females and, from what I have seen, no comment whatsoever from reviewers on feasibility. We are just entirely ignoring this.
What I am really looking forward to is the review of grants in about 3 years time. At that point we are going to start seeing competing continuation applications where the original promised to address SABV. In a more general sense, any app from a PI who has been funded in the post-SABV-requirement interval will also face a simple question.
Has the PI addressed SABV in his or her work? Have they taken it seriously, conducted the studies (prelim data?) and hopefully published some things (yes, even negative sex-comparisons)?
If not, we should, as reviewers, drop the hammer. No more vague hand wavy stuff like I am seeing in proposals now. The PI had better show some evidence of having tried.
What I predict, however, is more excuse making and more bad faith claims to look at females in the next funding interval.
Please prove me wrong, scientists in my fields of study.
As a reminder, the NIH issued a warning about the upcoming Simplification of the Vertebrate Animals Section (VAS) of NIH Grant Applications and Contract Proposals.
Simplification! Cool, right?
There's a landmine here.
For years the statistical power analysis was something that I included in the General Methods at the end of my Research Strategy section. More recently, a growing insistence on the part of OLAW that a proper Vertebrate Animals Section include the power analysis influenced me to drop it from the Research Strategy. It became a word-for-word duplication, so it seemed worth the risk to regain the page space.
The notice says:
Summary of Changes
The VAS criteria are simplified by the following changes:
A description of veterinary care is no longer required.
Justification for the number of animals has been eliminated.
A description of the method of euthanasia is required only if the method is not consistent with AVMA guidelines.
This means that if I continue with my current strategy, I'm going to start seeing complaints about "where is the power analysis" and "hey buddy, stop trying to evade page limits by putting it in the VAS".
So back to the old way we must go. Leave space for your power analysis, folks.
If you don't know much about doing a power analysis, this website is helpful: http://homepage.stat.uiowa.edu/~rlenth/Power/
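If you want to sanity-check a power calculation without special software, here is a minimal sketch using only the Python standard library and the common normal-approximation formula for a two-sample comparison. The effect size, alpha, and power targets below are illustrative conventions, not recommendations for any particular study:

```python
# Hedged sketch: normal-approximation sample size per group for a
# two-sided, two-sample comparison. Parameter values are illustrative.
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate animals needed per group to detect a given
    standardized effect size (Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = z(power)            # quantile for the desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A "medium" effect (d = 0.5) at alpha = 0.05 and 80% power:
print(round(n_per_group(0.5)))  # → 63
```

Note the SABV-relevant punchline: duplicating this design in both sexes doubles that total, and powering an actual sex-by-treatment interaction test requires considerably more than doubling.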
Bjoern Brembs has posted a lengthy complaint about the errors of fact made by incompetent reviewers of his grant application.
I get it. I really do. I could write a similar penetrating expose of the incompetence of reviewers on at least half of my summary statements.
And I will admit that I probably have these thoughts running through my mind on the first six or seven reads of the summary statements for my proposals.
But I'm telling you. You have to let that stuff eventually roll off you like water off the proverbial duck's back. Believe me*.
Had Reviewer #1 been an expert in the field, they would have recognized that in this publication there are several crucial control experiments missing, both genetic and behavioral, to draw such firm conclusions about the role of FoxP.
These issues are not discussed in the proposal, as we expect the reviewers to be expert peers.
Speaking for the NIH grant system only, you are an idiot if you expect this level of "expert peer" in the assigned reviewers on each and every one of your applications. I am not going to pretend to be an expert on this issue, but even I can suspect that the body of work in this area does not lead each and every person who is "expert" to the same conclusion. And therefore even an expert might disagree with Brembs on what reviewers should "recognize". A less-than-expert is going to be working from a cursory or rapid reading of the related literature or, perhaps, an incomplete understanding from a prior episode of attending to the issue.
As a grant applicant, I'm sorry, but it is your job to make your interpretations clear, particularly if you know there are papers pointing in different directions in the literature.
More 'tude from the Brembster:
For the non-expert, these issues are mentioned both in our own FoxP publication and in more detail in a related blog post.
These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.
These are repeated several times triumphantly as if they are some excellent sick burn. Don't think like this. First, NIH reviewers are not expected to do a lot of outside research reading your papers (or others') to apprehend the critical information needed to appreciate your proposal. Second, NIH reviewers are explicitly cautioned not to follow links to sites controlled by the applicant. DO. NOT. EXPECT. REVIEWERS. TO. READ. YOUR. BLOG! ...or your papers.
With respect to "graduate student level", it will be better for you to keep in mind that many peers who do not work directly in the narrow topic you are proposing to study have essentially a graduate student level acquaintance with your topic. Write your proposal accordingly. Draw the reader through it by the hand.
I know this NIH grant game sucks.
And I feel really pained each time I get email or Twitter messages from one of my Readers (and there are many of you, so this isn't as personal as it may seem to any given Reader) who are desperate to find the sekrit button that will make the grant dollars fall out of the hopper.
I spend soooooo much of my discussion on this blog trying to explain that NOBODY CAN TELL YOU WHERE THE SEKRIT BUTTON IS BECAUSE IT DOESN'T EXIST!!!!!!!!!!!!
Really. I believe this down to the core of my professional being.
Sometimes I think that the problem here is the just-world fallacy at work. It is just so dang difficult to give up on the notion that if you just do your job, the world will be fair. If you do good work, you will eventually get the grant funding to support it. That's what all the people you trained around seemed to experience and you are at least as good as them, better in many cases, so obviously the world owes you the same sort of outcome.
I mean yeah, we all recognize things are terrible with the budget and we expect it to be harder but.....maybe not quite this hard?
I feel it too.
Belief in a just world is really hard to shed.
I've been in need of this reminder myself in the past year or so. This originally went up on the blog 25 September, 2011.
If you've been going through a run of disappointing grant reviews punctuated by nasty Third Reviewer comments, you tend to tense up.
Your next proposals are stiff...and jam-packed with what is supposed to be ammunition to ward off the criticisms you've been receiving lately. Excessive citation of the literature to defend your hypotheses...and your buffer concentrations. Review-paper-level exposition of your logical chain. A kitchen sink of preliminary data. Exhaustive detail of your alternate approaches.
The trouble is, then your grant is wall to wall text and nearly unreadable.
Also, all that nitpicky stuff? Sometimes it is just post hoc justification by reviewers who don't like the whole thing for reasons only tangentially related to the nits they are picking.
So your defensive crouch isn't actually helping. If you hook the reviewer hard with your big picture stuff they will often put up with a lot of seeming StockCritique bait.
I cannot tell you how comforting it is to know that no matter the depths and pedantry of my grant geekery, there is always a certain person to be found digging away furiously below me.
The NIH has recently issued the first round of guidance on inclusion of Sex as a Biological Variable in future NIH research grants. I am completely behind the spirit of the initiative but I have concerns about how well this is going to work in practice. I wrote a post in 2008 that detailed some of the reasons that have brought us to the situation where the Director of the NIH felt he had to coauthor an OpEd on this topic. I believe these issues are still present, will not be magically removed with new instructions to reviewers and need to be faced head-on if the NIH is to make any actual progress on ensuring SABV is considered appropriately going forward.
The post originally appeared December 2, 2008.
The title quote came from one of my early, and highly formative, experiences on study section. In the course of discussing a revised application it emerged that the prior version of the application had included a sex comparison. The PI had chosen to delete that part of the design in the revised application, prompting one of the experienced members of the panel to ask, quite rhetorically, "Why do they always drop the females?"
I was reminded of this when reading over Dr. Isis' excellent post [Update: Original Sb post lost, I think the repost can be found here] on the, shall we say less pernicious, ways that the course of science is slanted toward doing male-based research. Really, go read that post before you continue here, it is a fantastic description.
Thank you. That's the first time I've seen someone address the reasons behind ongoing gender disparities in health research. I still can't say as it thrills me (or you, obviously), but I understand a bit better now.
Did somebody ring?
As I pointed out explicitly at least once ([Update: Original 2007 post]), research funding has a huge role in what science actually gets conducted. Huge. In my book this means that if one feels that an area of science is being systematically overlooked or minimized, one might want to take a close look at the manner by which science is funded and the way by which science careers are sustained as potential avenues for systematic remedy.
There are a couple of ways in which the generalized problems with NIH grant review lead to the rhetorical comment with which I opened the post. One very common StockCritique of NIH grant review is that of an "over ambitious" research plan. As nicely detailed in Isis' post, the inclusion of a sex comparison doubles the groups right off the bat, but even more to the point, it requires the inclusion of various hormonal cycling considerations. This can be as simple as requiring female subjects to be assessed at multiple points of the estrous cycle. It can be considerably more complicated, often requiring gonadectomy (at various developmental timepoints) and hormonal replacement (with dose-response designs, please) including all of the appropriate control groups / observations. Novel hormonal antagonists? Whoops, the model is not "well established" and needs to be "compared to the standard gonadectomy models".
Grant reviewers prefer simplicity. Keep in mind, if you will, that there is always a more fundamental comparison or question at the root of the project, such as "does this drug compound ameliorate cocaine addiction?" So all the gender comparisons, designs and groups need to be multiplied against the cocaine addiction/treatment conditions. Suppose it is one of those cocaine models that requires a month or more of training per group? Who is going to run all those animals? How many operant boxes / hours are available? And at what cost? Trust me, the grant proposal is going to take fire for "scope of the project".
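The multiplication above can be made concrete with a toy calculation. Every factor count here is an illustrative assumption, not a claim about any particular model:

```python
# Hedged sketch: how cells multiply once sex and hormonal-status
# factors are crossed with the core design. All counts are assumed.
sexes = 2                # male, female
hormone_conditions = 4   # e.g. intact, GDX, GDX + low dose, GDX + high dose
drug_conditions = 3      # e.g. vehicle, low cocaine dose, high cocaine dose
n_per_cell = 10          # assumed animals per cell

groups = sexes * hormone_conditions * drug_conditions
total_animals = groups * n_per_cell
print(groups, total_animals)  # → 24 240
```

A "simple" males-only, intact design with the same drug conditions would be 3 groups and 30 animals; the fully crossed sex/hormone version is an eight-fold expansion before a single follow-up experiment is considered.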
Another StockCritique to blame is "feasibility". Two points here, really. First is the question of Preliminary Data: of course, if you have to run more experimental conditions to establish that you might have a meritorious hypothesis, you are less likely to do it with a fixed amount of pilot/startup/leftover money. Better to work on preliminary data for two or three distinct applications over just one if you have the funds. The second aspect has to do with a given PI's experience with the models in question. There is more opportunity to say "The PI has no idea what s/he is doing methodologically" if s/he has no prior background with the experimental conditions, which are almost always the female-related ones. As we all know, it matters little that the hormonal assays or gonadectomy or whatever procedures have been published endlessly if you don't have direct evidence that you can do them. Of course, more latitude is extended to the more-experienced investigator....but then s/he is less likely to jump into gender-comparisons in a sustained way, in contrast to a newly minted PI.
Then there are the various things under grantspersonship. You have limited space in a given type of grant application. The more groups and comparisons, the more you have to squeeze in with respect to basic designs, methods and the interpretation/alternative approaches part. So of course you leave big windows for critiques of "hasn't fully considered...." and "it is not entirely clear how the PI will do..." and "how the hypothesis will be evaluated has not been sufficiently detailed...".
Although research funding plays a huge role in career success, it is only part of the puzzle. Another critical factor is what we consider to be "great" or "exciting" science in our respective fields.
The little people can fill in the details. This is basically the approach of GlamourMagz science. (This is a paraphrase of something the most successful GlamourMagz PI I know actually says.) Cool, fast and hot is not compatible with the metastasizing of experimental conditions that is an inevitable feature of gender-comparison science. Trouble is, this approach tends to trickle down in various guises. Lower (than GlamourMag) impact factor journals sometimes try to upgrade by becoming more NS-like (Hi, J Neuro!). Meticulous science and exacting experimental designs are only respected (if at all) after the fact. Late(r) in someone's career they start getting props on their grant reviews for this. Early? Well the person hasn't yet shown the necessity and profit for the exhaustive designs and instead they just look...unproductive. Like they haven't really shown anything yet.
As we all know, splashy CNS pubs on the CV trump a sustained area of contribution in lower journals six ways to Sunday. This is not to say that nobody will appreciate the meticulous approach; they will. Just that high-IF journal pubs will trump. Always.
So the smart young PI is going to stay away from those messy sex-differences studies. Everything tells her she should. If she does dip a toe, she's more likely to pay a nasty career price.
This is why NIH efforts to promote sex-comparison studies are necessary. Special funding opportunities are the only way to tip the balance even slightly toward the sex-differences side. The lure of the RFA is enough to persuade the experienced PI to write in the female groups, and to convince the new PI that she might just risk it this one time.
My suspicion is that it is not enough. Beyond the simple need to take a stepwise approach to the science as detailed by Isis, the career and funding pressures are irresistible forces.
We spend a fair amount of time talking about grant strategy on this blog. Presumably, this is a reflection of an internal process many of us go through trying to decide how to distribute our grant writing effort so as to maximize our chances of getting funded. After all we have better things to do than to write grants.
So we scrutinize success rates for various ICs, various mechanisms, FOAs, etc as best we are able. We flog RePORTER for evidence of which study sections will be most sympathetic to our proposals and how to cast our applications so as to be attractive. We worry about how to construct our Biosketch and who to include as consultants or collaborators. We obsess over how much preliminary data is enough (and too much*).
This is all well and good and maybe, maybe....perhaps....it helps.
But at some level, you have to follow your gut, too. Even when the odds seem overwhelmingly bad, there are going to be times when dang it, you just feel like this is the right thing to do.
Submitting an R01 on very thin preliminary data because it just doesn't work as an R21 perhaps.
Proposing an R03 scope project even if the relevant study section has only one** of them funded on the RePORTER books.
Submitting your proposal when the PO who will likely be handling it has already told you she hates your Aims***.
Revising that application that has been triaged twice**** and sending it back in as an A2-as-A0 proposal.
I would just advise that you take a balanced approach. Make your riskier attempts, sure, but balance those with some less risky applications too.
I view it as....experimenting.
*Just got a question about presenting too much preliminary data the other day.
**of course you want to make sure there is not a structural issue at work, such as the section stopped reviewing this mechanism two years ago.
***1-2%ile scores have a way of softening the stony cold heart of a Program Officer. Within-payline skips are very, very rare beasts.
****one of my least strategic behaviors may be in revising grants that have been triaged. Not sure I've ever had one funded after initial triage and yet I persist. Less so now than I used to but.....I have a tendency. Hard headed and stupid, maybe.
It is one of the most perplexing things of my career and I still don't completely understand why this is the case. But it is important for PIs, especially those who have not yet experienced study section, to understand a simple fact of life.
The NIH Program Officers do not completely understand what contributes to the review and scoring of your grant application.
My examples are legion and I have mentioned some of them in prior blog posts over the years.
The advice from a PO that PIs (such as myself) just needed to "write better grants" when I was already through a stint on study section and had read many, many crappy and yet funded grants from more established investigators.
The observation that transitioning investigators "shouldn't take that job" because it was soft money and K grants were figuring heavily in the person's transition/launch plans.
Apparently honest wonder that reviewers do not read their precious Program Announcements and automatically award excellent scores to applications just because they align with the goals of the PA.
Ignorance of the revision queuing that was particularly endemic during the early part of my career (and pretend? ignorance that limiting applications to one revision round made no functional difference in this).
The "sudden discovery" that all of the New Investigator grants during the checkbox era were going to well-established investigators who simply happened not to have NIH funding before, instead of boosting the young / recently appointed investigators.
An almost comically naive belief that study section outcome for grants really is an unbiased reflection of grant merit.
I could go on.
The reason this is so perplexing to me is that this is their job. POs [eta: used to] sit in on study section meetings or listen in on the phone. At least three times a year but probably more often given various special emphasis panels and the assignment of grants that might be reviewed in any of several study sections. They even take notes and are supposed to give feedback to the applicant with respect to the tenor of the discussion. They read any and all summary statements that they care to. They read (or can read) a nearly dizzying array of successful and unsuccessful applications.
And yet they mostly seem so ignorant of dynamics that were apparent to me after one, two or at the most three study section meetings.
It is weird.
The takeaway message for less NIH-experienced applicants is that the PO doesn't know everything. I'm not saying they are never helpful....they are. Occasionally very helpful. Difference between funded and not-funded helpful. So I fully endorse the usual advice to talk to your POs early and often.
Do not take the PO word for gospel, however. Take it under advisement and integrate it with all of your other sources of information to try to decide how to advance your funding strategy.