Have you ever been reading a scientific paper and thought "Gee, they really should have cited us here"?
I want to get to the point in my career where other people write the majority of the papers & I just revise them & make sure they are right
— Sciencegurl (@sciencegurlz0) September 29, 2016
The NIH FOAs come in many flavors of specificity. Some, usually Program Announcements, are very broad and appear to permit a wide range of applications to fit within them. My favorite example of this is NIDA's "Neuroscience Research on Drug Abuse" PA.
They also come in highly specific varieties, generally as RFAs.
The targeted FOA is my topic for the day because these can be frustrating in the extreme. No matter how finely described, these FOAs are inevitably too broad to let each and every interested PI know exactly how to craft her application. Or, more importantly, whether to bother. There is always a scientific contact, a Program Officer, listed, so the first thing to do is email or call this person. This can also be frustrating. Sometimes one gets great advice; sometimes it is perplexing.
As always, I can only offer up the way I look at these things.
As an applicant PI facing an FOA that seems vaguely of interest to me, I have several variables at play. First, although Program may have written the FOA in a particular way, this doesn't mean they really know what they want. The FOA language may be a committee result, or the drafters may simply not have thought that a highly specific type of proposal was necessary to satisfy whatever goals and motivations existed.
Second, even if they do know what they want in Programville, peer review is always the primary driver. If you can't escape triage, it is highly unlikely that Program will fund your application, even if it fits their intent to a T. So as the applicant PI, I have to consider how peers are likely to interpret the FOA and how they are likely to apply it to my application. It is not impossible that the advice and perspective given to the prospective PI by the contact PO fly rather severely in the face of that PI's best estimate of what is likely to occur during peer review. This leaves a conundrum.
How to best navigate peer review and also serve up a proposal that is attractive to Program, in case they are looking to reach down out of the order of review for a proposal that matches what they want.
Finally, as I mention now and again there is an advocacy role for the PI when applying for NIH funding. It is part and parcel of the job of the PI to tell Program what they should be funding. By, of course, serving up such a brilliantly argued application that they see that your take on their FOA is the best take. Even if this may not have been what was their intent in the first place. This also, btw, applies to the study section members. Your job is in part to convince them, not to meet whatever their preconceptions or reading of the FOA might be.
Somehow, the PI has to stew all of these considerations together and come up with a plan for the best possible proposal. Unfortunately, you can miss the mark. Not because your application is necessarily weak or your work doesn't fit the FOA in some objective sense. Merely because you have decided to make choices, gambles and interpretations that have led you in a particular direction, which may very well be the "wrong" direction.
Most severely, you might be rejected without review. This can happen. If you do not meet the PO's idea of being within the necessary scope of what they would ever plan to fund, no matter the score, you could have your application prevented from being routed to the study section.
Alternately, you might get triaged by a panel that just doesn't see it your way. One that wonders if you, the idiot PI, were reading the same FOA they were. It happens.
Finally, you might get a good score and Program may decide to skip over it for lack of responsiveness to their intent. Or you may be in the grey zone and fail to get a pickup because other grants scoring below yours are deemed closer to what they want to fund.
My point for today is that I think this is a necessary error in the system. It is not evidence of a wholesale problem with the NIH FOA approach if you shoot wide left. If you fail to really understand the intent of the FOA as written. Or if you come away from your initial chat with the PO with a misguided understanding. Or even if you run into the buzzsaw of a review panel that rebels against the FOA.
Personally, I think you just have to take your chances. Arrive at your best understanding of what the FOA intends and how the POs are going to interpret various proposals. Sure. And craft your application accordingly. But you have to realize that you may be missing the point entirely. You may fail to convince anyone of your brilliant take on the FOA's stated goals. This doesn't mean the system is broken.
So take your shots. Offer up your best interpretation on how to address the goals. And then bear down and find the next FOA and work on that. In case your first shot sails over the crossbar.
It always fascinates me how fairly wide-flung experiences with NIH funding coalesce around the same issue sometimes. This particular post was motivated by no less than three situations being brought to my attention in the past week. Different ICs, different FOAs, different mechanisms and vastly different topics and IC intentions. But to me, the answers are the same.
WOW. This comment from dsks absolutely nails it to the wall.
The NIH is supposed to be taking on a major component of the risk in scientific research by playing the role of investor; instead, it seems to operate more as a consumer, treating projects like products to be purchased only when complete and deemed sufficiently impactful. In addition to implicitly encouraging investigators to flout rules like that above, this shifts most of the risk onto the shoulders of the investigator, who must use her existing funds to spin the roulette wheel and hope that the projects her lab is engaged in will be both successful and yield interesting answers. If she strikes it lucky, there's a chance of recouping the cost from the NIH. However, if the project is unsuccessful, or successful but produces one of the many not-so-pizzazz-wow answers, the PI's investment is lost, at a potentially considerable cost to her career if she's a new investigator.
Of course one might lessen the charge slightly by observing that it is really the University that is somehow investing in the exploratory work that may eventually become of interest to the buyer. Whether the University then shifts the risk onto the lowly PI is a huge concern, but not inevitable. They could continue to provide seed money, salary, etc to a professor who does not manage to write a funded grant application.
Nevertheless, this is absolutely the right way to look at the ever-growing obligation for highly specific Preliminary Data to support any successful grant application. It is also the way to look at a study section culture motivated in large part by perceived "riskiness" (which underlies much of the failure to reward untried investigators from unknown universities compared with established PIs from coastal elite institutions).
NIH isn't investing in risky science. It is purchasing science once it looks like most of the real risk has been avoided.
I have never seen this so clearly, so thanks to dsks for expressing it.
From the NYT account of the shooting of Dennis Charney:
A former faculty member at the Mount Sinai School of Medicine..., Hengjun Chao, 49, of Tuckahoe, N.Y., was charged with attempted second-degree murder after he allegedly fired a shotgun and hit two men
why? Presumably revenge for:
In October 2002, Mr. Chao joined Mount Sinai as a research assistant professor. He stayed at Mount Sinai until May 2009, when he received a letter of termination from Dr. Charney for “research misconduct,” according to a lawsuit that Mr. Chao filed against the hospital and Dr. Charney, among other parties, in 2010. He went through an appeals process, and was officially terminated in March 2010.
As you might expect, the Retraction Watch blog has some more fascinating information on this case. One notable bit is the fact that ORI declined to pursue charges against Dr. Chao.
The Office of Research Integrity (ORI) decided not to pursue findings of research misconduct, according to material filed in the case and mentioned in a judge’s opinion on whether Chao could claim defamation by Mount Sinai. Part of Chao’s defamation claim was based on a letter from former ORI investigator Alan Price calling Mount Sinai’s investigation report “inadequate, seriously flawed and grossly unfair in dealing with Dr. Chao.”
Interesting! The institution goes to the effort of firing the guy and manages to fight off a countersuit, and ORI still doesn't have enough to go on? Retraction Watch posted the report on the Mount Sinai misconduct investigation [PDF]. It makes the case a little more clear.
To briefly summarize: Dr. Chao first alleged that a postdoc, Dr. Cohn, fabricated research data. An investigation failed to support the charge and Dr. Chao withdrew his complaint. Perhaps (?) as part of that review, Dr. Cohn submitted an allegation that Dr. Chao had directed her to falsify data; this was supported by an email and by third-party testimony from a colleague. Mount Sinai mounted an investigation and interviewed a number of people with Dr. titles, some of whom are co-authors with Dr. Chao according to PubMed.
The case is said to hinge on the credibility of the interviewees. "There was no 'smoking gun' direct evidence... the allegations... represent the classic 'he-said, she-said' dispute". The report notes that only the above-mentioned email trail supports any of the allegations with hard evidence.
Ok, so that might be why ORI declined to pursue the case against Dr. Chao.
The panel found him to be "defensive, remarkably ignorant about the details of his protocol and the specifics of his raw data, and cavalier with his selective memory. ...he made several overbroad and speculative allegations of misconduct against Dr. Cohn without any substantiation"
One witness testified that Dr. Chao had said "[Dr. Cohn] is a young scientist [and] doesn't know how the experiments should come out, and I in my heart know how it should be."
This is kind of a classic sign of a PI who creates a lab culture that encourages data faking and fraud, if you ask me. Skip down to the end for more on this.
There are a number of other allegations of a specific nature. Dropping later timepoints of a study because they were counter to the hypothesis. Publishing data that dropped some of the mice for no apparent reason. Defending low-n (2!) data by saying he was never trained in statistics, but his postdoc mentor contradicted this claim. And finally, the committee decided that Dr. Chao's original complaint filed against Dr. Cohn was a retaliatory action stemming from an ongoing dispute over science, authorship, etc.
The final conclusion in the recommendations section deserves special attention:
"[Dr. Chao] promoted a laboratory culture of misconduct and authoritarianism by rewarding results consistent with his theories and berating his staff if the results were inconsistent with his expectations."
This, my friends, is the final frontier. Every time I see an underling in a lab busted for serial faking, I wonder about this. Sure, any lab can be penetrated by a data-faking sleaze. And it is very hard to run a trusting, collaborative scientific environment and still be 100 percent sure of preventing the committed scofflaws. But... but... I am here to tell you. A lot of data fraud flows from PIs of just exactly this description.
If the PI does it right, their hands are entirely clean. Heck, in some cases they may have no idea whatsoever that they are encouraging their lab to fake data.
But the PI is still the one at fault.
I'd hope that every misconduct investigation against anyone below the PI level looks very hard into the culture that is encouraged and/or perpetrated by the PI of the lab in question.
A question and complaint from commenter musclestumbler on a prior thread introduces the issue.
So much oxygen is sucked up by the R01s, the med schools, etc. that it tends to screw over reviews for the other mechanisms. I look at these rosters, then look at the comments on my proposals, and it's obvious that the idea of doing work without a stable of postdocs and a pool of exploitable Ph.D. students is completely alien and foreign to them.
I personally go after R15 and R03 mechanisms because that's all that can be reasonably obtained at my university. ... Postdocs are few and far between. So we run labs with undergrads and Masters students. Given the workload expectations that we have in the classroom as well as the laboratory, the R15 and R03 mechanisms support research at my school. Competing for an R01 is simply not in the cards for the productivity level that we can reasonably pursue...
This isn't simply fatalism, this is actual advice given by multiple program officers and at workshops. These mechanisms are in place to facilitate and foster our research. Unfortunately, these are considered and reviewed by the same panels that review R01s. We are not asking that they create an SEP for these mechanisms - a "little kids table" if you will - but that the panels have people with these similar institutions on them. I consider it a point of pride that my R15 is considered by the same reviewers that see the R01s, and successfully funded as well.
The point is that the overwhelming perception, and unfortunate reality, is that many, many, many of the panelists have zero concept of the type of workload model under which I am employed. And the SROs have a demonstrably poor track record of encouraging institutional diversity. Sure, my panel is diverse: they have people from a medical school, an Ivy League school, and an endowed research institution on the West Coast. They have Country, and Western!
I noted the CSR webpage on study section selection says:
Unique characteristics of study sections must be factored into selection of members. The breadth of science, the multidisciplinary or interdisciplinary nature of the applications, and the types of applications or grant mechanisms being reviewed play a large role in the selection of appropriate members.
It seems very much the case to me that if R15s are habitually being reviewed in sections without participation of any reviewers from R15-eligible institutions, this is a violation of the spirit of this clause.
I suggested that this person should bring this up with their favorite SROs and see what they have to say. I note that now that there is a form for requesting "appropriate expertise" when you submit your NIH grant, it may also be useful to use this to say something about R15-eligible reviewers.
But ultimately we come to the "mercy of the court" aspect of this issue. It is my belief that while yes, the study section is under very serious constraints these days, it is still a human endeavor, and occasionally real humans make rational decisions. Sometimes reviewers may go for something that is outside the norm, outside the stereotype of what "has" to be in a proposal of this type. Sometimes reviewers may be convinced by the peculiarities of a given situation to, gasp, give you a break. So I suggested the following for this person, who had just indicated that his/her R15s do perfectly well in a study section that they think would laugh off their R01 application.
I think this person should try a trimmed-down R01 in this situation. Remember, the R01 is the most flexible in terms of scope: there is no reason you cannot match it to the budget size of any of the other awards. The upside is that it runs for up to five years, better than the AREA/R15 (3 y) or R03 (2 y). It is competitively renewable, which may offer advantages. And it is an R01, which, as we are discussing in that other thread, may be the key to getting treated like a big kid when it comes to study section empanelment.
The comments from musclestumbler make it sound as if the panels can actually understand the institutional situation, just so long as they are focused on it by the mechanism (R15). The R15 is $100K direct for three years, no? So why not propose an R01 for $100K direct for five years? Or if you, Dear Reader, are operating at an R03 level, ask for $50K or $75K direct. And I would suggest that you don't just leave this hidden in the budget; sprinkle wording throughout that refers to this being a go-slow but very inexpensive (compared to full mod) project.
Be very clear about your time commitment (summers only? fine, just make it clear) and the use of undergrads (predict the timeline and research pace) in much the same way you do for an R15 but make the argument for a longer term, renewable R01. Explain why you need it for the project, why it is justified and why a funded version will be productive, albeit at a reduced pace. See if any reviewers buy it. I would.
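If you want to see the dollars-and-cents version of the argument above, here is a back-of-the-envelope sketch. The $100K/$50K/$75K figures and award durations are the ones discussed in this post; the $250K full-modular annual figure is my assumption for comparison, not an official NIH number.

```python
# Back-of-the-envelope comparison of total direct costs for the
# scenarios discussed above. All dollar figures are illustrative.
def total_direct(annual_direct, years):
    """Total direct costs over the life of an award."""
    return annual_direct * years

r15_total = total_direct(100_000, 3)   # R15-sized budget over 3 years
r01_slim  = total_direct(100_000, 5)   # trimmed-down R01, same annual budget
r01_full  = total_direct(250_000, 5)   # assumed full-modular R01, for contrast

# The trimmed-down R01 asks for well under half of a full-modular R01,
# while buying two more years of support than the R15.
assert r01_slim < r01_full
print(r15_total, r01_slim, r01_full)
```

The point the numbers make is the one in the text: a go-slow R01 is cheap from Program's perspective, and the longer, renewable award is the payoff for the PI.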
Sometimes you have to experiment a little with the NIH system. You'd be surprised how many times it works in ways that are not exactly the stereotypical and formal way things are supposed to work.
The other lesson to be drawn from recent political events and applied to science careers is not to let toxic personalities drive the ship.
Yes this means not giving them control over anything that is really important.
But it also means not letting them control you to the extent you are reacting to them, more than doing your thing.
It applies to grant and paper revisions. It applies to the science you do, how you do it and who you choose to work with.
It means you need to wall the toxic actors off in their own little silo, only dealing with them at need or desire.
The Ramirez Group is practicing open grantsmanship by posting "R01 Style" documents on a website. This is certainly a courageous move and one that is unusual for scientists. It is not so long ago that mid-to-senior level Principal Investigator types were absolutely dismayed to learn that CRISP, the forerunner to RePORTER, would hand over their funded grants' abstracts to anyone who wished to see them.
There are a number of interesting things here to consider. On the face of it, this responds to a plea that I've heard now and again for real actual sample grant materials. Those who are less well-surrounded by grant-writing types can obviously benefit from seeing how the rather dry instructions from NIH translate into actual working documents. Good stuff.
As we move through certain changes put in place by the NIH, even the well experienced folks can benefit from seeing how one person chooses to deal with the Authentication of Resources requirement or some such. Budgeting may be helpful for others. Ditto the Vertebrate Animals section.
There is the chance that this will work as Open Pre-Submission Peer Review for the Ramirez group as well. For example, I might observe that referring to Santa Cruz as the authoritative proof of authentic antibodies may not have the desired effect on all reviewers. This might then allow them to take a different approach to this section of the grant, avoiding the dangers of a reviewer who "heard SC antibodies are crap".
But there are also drawbacks to this type of Open Science. In this case I might note that posting a Vertebrate Animals statement (or certain types of research protocol description) is just begging the AR wackaloons to make your life hell.
But there is another issue here that I think the Readers of this blog might want to dig into.
As I am wont to observe, the chances are high in the empirical sciences that if you have a good idea, someone else has had it as well. And if the ideas are good enough to shape into a grant proposal, someone else might think these thoughts too. And if the resulting application is a plan that will be competitive, well, it will have been shaped into a certain format space by the acquired wisdom that is poured into a grant proposal. So again, you are likely to have company.
Finally, we all know that the current NIH game means that each PI is submitting a LOT of proposals for research to the NIH.
All of this means that it is likely that if you have proposed a 5 year plan of research to the NIH someone else has already, or will soon, propose something that is a lot like it.
This is known.
It is also known that your chances of bringing your ideas to fruition (published papers) are a lot higher if you have grant support than if you do not. The other way to say this is that if you do not happen to get funded for this grant application, the chances that someone else will publish papers related to your shared ideas is higher.
In the broader sense this means that if you do not get the grant, the record will be less likely to credit you for having those ideas and brilliant insights that were key to the proposal.
So what to do? Well, you could always write Medical Hypotheses and review papers, sure. But these can be imprecise. They describe general hypotheses and predictions but....that's about all.
It would be of more credit to you to lay out the way you would actually test those hypotheses, would it not? In all of the brilliant experimental design elegance, key controls and fancy scientific approaches that are respected after the fact as amazing work. Maybe even with a little bit of preliminary evidence that you are on the right track, even if that evidence is far too limited to ever be published.
Enter the Open Grantsmanship ploy.
It is genius.
For two reasons.
First, of course, is pure priority claiming. If someone else gets "your" grant and publishes papers, you get to go around whining that you had the idea first. Sure, many people do this but you will have evidence.
Second, there is the subtle attempt to poison the waters for those other competitors' applications. If you can get enough people in your subfield reading your Open Grant proposals then just maaaaaybe someone on a grant panel will remember this. And when a competing proposal is under review just maaaaaaybe they will say "hey, didn't Ramirez Group propose this? maybe it isn't so unique.". Or maybe they will be predisposed to see that your approach is better and downgrade the proposal that is actually under review* accordingly. Perhaps your thin skin of preliminary data will be helpful in making that other proposal look bad. Etc.
*oh, it happens. I have had review comments on my proposals that seemed weird until I became aware of other grant proposals that I know for certain sure couldn't have been in the same round of review. It becomes clear in some cases that "why didn't you do things this way" comments are because that other proposal did indeed do things that way.
This mantra, provided by all good science supervisor types including my mentors, cannot be repeated too often.
ProTip:Just because you resubmit to a new journal doesn't mean it will be a new reviewer! Fix what was pointed out in prior rejection. @NYAS
— Gretchen Neigh (@GretchenNeigh) April 14, 2016
There are some caveats, of course. Sometimes, for example, the reviewer wants you to temper justifiable interpretive claims or Discussion points that interest you. Changes like that only need to be made in response to a review when the paper has a real chance of acceptance at that journal.
Outrageous claims that are going to be bait for any reviewer? Sure, back those down.
I immediately thought of scientific manuscript review and the not-unusual request to have a revision "thoroughly edited by a native English speaker". My confirmation bias suggests that this is way more common when the first author has an apparently Asian surname.
It would be interesting to see a similar balanced test for scientific writing and review, wouldn't it?
My second thought was... Ginther. Is this not another one of the thousand cuts contributing to African-American PIs' lower success rates and their need to revise proposals extra times? It seems as though it might be.