Everyone who is more approving or lenient than you are is an incompetent moron.
Everyone that is harsher or less enthusiastic is a total jackhole.
The NIH has notified us (NOT-OD-15-024) that as of Jan 25, 2015 all grant applications will have to use the new Biosketch format (sample Word docx).
[ UPDATE 12/05/14: The deadline has been pushed back; the new format now applies to applications submitted on or after May 25, 2015 ]
The key change is Section C: Contribution to Science, which replaces the previous list of 15 publications.
C. Contribution to Science
Briefly describe up to five of your most significant contributions to science. For each contribution, indicate the historical background that frames the scientific problem; the central finding(s); the influence of the finding(s) on the progress of science or the application of those finding(s) to health or technology; and your specific role in the described work. For each of these contributions, reference up to four peer-reviewed publications or other non-publication research products (can include audio or video products; patents; data and research materials; databases; educational aids or curricula; instruments or equipment; models; protocols; and software or netware) that are relevant to the described contribution. The description of each contribution should be no longer than one half page including figures and citations. Also provide a URL to a full list of your published work as found in a publicly available digital database such as SciENcv or My Bibliography, which are maintained by the US National Library of Medicine.
The only clear win that I see here is for people who contribute to science in ways that are not captured in the publication record. The new format's explicit list of non-publication products gives that work a home it previously lacked outside of the Personal Statement. I see this as a good move for those who fall into this category.
For the regular old run-of-the-mill Biosketches, I am not certain this addresses any of the limitations of the prior system. And it clearly hurts in a key way.
One danger I see lying ahead is that the now-necessary bragging about significant contributions may trigger 1) arguments over the validity of the claim and 2) ill will about the almost inevitable overshadowing of the other people who also made related contributions. The example biosketch leads with a claim to having "changed the standards of care for addicted older adults". This is precisely the sort of claim that is going to invite argument. There is no way that a broad sweeping change of clinical care rests on the work of one person. No way, no how.
If the Biosketch says "we're one of twenty groups who contributed...", well, this is going to look like you are a replaceable cog. Clearly you can't risk doing that. So you have risks ahead of you in trying to decide what to claim.
The bottom line here is that you are telling reviewers what they are supposed to think about your pubs, whereas previously they simply made their own assumptions. It has upside for the reviewer who is 1) positively disposed toward the application and 2) less familiar with your field but man......it really sets up a fight.
Another thing I notice is the swing of the pendulum. Some time ago, publications were limited to 15, which placed a high premium on customizing the Biosketch to the specific application at hand. This swings back in the opposite direction because it asks for Contribution to Science, not Contribution to the Relevant Subfield. The above-mentioned need to brag about unique awesomeness also shifts the emphasis to the person's entire body of work rather than the work that is most specific to the project at hand. On this factor, I am less certain about the influence on review.
Things that I will be curious to see develop.
GlamourMag- It will be interesting to see how many people say, in essence, that such and such was published in a high JIF journal so therefore it is important.
Citations and Alt-metrics- Will people feel it necessary to defend the claims to a critical contribution by pointing out how many citations their papers have received? I think this likely. Particularly since the "non-publication research products" have no conventional measures of impact, people will almost have to talk about downloads of their software, Internet traffic hits to their databases, etc. So why not do this for publications as well, eh?
Figures- all I can say is "huh"?
Sally Rockey reports on the pilot study they conducted with this new Biosketch format.
While reviewers and investigators had differing reactions to the biosketch, a majority of both groups agreed that the new biosketch was an improvement over the old version. In addition, both groups felt that the new format helped in the review process. Both applicants and reviewers expressed concerns, however, about the suitability of the new format for new investigators, but interestingly, investigators who were 40 years and older were more negative than those below age 40.
So us old folks are more concerned about the effects on the young than are the actual young. This is interesting to me since I'm one who feels some concern about this move being bad for less experienced applicants.
I'll note the first few comments posted to Rockey's blog are not enthusiastic about the pilot data.
In NIH grant review the standing study section approach to peer review sacrifices specific expertise for the sake of consistency of review.
When each person has 10 R01s to review, the odds are high that he or she is not the most specifically qualified person for all 10.
The process often brings in additional panel members to help cover scientific domains on a per-meeting basis but this is only partially effective.
The Special Emphasis Panel can improve on this but mostly it does so because the scope of the applications under review is narrower. Typically the members of an SEP still have to stretch a bit to review some of the assignments.
Specific expertise sounds good but can come at the cost of consistency. Score calibration is a big deal. You should have seen the look of horror on my face at dinner following my first study section when some guy said "I thought I was giving it a really good score...you guys are telling me that wasn't fundable?"
Imagine a study section with a normal sized load of apps in which each reviewer completes only one or two reviews. The expertise would be highly customized on each proposal but there might be less consistency and calibration across applications.
What say you, Dear Reader? How would you prefer to have your grants reviewed?
If asked to pick the top two good things that I discovered about grant review when I first went for a study section stint, I'd have an easy time. The funny thing is that they come from two diametrically opposed directions.
The first amazing good thing about study section is the degree to which three reviewers of differing subdiscipline backgrounds, scientific preferences and orientations agree. Especially in your first few study section meetings there is little that is quite as nerve-wracking as submitting your initial scores and waiting to see if the other two reviewers agreed with you. This is especially the case when you are in the extreme good or bad end of the scoring distribution.
What I usually found was that there was an amazingly good amount of agreement on overall impact / priority score. Even when the apparent sticking points / points of approbation were different across all three reviewers.
I think this is a strong endorsement that the system works.
The second GoodThing I experienced in my initial service on a study section was the fact that anyone could call a grant up off the triage pile for discussion. This seemed to happen very frequently, again in my initial experiences, when there were significantly different scores. In today's scoring parlance, think if one or two reviewers were giving 1s and 2s and the other reviewer was giving a 5. Or vice versa. The point being to consider the cases where some reviewers are voting a triage score and some are voting a "clearly we need to discuss this" score. In the past, these were almost always called up for discussion. Didn't matter if the "good" scores were 2 to 1 or 1 to 2.
Now admittedly I have no CSR-wide statistics. It could very well be that what I experienced was unique to a given study section's culture or was driven by an SRO who really wanted widely disparate scores to be resolved.
My perception is that this no longer happens as often and I think I know why. Naturally, the narrowing paylines may make reviewers simply not care so much. Triage or a 50 score...or even a 40 score. Who cares? Not even close to the payline so let's not waste time, eh? But there is a structural issue of review that has squelched the discussion of disparate preliminary-score proposals.
For some time now, grants have been reviewed in order of priority score, with the best-scoring ones being taken up for discussion first. In prior years, the review order was more randomized with respect to the initial scores. My understanding was that proposals were grouped roughly by the POs assigned to them so that the PO visits to the study section could be as efficient as possible.
My thinking is that when an application was to be called up for review in some random review position throughout the 2-day meeting, people were more likely to do so. Now, when you are knowingly saying "gee, let's tack on a 30-40 min discussion to the end of day 2 when everyone is eager to make an earlier flight home to see their kids"...well, I think there is less willingness to resolve scoring disparity.
I'll note that this change came along with the insertion of individual criterion scores into the summary statement. This permitted applicants to better identify when reviewers disagreed in a significant way. I mean sure, you could always infer differences of opinion from the comments without a number attached but this makes it more salient to the applicant.
Ultimately the reasons for the change don't really matter.
I still think it a worsening of the NIH grant review system if the willingness of review panels to resolve significant differences of opinion has been reduced.
The NIH grant application has a tremendous amount of room for stylistic choice. No, I'm not talking about Georgia font again, nor your points-leaving choice to cite your references with numbers instead of author-date.
Within the dictated structure of Aims, Significance, Innovation, etc, there is a lot of freedom.
Where do I put the Preliminary Data now that there is no defined section? What comes first in the Approach- Aim 1? The Timeline? A bunch of additional rationale/background? Do you start every Aim with a brief Rationale and then list a bunch of Experiments? Which methods are "general" enough to put them at the end of Aim 3?
Do I include Future Directions?
What about discussion of Possible Pitfalls and Alternate Considerations and all that jazz?
Is the "Interpretation" for each Aim supposed to be an extensive treatise on results that you don't even have yet?
In all of this there is one certainty.
Ideally you are submitting multiple applications to a single study section over time. If not that, then you are likely submitting a revised version of an application that was not funded to the same study section that reviewed it in the first place. Study sections tend to have an evolved and transmissible culture that changes only slowly. There is a tendency for review to focus (overfocus, but there you have it) on certain structural expectations, in part as a way to be fair* to all the applications. There is a tendency for the study section to be the most comfortable with certain of these optional, stylistic features of a grant application included in juuuust the way that they expect.
So, and here is the certainty, if a summary statement suggests your application is deficient in one of these stylistic manners just suck it up and change your applications to that particular study section accordingly.
Is a Timeline silly when you've laid out a very simple and time-estimated set of experiments in a linear organization throughout the Aims? Perhaps. Is it idiotic to talk about alternatives when you conduct rapid, vertically ascending eleventy science and everything you propose right now is obsolete by the time Year 2 funds? Likely. Why do you need to lead the reviewers by the hand when your Rationale and experimental descriptions make it clear how the hypothesis will be tested and what it would mean? Because.
So when your summary statement suggests a stylistic variant that you wouldn't otherwise prefer...just do it.
Additional Your Grant in Review posts.
*If the section has beaten up several apps because they did not appropriately discuss the Possible Pitfalls, or include Future Directions, well, they have to do it for all the apps. So the tendency goes anyway.
Again, according to the Peer Review Notes for September 2014, CSR of the NIH says that applications are up by 10%.
“Total numbers of applications going to CSR study sections have surged about 14 percent," said CSR Director Dr. Richard Nakamura. “The NIH Office of Extramural Research reports about a 10 percent increase in research project grant applications across NIH.”
The difference between the two numbers is that "CSR is reviewing a slightly larger portion of NIH applications (79%) now than before"; the balance are reviewed in study sections managed by the ICs themselves.
Why the bump in applications?
The CSR appears to be blaming this slight increase on the revision of the policy regarding resubmitting previously unfunded applications. As you know, if your revised version (A1) of a proposal is not funded, you may now resubmit it as a "new" application, making no mention whatever of the fact it was previously reviewed.
“It’s clear a large part of this increase is due to NIH removing limits on resubmitting the same research idea,” he said. “The new policy was designed to keep alive worthy ideas that would have been funded had the NIH budget kept up with inflation.”
Obviously, success rates will go down since I see very little chance the budget is going to increase any time soon. The only possible bright spot would be if the recent award of BRAIN Initiative largesse frees up the regular funds within the general framework of neuroscience that would otherwise be won by these folks. I am not holding my breath on that one.
This bump is hitting around the time of the beginning of the fiscal year when there is no appropriation from Congress, as usual. So grants submitted in the summer (first possible after the policy change) will be reviewed this fall and sent to Advisory Councils in January. My assumption is that we will still be under a Continuing Resolution and many ICs will be conservative, per their usual practice.
So anyone who has proposals in for the upcoming Council round has a bit of extra stress ahead. Tougher competition at review and uncertainty of funding all the way through Council. Probably start hearing news about scores on the bubble in March if we're lucky.
The really interesting question is whether this is a sustained trend (and does it really have anything to do with the A2asA0 policy shift).
I bet it will be short lived, IF it has anything to do with that change. Maybe just that one round or at best two rounds and we'll have cleared out that initial exuberance, is my prediction.
The Peer Review Notes for September 2014 contains a list of things you should never write when reviewing grants.
Some of them are what we might refer to as Stock Critique type of statements. Meaning that they don't just appear occasionally during review. They are seen constantly. A case in point:
7. “This R21 application does not have pilot data, which should be provided to ensure the success of the project.”
Which CSR answers with:
R21s are exploratory projects to collect pilot data. Preliminary data are not required, although they can be evaluated if provided.
What kind of namby-pamby response is this? They know that the problem with R21s is that reviewers insist they should have preliminary data or, at the least, only give good scores to the applications that have strong preliminary data. They bother to put this up in their monthly notes but do NOTHING that will have any effect. Here's my proposed response: "We have noticed reviewers simply cannot refrain from prioritizing preliminary data on R21s, so we will be forbidding applicants from including it". Feel free to borrow that, Dr. Nakamura.
“This is a fishing expedition.”
It would be better if you said the research plan is exploratory in nature, which may be a great thing to do if there are compelling reasons to explore a specific area. Well-designed exploratory or discovery research can provide a wealth of knowledge.
This is another area of classic stock criticism of the type that may, depending on your viewpoint, interfere with getting the desired result. As indicated by the answer, CSR (and therefore NIH) disagrees with this anti-discovery criticism as a general position. Given how prevalent it is, again, I'd like to see something stronger here instead of an anemic little tut-tut.
One of these is really good and a key reminder.
“The human subject protection section does not spell out the specifics, but they already got the IRB approval, and therefore, it is ok.”
IRB approval is not required at this stage, and it should not be considered to replace evaluation of the protection plans.
And we can put IACUC in there too. Absolutely. There is a two tiered process here which should be independent. Grant reviewers take a whack at the proposed subject protections and then the local IACUC takes a whack at the protocol associated with any funded research activities. It should be a semi-independent process in which neither assumes that the approval from the other side of the review relieves it of responsibility.
Another one is a little odd and may need some discussion.
“This application is not in my area of expertise . . . “
I find that reviewers say this in discussion but have never seen it in a written critique (even during read phase before SRO edits eliminate such statements).
The response is not incorrect...
If you’re assigned an application you feel uncomfortable reviewing, you should tell your Scientific Review Officer as soon as possible before the meeting.
...but I think there is wiggle room here. Sometimes, reviewers are specifying that they are only addressing the application in a particular way. This is OKAY! In my experience it is rare that a given application has three reviewers who are stone cold experts in every single aspect of the proposal. The idea is that they can be primary experts in some part or another. And, interestingly given the recent statements we discussed from Dr. McKnight, it is also okay if someone is going at the application from a generalist perspective as well. So I think for the most part reviewers say this sort of thing as a preamble to boxing off their areas of expertise. Which is important for the other panel members who were not assigned to the application to understand.
A query came in that is best answered by the commentariat before I start stamping around scaring the fish.
I'm a "newbie" heading to study section as an ESR quite soon...
I'd really, really appreciated it if you could do a post on
a) your advice on what to expect and how to ... not put my foot in my mouth
b) what in an ideal world you'd like newbies to achieve as SS members
Reviewing a competing continuation of a longitudinal human subjects study always has a little bit of a whiff of extortion to it. I'm not saying this is intentional but......
The sunk cost fallacy is a monster.
Does "appropriately ambitious" on an ESI grant mean "ambitious enough" (go big) or "easy there, tiger" (don't overreach)?
— Michael Hendricks (@MHendr1cks) August 22, 2014
It is always good to remember that sometimes comments in the written critique are not directed at the applicant.
Technically, of course these comments are directed at Program Staff in an advisory capacity. Not to help the applicant in any way whatsoever- assistance in revising is a side effect.
Still a comment that opposes a Stock Criticism is particularly likely to be there for the consumption of either Program or the other reviewers.
It is meant to preempt the Stock Criticism when the person making the comment likes the grant.