Archive for the 'Grant Review' category

Expertise versus consistency

Nov 24 2014 Published by under Grant Review, NIH, NIH funding

In NIH grant review the standing study section approach to peer review sacrifices specific expertise for the sake of consistency of review.

When each person has 10 R01s to review, the odds are high that he or she is not the most specifically qualified person for all 10.

The process often brings in additional panel members to help cover scientific domains on a per-meeting basis but this is only partially effective.

The Special Emphasis Panel can improve on this but mostly it does so because the scope of the applications under review is narrower. Typically the members of an SEP still have to stretch a bit to review some of the assignments.

Specific expertise sounds good but can come at the cost of consistency. Score calibration is a big deal. You should have seen the look of horror on my face at dinner following my first study section when some guy said "I thought I was giving it a really good score...you guys are telling me that wasn't fundable?"

Imagine a study section with a normal sized load of apps in which each reviewer completes only one or two reviews. The expertise would be highly customized on each proposal but there might be less consistency and calibration across applications.

What say you, Dear Reader? How would you prefer to have your grants reviewed?

18 responses so far

Mobility of Disparate Scores in NIH Grant Review

Nov 07 2014 Published by under Fixing the NIH, Grant Review

If asked to pick the top two good things that I discovered about grant review when I first went for a study section stint, I'd have an easy time. The funny thing is that they come from two diametrically opposed directions.

The first amazing good thing about study section is the degree to which three reviewers of differing subdiscipline backgrounds, scientific preferences and orientations agree. Especially in your first few study section meetings there is little that is quite as nerve-wracking as submitting your initial scores and waiting to see if the other two reviewers agreed with you. This is especially the case when you are in the extreme good or bad end of the scoring distribution.

What I usually found was an amazingly good amount of agreement on overall impact / priority score, even when the apparent sticking points / points of approbation were different across all three reviewers.

I think this is a strong endorsement that the system works.

The second GoodThing I experienced in my initial service on a study section was the fact that anyone could call a grant up off the triage pile for discussion. This seemed to happen very frequently, again in my initial experiences, when there were significantly different scores. In today's scoring parlance, think if one or two reviewers were giving 1s and 2s and the other reviewer was giving a 5. Or vice versa. The point being to consider the cases where some reviewers are voting a triage score and some are voting a "clearly we need to discuss this" score. In the past, these were almost always called up for discussion. Didn't matter if the "good" scores were 2 to 1 or 1 to 2.

Now admittedly I have no CSR-wide statistics. It could very well be that what I experienced was unique to a given study section's culture or was driven by an SRO who really wanted widely disparate scores to be resolved.

My perception is that this no longer happens as often and I think I know why. Naturally, the narrowing paylines may make reviewers simply not care so much. Triage or a 50 score...or even a 40 score. Who cares? Not even close to the payline so let's not waste time, eh? But there is a structural issue of review that has squelched the discussion of disparate preliminary-score proposals.

For some time now, grants have been reviewed in the order of priority score, with the best-scoring ones taken up for discussion first. In prior years, the review order was more randomized with respect to the initial scores. My understanding was that the proposals were grouped roughly by the POs assigned to them so that the PO visits to the study section could be as efficient as possible.

My thinking is that when an application was to be called up for review in some random review position throughout the 2-day meeting, people were more likely to do so. Now, when you are knowingly saying "gee, let's tack on a 30-40 min discussion to the end of day 2 when everyone is eager to make an earlier flight home to see their kids"...well, I think there is less willingness to resolve scoring disparity.

I'll note that this change came along with the insertion of individual criterion scores into the summary statement. This permitted applicants to better identify when reviewers disagreed in a significant way. I mean sure, you could always infer differences of opinion from the comments without a number attached but this makes it more salient to the applicant.

Ultimately the reasons for the change don't really matter.

I still think it a worsening of the system of NIH grant review if the willingness of review panels to resolve significant differences of opinion has been reduced.

29 responses so far

Your Grant in Review: Follow the Reviewers' Style Guide

Oct 27 2014 Published by under Grant Review, Grantsmanship, NIH Careerism, NIH funding

The NIH grant application has a tremendous amount of room for stylistic choice. No, I'm not talking about Georgia font again, nor your points-leaving choice to cite your references with numbers instead of author-date.

Within the dictated structure of Aims, Significance, Innovation, etc, there is a lot of freedom.

Where do I put the Preliminary Data now that there is no defined section? What comes first in the Approach- Aim 1? The Timeline? A bunch of additional rationale/background? Do you start every Aim with a brief Rationale and then list a bunch of Experiments? Which methods are "general" enough to put them at the end of Aim 3?

Do I include Future Directions?

What about discussion of Possible Pitfalls and Alternate Considerations and all that jazz?

Is the "Interpretation" for each Aim supposed to be an extensive tretise on results that you don't even have yet?

In all of this there is one certainty.

Ideally you are submitting multiple applications to a single study section over time. If not that, then you are likely submitting a revised version of an application that was not funded to the same study section that reviewed it in the first place. Study sections tend to have an evolved and transmissible culture that changes only slowly. There is a tendency for review to focus (overfocus, but there you have it) on certain structural expectations, in part as a way to be fair* to all the applications. There is a tendency for the study section to be the most comfortable with certain of these optional, stylistic features of a grant application included in juuuust the way that they expect.

So, and here is the certainty, if a summary statement suggests your application is deficient in one of these stylistic matters, just suck it up and change your applications to that particular study section accordingly.

Is a Timeline silly when you've laid out a very simple and time-estimated set of experiments in a linear organization throughout the Aims? Perhaps. Is it idiotic to talk about alternatives when you conduct rapid, vertically ascending eleventy science and everything you propose right now is obsolete by the time Year 2 funds? Likely. Why do you need to lead the reviewers by the hand when your Rationale and experimental descriptions make it clear how the hypothesis will be tested and what it would mean? Because.

So when your summary statement suggests a stylistic variant that you wouldn't otherwise prefer...just do it.
__
Additional Your Grant in Review posts.

*If the section has beaten up several apps because they did not appropriately discuss the Possible Pitfalls, or include Future Directions, well, they have to do it for all the apps. So the tendency goes anyway.

59 responses so far

CSR says that applications are up by 10%

Oct 02 2014 Published by under Grant Review, NIH, NIH Careerism

Again, according to the Peer Review Notes for September 2014, CSR of the NIH says that applications are up by 10%.

“Total numbers of applications going to CSR study sections have surged about 14 percent," said CSR Director Dr. Richard Nakamura. “The NIH Office of Extramural Research reports about a 10 percent increase in research project grant applications across NIH.”

The difference between the two numbers is because "CSR is reviewing a slightly larger portion of NIH applications (79%) now than before"; the balance is reviewed in study sections managed by the ICs themselves.

Why the bump in applications?

The CSR appears to be blaming this slight increase on the revision of the policy regarding resubmitting previously unfunded applications. As you know, if your revised version (A1) of a proposal is not funded, you may now resubmit it as a "new" application, making no mention whatever of the fact it was previously reviewed.

“It’s clear a large part of this increase is due to NIH removing limits on resubmitting the same research idea,” he said. “The new policy was designed to keep alive worthy ideas that would have been funded had the NIH budget kept up with inflation.”

Obviously, success rates will go down since I see very little chance the budget is going to increase any time soon. The only possible bright spot would be if the recent award of BRAIN Initiative largesse frees up the regular funds within the general framework of neuroscience that would otherwise be won by these folks. I am not holding my breath on that one.

This bump is hitting around the time of the beginning of the fiscal year when there is no appropriation from Congress, as usual. So grants submitted in the summer (first possible after the policy change) will be reviewed this fall and sent to Advisory Councils in January. My assumption is that we will still be under a Continuing Resolution and many ICs will be conservative, per their usual practice.

So anyone who has proposals in for the upcoming Council round has a bit of extra stress ahead. Tougher competition at review and uncertainty of funding all the way through Council. Probably start hearing news about scores on the bubble in March if we're lucky.

The really interesting question is whether this is a sustained trend (and does it really have anything to do with the A2asA0 policy shift).

I bet it will be short lived, IF it has anything to do with that change. Maybe just that one round or at best two rounds and we'll have cleared out that initial exuberance, is my prediction.

20 responses so far

Sometimes CSR just kills me

Oct 01 2014 Published by under Fixing the NIH, Grant Review, NIH

The Peer Review Notes for September 2014 contains a list of things you should never write when reviewing grants.

Some of them are what we might refer to as Stock Critique type of statements. Meaning that they don't just appear occasionally during review. They are seen constantly. A case in point:

7. “This R21 application does not have pilot data, which should be provided to ensure the success of the project.”

Which CSR answers with:

R21s are exploratory projects to collect pilot data. Preliminary data are not required, although they can be evaluated if provided.

What kind of namby-pamby response is this? They know that the problem with R21s is that reviewers insist they should have preliminary data or, at the least, only give good scores to the applications that have strong preliminary data. They bother to put this up in their monthly notes but do NOTHING that will have any effect. Here's my proposed response: "We have noticed reviewers simply cannot refrain from prioritizing preliminary data on R21s so we will be forbidding applicants from including it". Feel free to borrow that, Dr. Nakamura.

Another one:

“This is a fishing expedition.”

CSR:

It would be better if you said the research plan is exploratory in nature, which may be a great thing to do if there are compelling reasons to explore a specific area. Well-designed exploratory or discovery research can provide a wealth of knowledge.

This is another area of classic stock criticism of the type that may, depending on your viewpoint, interfere with getting the desired result. As indicated by the answer, CSR (and therefore NIH) disagrees with this anti-discovery criticism as a general position. Given how prevalent it is, again, I'd like to see something stronger here instead of an anemic little tut-tut.

One of these is really good and a key reminder.

“The human subject protection section does not spell out the specifics, but they already got the IRB approval, and therefore, it is ok.”

Response:


IRB approval is not required at this stage, and it should not be considered to replace evaluation of the protection plans.

And we can put IACUC in there too. Absolutely. There is a two-tiered process here which should be independent. Grant reviewers take a whack at the proposed subject protections and then the local IRB or IACUC takes a whack at the protocol associated with any funded research activities. It should be a semi-independent process in which neither assumes that the approval from the other side of the review relieves it of responsibility.

Another one is a little odd and may need some discussion.

“This application is not in my area of expertise . . . “

I find that reviewers say this in discussion but have never seen it in a written critique (even during read phase before SRO edits eliminate such statements).

The response is not incorrect...

If you’re assigned an application you feel uncomfortable reviewing, you should tell your Scientific Review Officer as soon as possible before the meeting.

...but I think there is wiggle room here. Sometimes, reviewers are specifying that they are only addressing the application in a particular way. This is OKAY! In my experience it is rare that a given application has three reviewers who are stone cold experts in every single aspect of the proposal. The idea is that they can be primary experts in some part or another. And, interestingly given the recent statements we discussed from Dr. McKnight, it is also okay if someone is going at the application from a generalist perspective as well. So I think for the most part reviewers say this sort of thing as a preamble to boxing off their areas of expertise. Which is important for the other panel members who were not assigned to the application to understand.

4 responses so far

Ask the DM Blog Braintrust: Advice on First Study Section?

Sep 24 2014 Published by under Grant Review, NIH Careerism

A query came in that is best answered by the commentariat before I start stamping around scaring the fish.

I'm a "newbie" heading to study section as an ESR quite soon...
I'd really, really appreciated it if you could do a post on

a) your advice on what to expect and how to ... not put my foot in my mouth

b) what in an ideal world you'd like newbies to achieve as SS members

Thoughts?

32 responses so far

Your Grant in Review: Longitudinal Human Studies

Sep 22 2014 Published by under Grant Review, Grantsmanship, NIH funding

Man.

Reviewing a competing continuation of a longitudinal human subjects study always has a little bit of a whiff of extortion to it. I'm not saying this is intentional but......

 

The sunk cost fallacy is a monster.

3 responses so far

Your Grant in Review: When they aren't talking to you.

Aug 22 2014 Published by under Grant Review, Grantsmanship, NIH Careerism

It is always good to remember that sometimes comments in the written critique are not directed at the applicant.

Technically, of course these comments are directed at Program Staff in an advisory capacity. Not to help the applicant in any way whatsoever- assistance in revising is a side effect.

Still, a comment that opposes a Stock Criticism is particularly likely to be there for the consumption of either Program or the other reviewers.

It is meant to preempt the Stock Criticism when the person making the comment likes the grant.

12 responses so far

Your Grant in Review Reminder: Research Study Sections First

Aug 22 2014 Published by under Grant Review, Grantsmanship, NIH Careerism, NIH funding

One key to determining the right study section to request is to look on RePORTER for funded grants reviewed in your study sections of interest.

Sometimes this is much more informative than the boilerplate description of the study section listed at CSR.

8 responses so far

Peer Review: Advocates and Detractors Redux

A comment on a recent post from Grumble is a bit of key advice for those seeking funding from the NIH.

It's probably impossible to eliminate all Stock Critique bait from an application. But you need to come close, because if you don't, even a reviewer who likes everything else about your application is going to say to herself, "there's no way I can defend this in front of the committee because the other reviewers are going to bring up all these annoying flaws." So she won't even bother trying. She'll hold her fire and go all out to promote/defend the one application that hits on most cylinders and proposes something she's really excited about.

This is something that I present as an "advocates and detractors" heuristic to improving your grant writing, surely, but it applies to paper writing/revising and general career management as well. I first posted comments on Peer Review: Friends and Enemies in 2007 and reposted in 2009.


The heuristic is this. In situations of scientific evaluation, whether this be manuscript peer-review, grant application review, job application or the tenure decision, one is going to have a set of advocates in favor of one's case and detractors who are against. The usual caveats apply to such a strict polarization. Sometimes you will have no advocates, in which case you are sunk anyway so that case isn't worth discussing. The same reviewer can simultaneously express pro and con views but as we'll discuss this is just a special case.

The next bit in my original phrasing is what Grumble is getting at in the referenced comment.


Give your advocates what they need to go to bat for you.

This is the biggie. In all things you have to give the advocate something to work with. It does not have to be overwhelming evidence, just something. Let's face it, how many times are you really in position in science to overwhelm objections with the stupendous power of your argument and data to the point where the most confirmed critic cries "Uncle". Right. Never happens.

The point here is that you need not put together a perfect grant, nor need you "wait" until you have X, Y or Z bit of Preliminary Data lined up. You just have to come up with something that your advocates can work with. As Grumble was pointing out, if you give your advocate a grant filled with StockCritique bait then this advocate realizes it is a sunk cause and abandons it. Why fight with both hands and legs trussed up like a Thanksgiving turkey?

Let's take some stock critiques as examples.

"Productivity". The goal here is not to somehow rush 8 first author papers into press. Not at all. Just give them one or two more papers, that's enough. Sometimes reiterating the difficulty of the model or the longitudinal nature of the study might be enough.

"Independence of untried PI with NonTenureTrackSoundin' title". Yes, you are still in the BigPIs lab, nothing to be done about that. But emphasize your role in supervising whole projects, running aspects of the program, etc. It doesn't have to be meticulously documented, just state it and show some sort of evidence. Like your string of first and second authorships on the papers from that part of the program.

"Not hypothesis driven". Sure, well sometimes we propose methodological experiments, sometimes the outcome is truly a matter of empirical description and sometimes the results will be useful no matter how it comes out so why bother with some bogus bet on a hypothesis? Because if you state one, this stock critique is de-fanged, it is much easier to argue the merits of a given hypothesis than it is the merits of the lack of a hypothesis.

Instead of railing against the dark of StockCriticism, light a tiny candle. I know. As a struggling newb it is really hard to trust the more-senior colleagues who insist that their experiences on various study sections have shown that reviewers often do go to bat for untried investigators. But....they do. Trust me.

There's a closely related reason to brush up your application to avoid as many obvious pitfalls as possible: it takes ammunition away from your detractors, which makes the advocate's job easier.


Deny your detractors grist for their mill.

Should be simple, but isn't. Particularly when the critique is basically a reviewer trying to tell you to conduct the science the way s/he would if s/he were the PI. (An all too common and inappropriate approach, in my view.) If someone wants you to cut something minor out, for no apparent reason (like, say, when the marginal cost of doing that particular experiment is low), just do it. Add that extra control condition. Respond to all of their critiques with something, even if it is not exactly what the reviewer is suggesting; again, your ultimate audience is the advocate, not the detractor. Don't ignore anything major. This way, they can't say you "didn't respond to critique". They may not like the quality of the response you provide, but arguing about this is tougher in the face of your advocating reviewer.

This may actually be closest to the core of what Grumble was commenting on.

I made some other comments about the fact that a detractor can be converted to an advocate in the original post. The broader point is that an entire study section can be gradually converted. No joke that with enough applications from you, you can often turn the tide. Either because you have argued enough of them (different reviewers might be assigned over time to your many applications) into seeing science your way or because they just think you should be funded for something already. It happens. There is a "getting to know you" factor that comes into play. Guess what? The more credible apps you send to a study section, the more they get to know you.

Ok, there is a final bit for those of you who aren't even faculty yet. Yes, you. Things you do as a graduate student or as a postdoc will come in handy, or hurt you, when it comes time to apply for grants as faculty. This is why I say everyone needs to start thinking about the grant process early. This is why I say you need to start talking with NIH Program staff as a grad student or postdoc.


Plan ahead

Although the examples I use are from the grant review process, the application to paper review and job hunts is obvious with a little thought. This brings me to the use of this heuristic in advance to shape your choices.

Postdocs, for example, often feel they don't have to think about grant writing because they aren't allowed to at present, may never get that job and if they do they can deal with it later. This is an error. The advocate/detractor heuristic suggests that postdocs make choices to expend some effort in a broad range of areas. It suggests that it is a bad idea to gamble on the BIG PAPER approach if this means that you are not going to publish anything else. An advocate on a job search committee can work much more easily with a dearth of Science papers than s/he can with a dearth of any pubs whatsoever!

The heuristic suggests that going to the effort of teaching just one or two courses can pay off- you never know if you'll be seeking a primarily-teaching job after all. Nor when "some evidence of teaching ability" will be the difference between you and the next applicant for a job. Take on that series of time-depleting undergraduate interns in the lab so that you can later describe your supervisory roles in the laboratory.

This latter bit falls under the general category of managing your CV and what it will look like for future purposes.

Despite what we would like to be the case, despite what should be the case, despite what is still the case in some cozy corners of a biomedical science career....let us face some facts.

  • The essential currency for determining your worth and status as a scientist is your list of published, peer reviewed contributions to the scientific literature.
  • The argument over your qualities between advocates and detractors in your job search, promotions, grant review, etc. is going to boil down to pseudo-quantification of your CV at some point.
  • Quantification means analyzing your first author / senior author /contributing author pub numbers. Determining the impact factor of the journals in which you publish. Examining the consistency of your output and looking for (bad) trends. Viewing the citation numbers for your papers.
  • You can argue to some extent for extenuating circumstances, the difficulty of the model, the bad PI, etc but it comes down to this: Nobody Cares.

My suggestion is, if you expect to have a career you had better have a good idea of what the standards are. So do the research. Do compare your CV with those of other scientists. What are the minimum criteria for getting a job / grant / promotion / tenure in your area? What are you going to do about it? What can you do about it?

This echoes a couple of things Odyssey said on the Twitts today; both are true for your subfield stage as well as your University stage of performance.

6 responses so far
