At one institute ... they wouldn't even report what NRSA scores were fundable, just who got the grants and who didn't...Yeah, that's scientific all right.
Comments like the one above remind me that one of the apparently less-obvious things about the selection of NIH grants for funding is that the study section score is only ONE part of the puzzle. I wrote about one of the other main influences on funding decisions in an entry posted to the old blog on March 5, 2007. It seemed worth discussing again.
Discussion of the abysmal "funding line" among researchers is common these days. There are many related topics worthy of discussion, but one issue that seems to be universal is suspicion over the behavior of Program in funding grants that scored outside of the funding line. First, some definitions. "Program" here means the decision apparatus within individual Institutes (NIH is plural) such as NIDA, NIMH, NCI, NIAID, etc., which ranges from the professional administrative staff (Program Officials) through the Institute Director, advised by a peer group of senior scientists called the Advisory Council ("Council"). There are many ways to define the "funding line," but in most cases people refer to what I call the "hard" line, meaning the percentile score at or below which essentially every grant gets funded.
[We should really refer to the payline, which is the percentile that ICs plan to use as their most conservative cutoff for definite funding in each round. This is based on the pool of applications submitted to a given study section (in most cases R01s only, etc.), which are usually assigned across more than one IC. So an IC saying that it can fund everything that hits a 9%ile score does not necessarily mean the top 9% of grants assigned to that Institute. This is what PP was objecting to, I believe.]
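To see why the distinction matters, here is a toy sketch with entirely invented numbers: because the payline is a study-section percentile, an IC whose assigned pool happens to be stacked with well-scoring applications could see far more than 9% of its pool fall inside a 9%ile payline.

```python
# Toy sketch (all numbers invented): a payline is a study-section
# percentile, so "fund everything at <= 9%ile" need not equal
# "fund the top 9% of applications assigned to this IC".

# Hypothetical pool of one IC's assigned applications, each tagged
# with its percentile within its own study section.
pool = (
    [5] * 30 +   # 30 apps at the 5th percentile of their sections
    [8] * 50 +   # 50 apps at the 8th percentile
    [20] * 120   # 120 apps scoring worse
)

# Count how many of this IC's applications sit inside a 9%ile payline.
within_payline = sum(1 for p in pool if p <= 9)

print(within_payline, len(pool))   # 80 200
print(within_payline / len(pool))  # 0.4 -- 40% of the pool, not 9%
```

Under these made-up numbers, a 9%ile payline would cover 40% of this particular IC's assigned pool; the study-section percentile and the IC-level fraction are simply different quantities.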
There are three general submission dates and three Council meetings per year, which make up a "round" for funding. At present the line is somewhere around the 7-10%ile (Institutes vary, and there are changes each submission round); in recent history, during the budget "doubling," this number trended more toward the 15%ile or higher. Yet the NIH publishes grant award data indicating that success rates are much higher: 20% now, around 30% during the "doubling." So what gives?
[Here is where I stand by the intent and underlying meaning of my message with this post. Even if the study-section-based percentile is not identical to the same percentile of grants assigned to a given IC, I bet it is a decent proxy. It is certainly a qualitative proxy: the whole point here is that one hardly ever hears of grants failing to be funded when they score within a published (or even PO-bandied-about) payline, but one frequently hears of grants getting funded with percentiles worse than those of grants that went unfunded in the same round, at the same Institute.]
Some of this, of course, is that the NIH stats refer to per-year success, meaning that a project which is submitted and then revised two rounds later counts as a single application. But the real reason is the "soft line" behavior of Program in "reaching down" to "pick up" grants that did not make the hard funding percentile. WHAAAATTTT?? How can a grant which scores worse than mine get funded while mine is passed over?
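A back-of-the-envelope sketch (all numbers invented) of the first part of that answer: counting a revised resubmission as the same project raises the apparent success rate above the per-round payline, though not all the way to the published figures — which is where the pickups come in.

```python
# Toy arithmetic (invented numbers): per-round payline vs the
# NIH-style per-year success rate, in which a project and its
# revised resubmission count as one application.

def per_round_payline_rate(funded_per_round, reviewed_per_round):
    """Fraction funded in a single round: the 'hard line' view."""
    return funded_per_round / reviewed_per_round

def per_year_success_rate(funded_projects, distinct_projects):
    """Funded distinct projects over distinct projects for the year."""
    return funded_projects / distinct_projects

# Suppose 1000 applications are reviewed each round and 100 are
# funded: a 10% payline, round by round.
print(per_round_payline_rate(100, 1000))  # 0.1

# Over three rounds, suppose 600 of the 3000 submissions were revised
# resubmissions of earlier ones, leaving 2400 distinct projects; if
# 300 of those projects are eventually funded, the yearly rate rises.
print(per_year_success_rate(300, 2400))   # 0.125
```

So the per-year accounting alone lifts a 10% payline to a 12.5% success rate in this sketch; the remaining gap to a published 20% is the "soft line" territory discussed below.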
Programmatic priorities dictate that something other than the "best possible science" gets funded all the time. An Institute may decide that any of a whole host of issues are underrepresented in its portfolio, for reasons both internally scientific (e.g., Council recommendations, meeting or symposium discussions (Program attends meetings!), influential reviews, etc.) and external (e.g., a big media splash on some issue, Congressional "interest" via inquiry, Congressional mandate, etc.). The Institute may decide that its portfolio is underrepresented with PIs of various gender, ethnic and geographic descriptions, or under/overrepresented with certain grant mechanisms, New Investigators, etc. The Institute may decide that it "has an investment" in a given research program or resource and choose to keep it running. As you can imagine, this really outrages people who fall just off the funding line and don't get their applications "picked up."
Grow up. This is why the Institutes exist. The notion of purely investigator-initiated science is a good one but, much like "democracy," can't be carried to the extreme. Scientists, and the scientific enterprise, exhibit well-discussed conservatism in many ways; see the Nature editorial about Nobel-Prize-destined work being passed over. This is unsurprising given that we are human. We have a tendency to understand best the scientific models and domains that relate to our own work. We have a tendency to stick to these models and domains, particularly as our scientific careers mature. This is natural. But it means that funding science purely according to the priorities of those doing the science leads to a suppression of innovation and novelty. Not to mention coverage of health domains, the core interest of the National Institutes of Health.
With that said, there is a problem with Program's behavior in that it is almost perfectly opaque. There is very little way to determine how many grants have been "picked up" at all. Imprecision in the budgeting/prediction/score-outcome process means that the number of grants funded in perfect line with the priority scores can vary, due to unexpectedly low numbers of high-scoring grants in a round (percentiling is across three rounds), high-scoring grants that happen to meet Program priorities, etc. In any case, Program is very loath to explain its "pick up" reasoning in specific terms, no doubt hoping to avoid lengthy debates, Congressional inquiry and even lawsuits from someone who didn't get funded. On balance this seems silly. If Program is going to assert a priority, do so honestly and forthrightly. Just say: we picked up X number of women PIs and Y number of New Investigators and Z grants between the Mississippi and the Sierra Nevada! And then explain why. If the reason is good enough to use, it is good enough to defend, no?
UPDATE: Apparently NASA-funded scientists have similar issues.
One addendum: I have talked very broadly about "Program" interests. One reason is that, seeing this only from the outside, I have little understanding of how it works. One can pick up things here and there, though. Fairly open is the fact that pickups can originate with lowly line POs, at Advisory Council, or with the Director herself. I've heard mutterings from at least one Council member that they have little actual influence, shrug. Also mutterings (and some fairly convincing evidence from colleagues) that some Directors are very active in the sense of putting general personal priorities into practice by fingering grants.
Less open is the fairly obvious conclusion that ICs are political entities with varying and shifting influences. Some line POs (or, more likely, whole Division/Branch units of several POs) are probably more likely than others to get their pickup proposals approved. Some are more or less likely to engage in such active behavior. I have little doubt that there is a calculus for how hard they can go to bat for a pickup based on the original score: harder to argue for a 240 than a 180, that sort of thing. Rules of thumb trickle out at times, such as "no way we're going to pick up over 200; if you'd only snuck in at a 195...". These are no doubt highly variable over time.
The take-home for me on all of this is my usual pleading for PIs to establish and maintain good relationships with Program staff: if they are going to take a little extra effort on a proposal, putting your winning personality to a name on an app cannot possibly hurt. It is also why I advise paying close attention to what Program (and particularly the Director!) is saying about funding priorities. Is it mercenary to follow funding priorities, to decide which applications to submit based on what ICs want? Sure, but do you want a career or not?