Archive for the 'NIH' category

The proof is in the budgeting

Dec 17 2014 Published by under NIH, NIH Budgets and Economics

When we last discussed Representative Andy Harris it was in the wake of an editorial he published in the NYT. It consisted of a call to put hard targets on the NIH for reducing the average age at first R01, standard Golden Fleece-style carping about frivolous research projects, and complaints about a $700M "tap" on the NIH budget. I speculated that this last was the real target because the "tap" is money appropriated to the NIH that then goes to "program evaluation" and the AHRQ. There is the possibility that this is a covert attack on the ACA ("Obamacare").

The recent appropriation to the NIH passed by the Congress is interesting because it addresses these three issues. According to Jocelyn Kaiser at ScienceInsider:

The report also directs NIH to pay more attention to the age at which new NIH investigators receive their first research grant... but lacks that specific target.

So toothless verbiage, but no more.


Lawmakers also address a perennial concern: that the amount NIH spends on specific diseases doesn’t take into account the burden that disease creates or death rates. The report “urges NIH to ensure research dollars are invested in areas in which American lives may be improved.” It also tells NIH “to prioritize Federal funds for medical research over outreach and education,”

"urges". Again, this is totally impotent. Two strikes on Rep Harris.

One recent concern about NIH’s budget—that each year some money is skimmed off for other Department of Health and Human Services (HHS) agencies—is remedied in the bill. It says that the $700 million that NIH is set to contribute to the “tap” this year will come back as $715 million for the agency.

Well that seems like Rep Harris got a win on the tap, no? And his goal was what again?

For one thing, we need to eliminate a budget gimmick, known as the “tap,” that allows the Department of Health and Human Services to shift money out of the N.I.H. budget into other department efforts. The N.I.H. lost $700 million to the “tap” in 2013 alone. Instead, the money should be placed under the control of the N.I.H. director, with an explicit instruction that it go to young investigators as a supplement to money already being spent. If we don’t force the N.I.H. to spend it on young investigators, history has shown that the agency won’t.

"lost". "supplement to money already being spent". This creates the strong impression that Rep Harris was trying to increase the NIH budget by $700M. And yet. The overall NIH appropriation only increased by $150M.

So in point of fact Rep Harris took three strikes.

Or so it appears.

Of course, if his agenda was to go after those agencies that received their support from the tap, perhaps he didn't strike out after all. We'll have to see if those agencies got all their money in this budget and, more importantly, if they remain this way in subsequent years. It is not impossible that breaking the previous recipients of the "tap" down into individual line items in the budget will allow them to be eliminated one by one.

One thing is for sure, Rep Harris didn't do anything concrete to help out the young investigator issue at the NIH in this budget appropriation.

UPDATE: Actually I screwed this up. If there is no net decrease in the budget and the NIH no longer loses $700M to the tap obligations, I guess this is a net gain. My bad.
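To make the arithmetic in that UPDATE concrete, here is a back-of-the-envelope sketch in Python. The baseline appropriation is a made-up placeholder (it cancels out), and the sketch assumes the $150M increase and the $715M tap return are separate items, which the reporting does not spell out.

    # Hypothetical FY2014 baseline appropriation in $M; the exact value cancels out.
    appropriation_fy2014 = 30000

    spendable_fy2014 = appropriation_fy2014 - 700                # $700M lost to the tap
    spendable_fy2015 = (appropriation_fy2014 + 150) - 700 + 715  # bump, tap out, tap back
    print(spendable_fy2015 - spendable_fy2014)                   # +865 ($M), any baseline

So, measured in spendable funds rather than the headline appropriation, the swing is considerably bigger than the $150M increase alone.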

15 responses so far

McKnight: "Wait whut? There are data? Really? Maybe I'd better cool it..."

Dec 05 2014 Published by under Fixing the NIH, NIH, NIH Careerism

The sidebar to McKnight's column at ASBMB Today this month is hilarious.

Author's Note

I’ve decided it’s prudent to take a break from the debate about the quality of reviewers on National Institutes of Health study sections. The American Society for Biochemistry and Molecular Biology governing council met in mid-November with Richard Nakamura, director of the NIH’s Center for Scientific Review. The discussion was enlightening, and the data presented will inform my future columns on this topic.

HAHAHHAA. Clearly it is news to McKnight that his opinions might actually be on topics for which there are data in support or contradiction? And now he has to sit down and go through actual facts to try to come up with a better argument that study sections today are populated with riff-raff who are incompetent to review science.

Never fear though, he still leaves us with some fodder for additional snickering at his....simple-minded thinking. He would like his readers to answer some poll questions...

The first question is:
Should the quality of the proposed research and researcher be the most important criteria dictating whether an NIH-sponsored grant is funded?

The response item is Yes/No so of course some 99% of the responses are going to be Yes. Right? I mean jeepers, what a stupid question. Particularly without any sense of what he imagines might be a possible alternative to these two considerations as "the most important criteria". Even more hilariously, he has totally conflated the two things that are actual current items of debate (i.e., project versus person) and that tie directly into his two prior columns!

The next question:
The review process used to evaluate NIH grant applications is:

has three possible answers:
essentially perfect with no room for improvement
slightly sub-optimal but impossible to improve
suboptimal with significant room for improvement

Again, simple-minded. Nobody thinks the system is perfect; this is a straw-man argument. I predict that once again he's going to get most people responding on one option, the "suboptimal, room for improvement" one. This is, again, trivial within the discussion space. The hard questions, as you my Readers know full well, relate to the areas of suboptimality and the proposed space in which improvements need to be made.

What is he about with this? Did Nakamura really tell him that the official CSR position is that everything is hunky-dory? That seems very unlikely given the number of initiatives, pilot studies, etc. that they (CSR) have been working through ever since I started paying attention about 7-8 years ago.

Ah well, maybe this is the glimmer of recognition on the part of McKnight that he went off half-cocked without the slightest consideration that perhaps there are actual facts here to be understood first?

17 responses so far

More in "NIH responds to a non-problem by creating a problem"

Dec 05 2014 Published by under Grant Review, Grantsmanship, NIH, NIH Careerism

I can't even imagine what they are thinking.

This Notice informs the applicant community of a modification for how NIH would like applicants to mark changes in their Resubmission applications. NIH has removed the requirement to identify 'substantial scientific changes' in the text of a Resubmission application by 'bracketing, indenting, or change of typography'.

Effective immediately, it is sufficient to outline the changes made to the Resubmission application in the Introduction attachment. The Introduction must include a summary of substantial additions, deletions, and changes to the application. It must also include a response to weaknesses raised in the Summary Statement. The page limit for the Introduction may not exceed one page unless indicated otherwise in the Table of Page Limits.

First of all, "would like" and "removed the requirement" do not align with each other. If the NIH "would like" something, that means this is not just a "we don't care whether you do it or not" situation. So why not make it a mandate?

Next up...WHY?

Finally: How in all that is holy do they really expect the applicant to ("must") summarize "substantial additions, deletions, and changes" and to "include a response to weaknesses" in just one page?

I am starting to suspect Rockey is planning on burning the OER down to the ground before leaving for greener pastures.

18 responses so far

Wait...the new Biosketch is supposed to be an antiGlamour measure? HAHAHHAHHA!!!!!

A tweet from @babs_mph sent me back to an older thread where Rockey introduced the new Biosketch concept. One "Senior investigator" commented:

For those who wonder where this idea came from, please see the commentary by Deputy Director Tabak and Director Collins (Nature 505, 612–613, January 2014) on the issue of the reproducibility of results. One part of the commentary suggests that scientists may be tempted to overstate conclusions in order to get papers published in high profile journals. The commentary adds “NIH is contemplating modifying the format of its ‘biographical sketch’ form, which grant applicants are required to complete, to emphasize the significance of advances resulting from work in which the applicant participated, and to delineate the part played by the applicant. Other organizations such as the Howard Hughes Medical Institute have used this format and found it more revealing of actual contributions to science than the traditional list of unannotated publications.”

Here's Collins and Tabak, 2014 in freely available PMC format. The lead-in to the above-referenced passage is:

Perhaps the most vexed issue is the academic incentive system. It currently overemphasizes publishing in high-profile journals. No doubt worsened by current budgetary woes, this encourages rapid submission of research findings to the detriment of careful replication. To address this, the NIH is contemplating...

Hmmm. So, by changing this, the ability to say something like the following on a grant application:

"Yeah, we got totally scooped out of a Nature paper because we didn't rush some data out before it was ready but look, our much better paper that came out in our society journal 18 mo later was really the seminal discovery, we swear. So even though the entire world gives primary credit to our scoopers, you should give us this grant now."

is supposed to totally alter the dynamics of the "vexed issue" of the academic incentive system.

Right guys. Right.

13 responses so far

The new NIH Biosketch is here

Dec 02 2014 Published by under Grant Review, Grantsmanship, NIH, NIH Careerism, NIH funding

The NIH has notified us (NOT-OD-15-024) that as of Jan 25, 2015 all grant applications will have to use the new Biosketch format (sample Word docx).
[ UPDATE 12/05/14: The deadline has been delayed to apply to applications submitted after May 25, 2015 ]

The key change is Section C: Contribution to Science, which replaces the previous list of 15 publications.

C. Contribution to Science
Briefly describe up to five of your most significant contributions to science. For each contribution, indicate the historical background that frames the scientific problem; the central finding(s); the influence of the finding(s) on the progress of science or the application of those finding(s) to health or technology; and your specific role in the described work. For each of these contributions, reference up to four peer-reviewed publications or other non-publication research products (can include audio or video products; patents; data and research materials; databases; educational aids or curricula; instruments or equipment; models; protocols; and software or netware) that are relevant to the described contribution. The description of each contribution should be no longer than one half page including figures and citations. Also provide a URL to a full list of your published work as found in a publicly available digital database such as SciENcv or My Bibliography, which are maintained by the US National Library of Medicine.

The only clear win that I see here is for people who contribute to science in ways that are not reflected in the publication record. The non-publication research products suggested above cover exactly those contributions, which previously had no place other than the Personal Statement. I see this as a good move for those who fall into this category.

For the regular old run-of-the-mill Biosketches, I am not certain this addresses any of the limitations of the prior system. And it clearly hurts in a key way.

One danger I see lying ahead is that the now-necessary bragging about significant contributions may trigger 1) arguments over the validity of the claim and 2) ill will about the almost inevitable overshadowing of the other people who also made related contributions. The example biosketch leads with a claim to having "changed the standards of care for addicted older adults". This is precisely the sort of claim that is going to be argumentative. There is no way that a broad sweeping change of clinical care rests on the work of one person. No way, no how.

If the Biosketch says "we're one of twenty groups who contributed...", well, this is going to look like you are a replaceable cog. Clearly you can't risk doing that. So you have risks ahead of you in trying to decide what to claim.

The bottom line here is that you are telling reviewers what they are supposed to think about your pubs, whereas previously they simply made their own assumptions. It has upside for the reviewer who is 1) positively disposed toward the application and 2) less familiar with your field but man......it really sets up a fight.

Another thing I notice is the swing of the pendulum. Some time ago, publications were limited to 15, which placed a high premium on customizing the Biosketch to the specific application at hand. This swings back in the opposite direction because it asks for Contribution to Science, not Contribution to the Relevant Subfield. The above-mentioned need to brag about unique awesomeness also shifts the emphasis to the person's entire body of work rather than the work that is most specific to the project at hand. On this factor, I am of less certain opinion about the influence on review.

Things that I will be curious to see develop.

GlamourMag- It will be interesting to see how many people say, in essence, that such and such was published in a high JIF journal so therefore it is important.

Citations and Alt-metrics- Will people feel it necessary to defend the claims to a critical contribution by pointing out how many citations their papers have received? I think this likely. Particularly since the "non-publication research products" have no conventional measures of impact, people will almost have to talk about downloads of their software, Internet traffic hits to their databases, etc. So why not do this for publications as well, eh?

Figures- all I can say is "huh"?

Sally Rockey reports on the pilot study they conducted with this new Biosketch format.

While reviewers and investigators had differing reactions to the biosketch, a majority of both groups agreed that the new biosketch was an improvement over the old version. In addition, both groups felt that the new format helped in the review process. Both applicants and reviewers expressed concerns, however, about the suitability of the new format for new investigators, but interestingly, investigators who were 40 years and older were more negative than those below age 40.

So us old folks are more concerned about the effects on the young than are the actual young. This is interesting to me since I'm one who feels some concern about this move being bad for less experienced applicants.

I'll note the first few comments posted to Rockey's blog are not enthusiastic about the pilot data.

69 responses so far

Expertise versus consistency

Nov 24 2014 Published by under Grant Review, NIH, NIH funding

In NIH grant review the standing study section approach to peer review sacrifices specific expertise for the sake of consistency of review.

When each person has 10 R01s to review, the odds are high that he or she is not the most specifically qualified person for all 10.

The process often brings in additional panel members to help cover scientific domains on a per-meeting basis but this is only partially effective.

The Special Emphasis Panel can improve on this but mostly it does so because the scope of the applications under review is narrower. Typically the members of an SEP still have to stretch a bit to review some of the assignments.

Specific expertise sounds good but can come at the cost of consistency. Score calibration is a big deal. You should have seen the look of horror on my face at dinner following my first study section when some guy said "I thought I was giving it a really good score...you guys are telling me that wasn't fundable?"

Imagine a study section with a normal-sized load of apps in which each reviewer completes only one or two reviews. The expertise would be highly customized for each proposal but there might be less consistency and calibration across applications.
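For the curious, here is a toy simulation of that trade-off. Everything in it is invented (no real CSR data): each reviewer carries a personal calibration offset, and the panel "calibrates" by subtracting each reviewer's own mean score. With ten reviews apiece the offset can be estimated and removed; with two apiece it is confounded with the quality of the applications themselves. Which panel tracks the latent merit better depends entirely on how the two noise knobs are set.

    import numpy as np

    rng = np.random.default_rng(0)
    n_apps, reviews_per_app = 90, 3

    def panel(reviews_per_reviewer, expertise_noise):
        merit = rng.normal(0, 1, n_apps)                       # latent quality of each app
        n_rev = n_apps * reviews_per_app // reviews_per_reviewer
        offset = rng.normal(0, 1, n_rev)                       # per-reviewer calibration error
        slots = np.repeat(np.arange(n_apps), reviews_per_app)  # which app each review covers
        who = np.arange(slots.size) % n_rev                    # which reviewer fills each slot
        obs = merit[slots] + offset[who] + rng.normal(0, expertise_noise, slots.size)
        # "Calibrate" by subtracting each reviewer's own mean score.
        obs -= (np.bincount(who, weights=obs) / np.bincount(who))[who]
        score = np.bincount(slots, weights=obs) / reviews_per_app
        return np.corrcoef(score, merit)[0, 1]

    # Standing section: 10 reviews each, reviewers often stretched past their expertise.
    print("standing section:  ", round(panel(10, expertise_noise=1.0), 2))
    # Handpicked experts: 2 reviews each, better-matched expertise, poorly calibratable.
    print("handpicked experts:", round(panel(2, expertise_noise=0.5), 2))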

What say you, Dear Reader? How would you prefer to have your grants reviewed?

25 responses so far

What do you know, the NIH has not solved the revision-queuing, traffic holding pattern problem with grant review.

Nov 14 2014 Published by under Fixing the NIH, NIH, NIH Careerism

Way back in 2008 I expressed my dissatisfaction with the revision-cycle holding pattern that delayed the funding of NIH grants.

Poking through my pile of assignments I find that I have three R01 applications at the A2 stage (the second and "final" amendment of a brand new proposal). Looking over the list of application numbers for the entire panel this round, I see that we have about 15% of our applications on the A2 revision.

Oi. What a waste of everyone's time. I anticipate many reviewers will be incorporating the usual smackdown-of-Program language. "This more than adequately revised application...."

I am not a fan of the NIH grant revision process, as readers will have noticed. Naturally my distaste is tied to the current era of tight budgets and expanding numbers of applications, but I think the principles generalize. My main problem is that review panels use the revision process as a way of triaging their workload. This has nothing to do with selecting the most meritorious applications for award and everything to do with making a difficult process easier.

The bias for revised applications is supported by funding data, round-after-round outcomes in my section, as well as supporting anecdotes from my colleagues who review. ... What you will quickly notice is that only about 10% of applications reviewed in normal CSR sections get funded without being revised. ... If you care to step back Fiscal Year by Fiscal Year in the CRISP [RePORTER replaced this- DM] search, you will notice the relative proportions of grants being funded at the unrevised (-01), A1 and A2 stages have trended for more revising in concert with the budget flattening. I provide an example for a single study section here ... you will notice, if you review a series of closely related study sections, that the relative "preference" for giving high scores to -01, A1 and A2 applications varies somewhat between sections. This analysis is perhaps unsurprising but we should be very clear that this does not reflect some change in the merit or value of revising applications; this is putting good applications in a holding pattern.

In the meantime, we've seen the NIH first limit revisions to one (the A1 version) for a few years to try to get grants funded sooner, counting from the date of first submission. In other words, to try to get more grants funded un-Amended, colloquially at the A0 stage. After an initial trumpeting of their "success", the NIH went to silent running on this topic during a sustained drumbeat of complaints from applicants who, apparently, were math-challenged and imagined that bringing back the A2 would somehow improve their chances. Then last year the NIH backed down and permitted applicants to keep submitting the same research proposal over and over, although after the A1 the clock had to be reset to define the proposal as a "new", A0-status proposal.

I have asserted all along that this is a shell game. When we were only permitted to submit one amended version, allegedly the same topic could not come back for review in "new" guise. But guess what? It took almost zero imagination to re-configure the Aims and the proposal such that approximately the same research project could be re-submitted for consideration. That's sure as hell what I did, and I never ever got one turned back for similarity to a prior A1 application. The return to endless re-submission just allowed the unimaginative in on the game is all.

[Figure: Type 1 grants by fiscal year, 2000-2013]
This brings me around to a recent post over at Datahound. He's updated the NIH-wide stats for A0, A1 and (historically) A2 grants, expressed as the proportion of all funded grants across recent years. As you can see, the single study section I collected the data for before both exaggerated and preceded the NIH-wide trends. It was a section that was (apparently) particularly bad about not funding proposals on the first submission. This may have given me a very severe bias...as you may recall, this particular study section was one that I submitted to most frequently in my formative years as a new PI.

It was clearly, however, the proverbial canary in the coalmine.

The new Datahound analysis shows another key thing, which is that the traffic-holding, wait-your-turn behavior re-emerged in the wake of the A2 ban, as I had assumed it would. The triumphant data depictions from the NIH up through the 2010 Fiscal Year didn't last, and of course those data were generated when substantial numbers of A2s were still in the system. The graph also shows that there was a very peculiar worsening from 2012-2013 whereby the A0 apps were further disadvantaged, once again, relative to A1 apps, which returns us right back to the trends of 2003-2007. Obviously the 2012-2013 interval was precisely when the final A2s had cleared the system. It will be interesting to see if this trend continues in the new era of endless resubmission of A2s as A0s.
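For anyone who wants to poke at this themselves, here is a minimal sketch of how amendment status can be pulled out of RePORTER-style full project numbers, where an A1/A2 suffix after the support year marks the amendment. The records below are invented for illustration.

    import re
    from collections import Counter

    # Invented records in the style of RePORTER full project numbers.
    funded = [
        "1R01DA012345-01",    # funded unamended (A0)
        "1R01MH054321-01A1",  # funded on the first amendment
        "1R01GM011111-01A2",  # funded on the second amendment (pre-ban era)
        "1R01NS022222-01A1",
    ]

    def amendment(project_number):
        """Return 'A0', 'A1' or 'A2' from a full project number."""
        m = re.search(r"-\d{2}(A\d)?$", project_number)
        if not m:
            return None
        return m.group(1) or "A0"

    counts = Counter(amendment(p) for p in funded)
    total = sum(counts.values())
    for status in ("A0", "A1", "A2"):
        print(status, f"{counts.get(status, 0) / total:.0%}")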

So it looks very much as though even major changes in permissible applicant behavior with respect to revising grants does very little. The tendency of study sections to put grants into a holding pattern and insist on revisions to what are very excellent original proposals has not been broken.

I return to my 2008 proposal for a way to address this problem:


So this brings me back to my usual proposal of which I am increasingly fond. The ICs should set a "desired" funding target consistent with their historical performance, say 24% of applications, for each Council round. When they do not have enough budget to cover this many applications in a given round, they should roll the applications that missed the cut into the next round. Then, starting the next Council round, they should apportion some fraction of their grant pickups to the applications from the prior rounds that were sufficiently meritorious from a historical perspective. Perhaps half roll-over and half from the current round of submissions. That way, there would still be some room for really outstanding -01 apps to shoulder their way into funding.
The great part is that essentially nothing would change. The A2 app that is funded is not going to result in scientific conduct that differs in any substantial way from the science that would have resulted from the A1 / 15%ile app being funded. New apps will not be any more disadvantaged by sharing the funding pie with prior rounds than they currently are facing revision-status-bias at the point of study section review....a great deal of time and effort would be saved.
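As a thought experiment, here is a toy simulation of that rollover scheme. All the numbers (application counts, per-round budgets, the 24% target) are invented; the half-and-half split follows the quoted proposal.

    import random

    random.seed(1)
    TARGET = 0.24   # "desired" payline fraction, per historical performance
    rollover = []   # meritorious apps that missed the cut in prior rounds

    for council_round in range(6):
        # Lower score = better; pretend 100 new apps arrive each round.
        new_apps = sorted(random.random() for _ in range(100))
        worthy = new_apps[: int(TARGET * len(new_apps))]  # historically fundable
        budget = random.randint(15, 25)                   # awards affordable this round
        # Half of the pickups go to the rollover queue, half to the current round.
        n_old = min(len(rollover), budget // 2)
        funded = rollover[:n_old] + worthy[: budget - n_old]
        rollover = rollover[n_old:] + worthy[budget - n_old:]
        print(f"round {council_round}: funded {len(funded)}, waiting {len(rollover)}")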

11 responses so far

Mobility of Disparate Scores in NIH Grant Review

Nov 07 2014 Published by under Fixing the NIH, Grant Review

If asked to pick the top two good things that I discovered about grant review when I first went for a study section stint, I'd have an easy time. The funny thing is that they come from two diametrically opposed directions.

The first amazing good thing about study section is the degree to which three reviewers of differing subdiscipline backgrounds, scientific preferences and orientations agree. Especially in your first few study section meetings there is little that is quite as nerve-wracking as submitting your initial scores and waiting to see if the other two reviewers agreed with you. This is especially the case when you are in the extreme good or bad end of the scoring distribution.

What I usually found was an amazingly good degree of agreement on the overall impact / priority score, even when the apparent sticking points / points of approbation were different across all three reviewers.

I think this is a strong endorsement that the system works.

The second GoodThing I experienced in my initial service on a study section was the fact that anyone could call a grant up off the triage pile for discussion. This seemed to happen very frequently, again in my initial experiences, when there were significantly different scores. In today's scoring parlance, think if one or two reviewers were giving 1s and 2s and the other reviewer was giving a 5. Or vice versa. The point being to consider the cases where some reviewers are voting a triage score and some are voting a "clearly we need to discuss this" score. In the past, these were almost always called up for discussion. Didn't matter if the "good" scores were 2 to 1 or 1 to 2.
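A sketch of that informal rule, with thresholds that are my guesses rather than anything official: flag any application where at least one preliminary score lands in clear "discuss" territory while another lands in triage territory.

    def disparate(scores, discuss_at=2, triage_at=5):
        """Flag preliminary scores that straddle the discuss/triage divide."""
        return min(scores) <= discuss_at and max(scores) >= triage_at

    for scores in ([1, 2, 5], [2, 2, 3], [5, 1, 1], [4, 4, 4]):
        verdict = "call up for discussion" if disparate(scores) else "leave as scored"
        print(scores, "->", verdict)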

Now admittedly I have no CSR-wide statistics. It could very well be that what I experienced was unique to a given study section's culture or was driven by an SRO who really wanted widely disparate scores to be resolved.

My perception is that this no longer happens as often and I think I know why. Naturally, the narrowing paylines may make reviewers simply not care so much. Triage or a 50 score...or even a 40 score. Who cares? Not even close to the payline so let's not waste time, eh? But there is a structural issue of review that has squelched the discussion of disparate preliminary-score proposals.

For some time now, grants have been reviewed in the order of priority score, with the best-scoring ones being taken up for discussion first. In prior years, the review order was more randomized with respect to the initial scores. My understanding was that proposals were grouped roughly by the POs who were assigned to them so that the PO visits to the study section could be as efficient as possible.

My thinking is that when an application would come up for review in some random position throughout the 2-day meeting, people were more willing to call one up. Now, when you are knowingly saying "gee, let's tack on a 30-40 min discussion to the end of day 2 when everyone is eager to make an earlier flight home to see their kids"...well, I think there is less willingness to resolve scoring disparity.

I'll note that this change came along with the insertion of individual criterion scores into the summary statement. This permitted applicants to better identify when reviewers disagreed in a significant way. I mean sure, you could always infer differences of opinion from the comments without a number attached but this makes it more salient to the applicant.

Ultimately the reasons for the change don't really matter.

I still think it a worsening of the system of NIH grant review if the willingness of review panels to resolve significant differences of opinion has been reduced.

29 responses so far

Top down or bottom up? NIH RFAs are a two-way discussion between Program and Investigators

One of the erroneous claims made by Steven McKnight in his latest screed at the ASBMB President's space has to do with the generation of NIH funding priorities. Time will tell whether this is supposed to be a pivot away from his inflammatory comments about the "riff raff" that populate the current peer review study sections or whether this is an expansion of his "it's all rubbish" theme. Here he sets up a top-down / bottom-up scenario that is not entirely consistent with reality.

When science funding used to be driven in a bottom-up direction, one had tremendous confidence that a superior grant application would be funded. Regrettably, this is no longer the case. We instead find ourselves perversely led by our noses via top-down research directives coming from the NIH in the form of requests for proposals and all kinds of other programs that instruct us what to work on instead of asking us what is best.

I find it hard to believe that someone who has been involved with the NIH system as long as McKnight is so clueless about the generation of funding priorities within the NIH.

Or, I suppose, it is not impossible that my understanding is wrong and that I am jumping to unwarranted conclusions.

Nevertheless.

Having watched the RFAs that get issued over the years in areas that are close to my own interests, having read the wording very carefully, thought hard about who does the most closely related work, and seen afterwards who is awarded funding... it is my belief that in many, many cases there is a dialog between researchers and Program that goes into the issuance of a specific funding announcement.

Since I have been involved directly in beating a funding priority drum (actually several instruments have been played) with the Program staff of a particular IC in the past few years, and they finally issued a specific Funding Opportunity Announcement (FOA) with text that looks suspiciously similar to stuff that I have written, well, I am even more confident of my opinion.

The issuance of many NIH RFAs, PAs and likely RFPs is not merely "top-down". It is not only a bunch of faceless POs sitting in their offices in Bethesda making up funding priorities out of whole cloth.

They are generating these ideas in a dialog with extramural scientists.

That "dialog" has many facets to it. It consists of the published papers and review articles, conference presentations, grant applications submitted (including the ones that don't get funded), progress reports submitted, conversations on the phone or in the halls at scientific meetings. These are all channels by which we, the extramural scientists, are convincing the Program staff of what we think is most important in our respective scientific domains. If our arguments are good enough, or we are joined by enough of our peers and the Program Staff agree there is a need to stimulate applications (PAs) or secure a dedicated pool of funding (RFAs, PASs) then they issue one of their FOA.

Undoubtedly there are other inputs that stimulate FOAs from the NIH ICs. Congressional interest expressed in public or behind the scenes. Agendas from various players within the NIH ICs. Interest groups. Companies. Etc.

No doubt. And some of this may result in FOAs that are really much more consistent with McKnight's charge of "...programs that instruct us what to work on".

But to suggest that all of the NIH FOAs are only "top-down" without recognizing the two-way dialog with extramural scientists is flat out wrong.

15 responses so far

Thought of the Day

Nov 05 2014 Published by under Fixing the NIH

Datahound showed us that there are something on the order of 5,000 PIs who lose their last bit of NIH funding (as PI) in a given year.

What I want to see from McKnight is some clear identification of the deserving, amazing, super star impactful scientists who are in this situation.

This would go a long way toward our being able to assess his various claims for why these people (and surely he has a substantial list in mind- tens at the least?) are not funded.

No responses yet

Older posts »