Archive for the 'NIH funding' category

Grantsmack: Overambitious

Aug 25 2015 Published by under Grant Review, NIH, NIH Careerism, NIH funding

If we are entering a period of enthusiasm for "person, not project" style review of NIH grants, then it is time to retire the criticism of "the research plan is overambitious".

Updated:
There was a comment on the Twitters to the effect that this Stock Critique of "overambitious" is a lazy dismissal of an application. This deserves some breakdown, because simply dismissing stock criticisms as "lazy" reviewing fails to address the real problem at hand.

First, it is always better to think of Stock Critique statements as shorthand rather than lazy.

Using the term "lazy" seems to imply that the applicant thinks his or her grant application deserves a full and meticulous point-by-point review no matter whether the reviewer is inclined to award it a clearly-triagable, clearly-borderline, or clearly-fundable score. Not so.

The primary job of the NIH Grant panel reviewer is most emphatically not to help the PI to funding nor to improve the science. The reviewer's job is to assist the Program staff of the I or C to which the application has been assigned in deciding whether or not to fund this particular application. Consequently, if the reviewer is able to succinctly communicate the strengths and weaknesses of the application to the other reviewers, and eventually to Program staff, this is efficiency, not laziness.

The applicant is not owed a meticulous review.

With this understood, we move on to my second point. The use of a Stock Criticism is an efficient communicative tool when the majority of the review panel agrees that the substance underlying this review consideration is valid. That is, that the notion of a grant application being overambitious is relevant and, most typically, a deficiency in the application. This is, to my understanding, a point of substantial agreement on NIH review panels.

Note: This is entirely orthogonal to whether or not "overambitious" is being applied fairly to a given application. So you need to be clear about what you see as the real problem at hand that needs to be addressed.

Is it the notion of over-ambition being any sort of demerit? Or is your complaint about the idea that your specific plan is in fact over-ambitious?

Or are you concerned that it is unfair if the exact same plan is considered "over-ambitious" for you and "amazingly comprehensive vertically ascending and exciting" when someone else's name is in the PI slot?

Relatedly, are you concerned that this Stock Critique is being applied unjustifiably to certain suspect classes of PI?

Personally, I think "over-ambitious" is a valid critique, given my pronounced affection for the NIH system as project-based, not person-based. In this I am less concerned about whether everything the applicant has poured into this application will actually get done. I trust PIs (and more importantly, I trust the contingencies at work upon a PI) of any stage/age to do interesting science and publish some results. If you like all of it, and would give a favorable score to a subset that does not trigger the Stock Critique, who cares that only a subset will be accomplished*?

The concerning issue is that a reviewer cannot easily tell what is going to get done. And, circling back to the project-based idea, if you cannot determine what will be done as a subset of the overambitious plan, you can't really determine what the project is about. And in my experience, for any given application, there are going to usually be parts that really enthuse you as a reviewer and parts that leave you cold.

So what does that mean in terms of my review being influenced by these considerations? Well, I suppose the more a plan conveys a sense of priorities and choice points, the less concern I will have. The same goes for overall enthusiasm: if only 50% of this is actually going to happen, the odds are still good when I am fired up about 90% of what has been described.

*Now, what about those grants where the whole thing needs to be accomplished or the entire point is lost? Yes, I recognize those exist. Human patient studies where you need to get enough subjects in all the groups to have any shot at any result would be one example. If you just can't collect and run that many subjects within the scope of time/$$ requested, well.....sorry. But these are only a small subset of the applications that trigger the "overambitious" criticism.


Grumble of the Day

Jul 06 2015 Published by under NIH, NIH Careerism, NIH funding

I still get irritated every time a PO gives me some grant advice or guidance that is discordant with my best understanding of the process. It's not so much that I take it seriously for my own strategy...I've been around this block once or twice. 

What kills me is thinking that there are poor newcomer applicants who get this advice and may think it is Gospel. This would then lead them into making suboptimal strategic or tactical decisions.

Related Reading:
POs do not understand...


Ronald Germain Explains How To Fix The NIH


Re-Repost: The funding is the science II, "Why do they always drop the females?"

The NIH has recently issued the first round of guidance on inclusion of Sex as a Biological Variable in future NIH research grants. I am completely behind the spirit of the initiative but I have concerns about how well this is going to work in practice. I wrote a post in 2008 that detailed some of the reasons that have brought us to the situation where the Director of the NIH felt he had to coauthor an OpEd on this topic. I believe these issues are still present, will not be magically removed with new instructions to reviewers and need to be faced head-on if the NIH is to make any actual progress on ensuring SABV is considered appropriately going forward.

The post originally appeared December 2, 2008.


The title quote came from one of my early, and highly formative, experiences on study section. In the course of discussing a revised application it emerged that the prior version of the application had included a sex comparison. The PI had chosen to delete that part of the design in the revised application, prompting one of the experienced members of the panel to ask, quite rhetorically, "Why do they always drop the females?"

I was reminded of this when reading over Dr. Isis' excellent post [Update: Original Sb post lost, I think the repost can be found here] on the, shall we say less pernicious, ways that the course of science is slanted toward doing male-based research. Really, go read that post before you continue here, it is a fantastic description.

What really motivated me, however, was a comment from the always insightful Stephanie Z:

Thank you. That's the first time I've seen someone address the reasons behind ongoing gender disparities in health research. I still can't say as it thrills me (or you, obviously), but I understand a bit better now.

Did somebody ring?

As I pointed out explicitly at least once ([Update: Original 2007 post]), research funding has a huge role in what science actually gets conducted. Huge. In my book this means that if one feels that an area of science is being systematically overlooked or minimized, one might want to take a close look at the manner by which science is funded and the way by which science careers are sustained as potential avenues for systematic remedy.

Funding

There are a couple of ways in which the generalized problems with NIH grant review lead to the rhetorical comment with which I opened the post. One very common StockCritique of NIH grant review is that of an "over ambitious" research plan. As nicely detailed in Isis' post, the inclusion of a sex comparison doubles the groups right off the bat but even more to the point, it requires the inclusion of various hormonal cycling considerations. This can be as simple as requiring female subjects to be assessed at multiple points of an estrous cycle. It can be considerably more complicated, often requiring gonadectomy (at various developmental timepoints) and hormonal replacement (with dose-response designs, please) including all of the appropriate control groups / observations. Novel hormonal antagonists? Whoops, the model is not "well established" and needs to be "compared to the standard gonadectomy models", LOL >sigh<.

Grant reviewers prefer simplicity
Keep in mind, if you will, that there is always a more fundamental comparison or question at the root of the project, such as "does this drug compound ameliorate cocaine addiction?" So all the gender comparisons, designs and groups need to be multiplied against the cocaine addiction/treatment conditions. Suppose it is one of those cocaine models that requires a month or more of training per group? Who is going to run all those animals? How many operant boxes / hours are available? And at what cost? Trust me, the grant proposal is going to take fire for "scope of the project".
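For a rough sense of how quickly the design metastasizes, here is a back-of-envelope sketch. Every group count in it is a hypothetical assumption for illustration, not a number from any actual grant:

```python
# Illustrative arithmetic for how adding a sex comparison multiplies groups.
# All counts below are hypothetical assumptions, chosen only for demonstration.
estrous_phases = 4        # assessing female subjects across the cycle
drug_conditions = 3       # e.g., vehicle plus two doses of the compound
animals_per_group = 10    # a typical n for a behavioral pharmacology group

# A male-only design needs one group per drug condition.
male_groups = 1 * drug_conditions

# Adding females with cycle-phase sampling multiplies by the phase count.
female_groups = estrous_phases * drug_conditions

total_groups = male_groups + female_groups
total_animals = total_groups * animals_per_group

print(total_groups, total_animals)   # 15 groups, 150 animals
```

Under these assumptions a 3-group, 30-animal male-only study becomes a 15-group, 150-animal study, before adding gonadectomy or hormone-replacement arms. Add a month of operant training per group and the "scope of the project" critique writes itself.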

Another StockCritique to blame is "feasibility". Two points here, really. First is the question of Preliminary Data: of course, if you have to run more experimental conditions to establish that you might have a meritorious hypothesis, you are less likely to do it with a fixed amount of pilot/startup/leftover money. Better to work on preliminary data for two or three distinct applications than just one if you have the funds. The second aspect has to do with a given PI's experience with the models in question. There is more opportunity to say "The PI has no idea what s/he is doing methodologically" if s/he has no prior background with the experimental conditions, which are almost always the female-related ones. As we all know, it matters little that the hormonal assays or gonadectomy or whatever procedures have been published endlessly if you don't have direct evidence that you can do it. Of course, more latitude is extended to the more-experienced investigator...but then s/he is less likely to jump into gender-comparisons in a sustained way, in contrast to a newly minted PI.

Then there are the various things under grantspersonship. You have limited space in a given type of grant application. The more groups and comparisons, the more you have to squeeze in with respect to basic designs, methods and the interpretation/alternative approaches part. So of course you leave big windows for critiques of "hasn't fully considered...." and "it is not entirely clear how the PI will do..." and "how the hypothesis will be evaluated has not been sufficiently detailed...".

Career

Although research funding plays a huge role in career success, it is only part of the puzzle. Another critical factor is what we consider to be "great" or "exciting" science in our respective fields.

The little people can fill in the details. This is basically the approach of GlamourMagz science. (This is a paraphrase of something the most successful GlamourMagz PI I know actually says.) Cool, fast and hot is not compatible with the metastasizing of experimental conditions that is an inevitable feature of gender-comparison science. Trouble is, this approach tends to trickle down in various guises. Lower (than GlamourMag) impact factor journals sometimes try to upgrade by becoming more NS-like (Hi, J Neuro!). Meticulous science and exacting experimental designs are only respected (if at all) after the fact. Late(r) in someone's career they start getting props on their grant reviews for this. Early? Well the person hasn't yet shown the necessity and profit for the exhaustive designs and instead they just look...unproductive. Like they haven't really shown anything yet.

As we all know splashy CNS pubs on the CV trump a sustained area of contribution in lower journals six ways to Sunday. This is not to say that nobody will appreciate the meticulous approach, they will. Just to say that high IF journal pubs will trump. Always.

So the smart young PI is going to stay away from those messy sex-differences studies. Everything tells her she should. If she does dip a toe, she's more likely to pay a nasty career price.

This is why NIH efforts to promote sex-comparison studies are necessary. Special funding opportunities are the only way to tip the equation even slightly toward the sex-differences side. The lure of the RFA is enough to persuade the experienced PI to write in the female groups. To convince the new PI that she might just risk it this one time.

My suspicion is that it is not enough. Beyond the simple need to take a stepwise approach to the science as detailed by Isis, the career and funding pressures are irresistible forces.


NIH Mandate to Consider Sex as a Biological Variable in Grant Apps

Jun 09 2015 Published by under NIH, NIH funding, Sex Differences

The NIH has published NOT-OD-15-102 Consideration of Sex as a Biological Variable in NIH-funded Research which informs us:

This notice focuses on NIH's expectation that scientists will account for the possible role of sex as a biological variable in vertebrate animal and human studies. Clarification of these expectations is reflected in plans by NIH's Office of Extramural Research (OER) to update application instructions and review questions; once approved by the Office of Management and Budget (OMB), these updates will take effect for applications submitted for the January 25, 2016, due date and thereafter.

Also:

Accounting for sex as a biological variable begins with the development of research questions and study design. It also includes data collection and analysis of results, as well as reporting of findings. Consideration of sex may be critical to the interpretation, validation, and generalizability of research findings. Adequate consideration of both sexes in experiments and disaggregation of data by sex allows for sex-based comparisons and may inform clinical interventions. Appropriate analysis and transparent reporting of data by sex may therefore enhance the rigor and applicability of preclinical biomedical research.

NIH expects that sex as a biological variable will be factored into research designs, analyses, and reporting in vertebrate animal and human studies. Strong justification from the scientific literature, preliminary data, or other relevant considerations must be provided for applications proposing to study only one sex. Investigators are strongly encouraged to discuss these issues with NIH program staff prior to submission of applications.

Additional information is provided in a three page PDF overview:

Literature review. Consider and describe how sex and gender may influence the research question(s) at hand. Conduct a review of the human clinical literature and any relevant preclinical literature. If there are differences between males and females in previous preclinical or clinical studies, this would provide a strong rationale for building consideration of sex into the research design and analyses of data. The absence of previous study data in an area of research does not, by itself, constitute strong justification to study only one sex.

Very nice. So helpful. Look NIH, clearly this is going to be a place where applicants who do not wish to incorporate SABV into the design are going to seek a loophole. What would be helpful here would be a more assertive statement about what does and, most importantly, does not constitute a "strong justification to study only one sex". Uncontrolled, this will devolve back to the reviewers, who are already failing (going by your highly effortful and high-profile new initiative) to appropriately favor* SABV in research grant proposals. They are the ones that will decide that the tiniest fig leaf of excuse-making is acceptable "justification" if you give them half a chance to do so. This part needs strengthening.

and later on in the document:

Single-sex studies. Applicants must provide strong justification for applications proposing to study only one sex. Such justification may include the study of sex-specific conditions or phenomena (e.g., ovarian or prostate cancer), acutely scarce resources (e.g., non-human primates), or investigations in which the study of one sex is scientifically appropriate. The absence of evidence regarding sex differences in an area of research does not constitute strong justification to study only one sex.

Sex-specific conditions or phenomena, check. Good. Will hard-to-breed mice constitute "acutely scarce resources"? Human drug abusers of various characteristics that make it hard to recruit female or male participants? The devil will be in the detail. But "scientifically appropriate"? Again, this holds open a big old loophole of escape. And a repeat of the absence-of-evidence statement. What does this mean? What are the limits on this strong justification? How are you going to get reviewers on board with this, instead of leaving them to accept any old excuse?

Research design, data analysis, and reporting.
....Where little or no sex-specific data is available, sex-specific hypotheses may not be possible, whereas previously observed sex differences may prompt sex-specific hypotheses.

Dude what? Are you kidding with this? We all know there must be a supported hypothesis in the research plan. And if there has not been any sex-differences research in the past, well, there are no hypotheses we can advance. And therefore, so sorry, we must avoid proposing anything that investigates SABV because the study section will kill us for lack of a clear hypothesis**. Another whoppingly huge escape clause for the SABV resistant PI.

Acknowledge limitations in the applicability of findings that may arise from the samples, methods, and analyses used, in the research plan as well as in progress reports and publications.

Emphasis added. HAHAHAHHAHHAA!!!! Yeah RIGHT! Every NIH Grant awardee who does not explicitly include SABV in a paper must make sure to add the caveat in the Discussion that their results cannot be extended to the other sex. Sure that's going to happen. Sure.

Finally, one for my peers who already conduct SABV research with regularity.

Researchers working with animal models should consider if and how the female estrous cycle is relevant for experimental design and analysis; it may be relevant for some research questions and not others

This one is pointed straight at the buzz saw of the sex-differences aficionado Stock Criticism of grant applications. One of the ways that sex-differences gets stamped out of research proposals is that the "real" experts start in on "YOU MUST DO THE SEX-COMPARISONS RIGHT AND AS WE HAVE DONE". This may include cycle synchronization, gonadectomy, pharmacologico-hormonal manipulations, endless groups, etc, etc, etc.

There is little tolerance from these people for "First, let's give it a go in female (or male) animals/cells/tissues and see what we turn up" exploratory fishing expeditions.

I would argue that tolerance for fishing expeditions is precisely what the NIH needs if they want to jumpstart real change. You have to make the barrier low and, especially in this day and age, of low cost. Demanding that it has to be SABV design 101eleventy at all times or it is not worth doing is going to motivate resistance. Resistance on the part of PIs doing their grant proposing and on the part of peers doing the grant reviewing.

I propose that a NIH policy of "Any old Third Aim that will engage in sex-differences comparisons is good enough and a total freebie for the first five years***" is what is necessary.

_
*oh yes, believe you me there are puh-lenty of investigators who propose SABV aspects in proposals and get it beat out of them at review.

**StockCritiqueTM

***that may have to be slightly more formal


NIGMS MIRA for ESI/NI differs slightly. Ok, fundamentally.

Jun 03 2015 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

NIGMS has been attempting to grapple with the problem of stability in research funding for its extramural awardees. Which is a great thing to focus on, given the instability in recent years and the time and effort that PIs and their laboratories waste on maintaining stable funding.

In January NIGMS launched the MIRA program (R35 mechanism) to issue 5-year awards (instead of the current NIGMS average of 4) of up to $750,000 in direct costs. The idea is that current NIGMS awardees would consolidate their existing NIGMS awards into this one R35, promise to devote at least 51% of their research effort to it and, overall, take less in NIGMS funding.

The immediate objections were severalfold but more or less focused on why the already privileged NIGMS stalwarts with three or more concurrent full-modular ($250,000 direct) awards' worth of funding should now get this extra isolation from the review process. If the limited number of such individuals selected for MIRA now had research funds that were isolated from the grasp of peer review (the fifth year, the all-or-none nature of the $750,000 direct in one award) then obviously the unlucky would be further disadvantaged.

One such pool of the unlucky would be the ESI (and NI) investigators.

NIGMS assured us that they were planning to extend MIRA to ESI/NI in the very near future. Peter Preusch commented:

We plan to issue a MIRA funding opportunity for early stage investigators as quickly as possible. We hope the first application due date will be sometime this summer.

Well, RFA-GM-16-003 has arrived. And it is nothing like the real MIRA for the highly established insider club* of NIGMS extramural funding.

1) It is limited to $250,000 in direct costs
2) It will be for the duration of the "current average of R01 awards to new investigators", read 4 years, I assume. Even if the current average is 5, this can change. Why not just write in 5 as for the main MIRA?
3) Competing renewals "may" be allowed to increase substantially. There is of course no guarantee of this and if they were serious they could have simply written in language such as "the second interval will increase the limit to $500K direct and the third to $750K direct". They did not.

This is either ridiculously ill-considered or a cynical figleaf designed to give political cover for the excesses of the real target, the MIRA for the highly-established.

Here is what is so fundamentally foot-shooting about this, if you assume that NIGMS has any interest in shepherding the careers of its future stalwarts. The current stalwarts they are trying to protect are multi-grant awardees: three full-modular awards, or two-plus if you assume one of those awards is a traditional budget up to the stiff (but not insurmountable) $500K limit. Yet here they are trying to take what might be thought of as this same population at an earlier career stage and making sure they only get one full-modular award's worth of NIGMS funding from the start. This is insanely ill-considered.

And no ESI PI who thinks of herself as a future multi-grant NIGMS stalwart (and perhaps real-MIRA qualified) should have any interest in this baby MIRA whatsoever. All it comes with are limits for such a PI.

A secondary consideration is the review of such applications. Wisely, NIGMS has made this an RFA which means they get to design their own review panels.

This is wise because these special-flower-protection grants (real MIRA and baby-MIRA alike) stand a good risk of getting shredded in regular study sections. I'm thinking there is a good risk of them getting shredded in whatever SEPs they manage to convene too, unless they do a good job of selecting quid-pro-quo qualified reviewers.

Related Aside: BigMechs like Program Projects and Centers are very often reviewed by panels of other BigMech Program Directors and component PIs. This is consistent with the general requirement that grants should be reviewed by panels with like-experience. However, this lets in a great deal of quid-pro-quo reviewing in the sense that the reviewers know these applicants will be coming back to review their Boondoggles, sorry BigMechs when they are up for competing review. Thus, these mechanisms are very unlikely to face review of the kind that disagrees fundamentally with the concept of the BigMech. Unlikely to get anyone saying "none of this is worth the cost, these shouldn't be funded and the money should be put back** in the R-mech pool".

Regular R-mech study sections are disproportionately staffed by midcareer scientists. Given the likely number of MIRAs on offer, they are disproportionately staffed by scientists who will not feel like they have the slightest chance at a MIRA award. I predict a good deal of skepticism from the general reviewer about these R35 mechanisms and I predict very bad scores.

Which is why NIGMS will have to be careful to cherry pick a quid-pro-quo qualified reviewer pool. And, as is usually the case with BigMechs, be prepared to fund them with scores that would not be remotely competitive for regular R01 review.

__
*Remember, PIs with more than two concurrent RPGs are less than 9% of the entire NIH funded population (in FY2009, according to Rockey). How many can there be with three or more NIGMS awards?

**there are some technicalities with pools of $$ that make this slightly more complicated than this but you get the flavor


Thought on the Ginther report on NIH funding disparity

May 24 2015 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

I had a thought about Ginther just after hearing a radio piece on the Asian-Americans that are suing Harvard over entrance discrimination. 

The charge is that Asian-American students need to have better grades and scores than white students to receive an admissions bid.

The discussion of the Ginther study revolved around the finding that African-American applicant PIs were less likely than PIs of other groups to receive NIH grant funding. That focus arose because Asian-American PIs, for example, did as well as white PIs. Our default stance, I assume, is that being a white PI is the best that it gets. So if another group does as well, this is taken as evidence of a lack of bias.

But what if Asian-American PIs submit higher quality applications as a group? 
How would we ever know if there was discrimination against them in NIH grant awards?


Thoughts on NIH grant strategy from Associate Professor H. Solo

We spend a fair amount of time talking about grant strategy on this blog. Presumably, this is a reflection of an internal process many of us go through in trying to decide how to distribute our grant-writing effort so as to maximize our chances of getting funded. After all, we have better things to do than write grants.

So we scrutinize success rates for various ICs, various mechanisms, FOAs, etc as best we are able. We flog RePORTER for evidence of which study sections will be most sympathetic to our proposals and how to cast our applications so as to be attractive. We worry about how to construct our Biosketch and who to include as consultants or collaborators. We obsess over how much preliminary data is enough (and too much*).

This is all well and good and maybe, maybe....perhaps....it helps.

But at some level, you have to follow your gut, too. Even when the odds seem overwhelmingly bad, there are going to be times when dang it, you just feel like this is the right thing to do.

Submitting an R01 on very thin preliminary data because it just doesn't work as an R21 perhaps.

Proposing an R03 scope project even if the relevant study section has only one** of them funded on the RePORTER books.

Submitting your proposal when the PO who will likely be handling it has already told you she hates your Aims***.

Revising that application that has been triaged twice**** and sending it back in as an A2-as-A0 proposal.

I would just advise that you take a balanced approach. Make your riskier attempts, sure, but balance those with some less risky applications too.

I view it as....experimenting.

__
*Just got a question about presenting too much preliminary data the other day.

**of course you want to make sure there is not a structural issue at work, such as the section stopped reviewing this mechanism two years ago.

***1-2%ile scores have a way of softening the stony cold heart of a Program Officer. Within-payline skips are very, very rare beasts.

****one of my least strategic behaviors may be in revising grants that have been triaged. Not sure I've ever had one funded after initial triage and yet I persist. Less so now than I used to but.....I have a tendency. Hard headed and stupid, maybe.


Fighting with the New Biosketch format

I have been flailing around, off and on for a few months, trying to write my Biosketch into the new format [Word doc Instructions and Sample].


I am not someone who likes to prance around bragging about "discoveries" and unique contributions and how my lab's work is so awesomely unique because, let's face it, I don't do that kind of work. I am much more of a work-a-day type of scientist who likes to demonstrate stuff that has never been shown before. I like to answer seemingly obvious questions for which there should be lots of literature but then it turns out that there is not. I like to work on what interests me about the world and I am mostly uninterested in what some gang of screechy monkey GlamourHumpers think is the latest and greatest.

Ahem.

This is getting in the way of my ability to:

Briefly describe up to five of your most significant contributions to science. For each contribution, indicate the historical background that frames the scientific problem; the central finding(s); the influence of the finding(s) on the progress of science or the application of those finding(s) to health or technology; and your specific role in the described work.

Now interestingly, it was someone who works in a way most unlike the way I do that showed me the light. Actually, he gave me the courage to think about ignoring this supposed charge in the sample / instruction document. This person recommended just writing a brief sentence or two about the area of work without trying to contextualize the importance or significance of the "contribution". I believe I actually saw one of the five permitted subheadings on his version that was more or less "And here's some other stuff we work on that wasn't easily categorized with the rest of it."

I am at least starting from this minimalist standpoint. I don't know if I will have the courage to actually submit it like this, but I'm leaning towards doing so.

I have been hearing from quite a number of you that you are struggling with creating this new version of the NIH Biosketch. So I thought I'd open it up to comment and observation. Anyone have any brilliant solutions / approaches to recommend?

UPDATE:
One of the things that has been bothering me most about this is that it takes the focus off of your work that is specific to the particular application in question. In the previous version of the Biosketch, you selected 15 pubs that were most directly relevant to the topic at hand. These may not be your "most significant contributions" but they are the ones that are most significant for the newly proposed studies.

If one is now to list "your most significant contributions", well, presumably some of these may not have much to do with the current application. And if you take the five sections seriously, it is hard to parse the subset of your work that is relevant to one focal R01-sized project into multiple headings and still show how those particular aspects are a significant contribution.

I still think it is ridiculous that they didn't simply make this an optional way to do the Biosketch so as to accommodate those people that needed to talk about non-published scholarly works.

64 responses so far

McKnight posts an analysis of NIH peer review

Apr 08 2015 Published by under NIH, NIH Budgets and Economics, NIH funding, Peer Review

Sortof.

In his latest column at ASBMB Today, Steve McKnight attempts to further his assertion that peer review of NIH grants needs to be revamped so that more qualified reviewers are doing the deciding about what gets funded.

He starts off with a comment that further reveals his naivete and noobitude when it comes to these issues.

Reviewers judge the application using five criteria: significance, investigator, innovation, approach and environment. Although study sections may weigh the importance of these criteria to differing degrees, it seems to me that feasibility of success of the proposed research plan (approach) tends to dominate. I will endeavor to provide a quantitative assessment of this in next month’s essay.

The NIH, led by then-NIGMS Director Berg, already provided this assessment. Ages ago. Try to keep up. I mention this because it is becoming an obvious trend that McKnight (and, keep in mind, many of his co-travelers who don't reveal their ignorance quite so publicly) spouts off his ill-informed opinions without the benefit of the data that you, Dear Reader, have been grappling with for several years now.

As reported last month, 72 percent of reviewers serving the HHMI are members of the National Academy of Sciences. How do things compare at the NIH? Data kindly provided by the CSR indicate that there were 7,886 reviewers on its standing study sections in 2014. Evaluation of these data reveals the following:

48 out of 324 HHMI investigators (15 percent) participated in at least one study section meeting.
47 out of 488 NIH-funded NAS members (10 percent) participated in at least one study section meeting.
11 of these reviewers are both funded by HHMI and NAS members.

These 84 scientists constituted roughly 1.1 percent of the reviewer cadre utilized by the CSR.

This tells us nearly nothing of importance. How many investigators from other pertinent slices of the distribution serve? ASBMB members, for example? PIs from the top 20, 50, 100 funded Universities and Medical Schools? How many applications do NAS / HHMI investigators submit each year? In short, are they over- or under-represented in the NIH review system?

Anyway, why focus on these folks?

I have focused on the HHMI investigators and NAS members because it is straightforward to identify them and quantify their participation in the review process. It is my belief that HHMI investigators and NIH-funded members of the NAS are substantively accomplished. I readily admit that scientific accomplishment does not necessarily equate to effective capacity to review. I do, however, believe that a reasonable correlation exists between past scientific accomplishment and capacity to choose effectively between good and poor bets. This contention is open for debate and is — to me — of significant importance.

So confused. First, we get the supposed rationale that these elite scientists were chosen because they are readily discernible amongst a host of well-qualified peers, aka the Street Lamp excuse. Next we get a ready admission that the entire thesis he's been pursuing since the riff-raff column is flawed, followed immediately by a restatement of his position based on..."belief". While admitting it is open to debate.

So how has he moved the discussion forward? All that we have at this point is his continued assertion of his position. The data on study section participation do exactly nothing to address his point.


Third, it is clear that HHMI investigators and NIH-funded members of the NAS participate in study sections charged with the review of basic research to a far greater extent than clinical research. It is my belief that study sections involving HHMI investigators and NAS members benefit from the involvement of highly accomplished scientists. If that is correct, the quality of certain basic science study sections may be high.

Without additional information this could be an entirely circular argument. If HHMI and NAS folks are selected disproportionately for their pursuit of basic science (I believe they are, Professor McKnight. Shall you accept my "belief" as we are expected to credit yours? Or perhaps should you have looked into this?) then of course they would be disproportionately on "basic" study sections. If only there were a clinically focused organization of elite good-old-backslappers-club folks to provide a suitable comparison of more clinically-focused scientists.

McKnight closes with this:

I assume that it is a common desire of our biomedical community that all sources of funding, be they private or public, find their way to the support of our most qualified scientists — irrespective of age, gender, ethnicity, geographical location or any other variable. In subsequent essays, I will offer ideas as to how the NIH system of grant award distribution might be altered to meet this goal.

Nope. We want the funding to go to the most important science. Within those constraints we want the funding to go to highly qualified scientists, but we recognize that finding "the most qualified" is a fool's errand. Other factors come into play. Such as "the most qualified who are not overloaded with other research projects at the moment". Or, "the most qualified who are not essentially carbon copies of the three other folks funded in similar research at the moment".

This is even before we get into the very thorny argument over qualifications and how we identify the "most" qualified for any particular purpose.

McKnight himself admits to this when he claims that there are lots of other qualified people but he selected HHMI/NAS out of mere convenience. I wonder if it will eventually trickle into his understanding that this mere convenience pollutes his entire thinking on this matter?

h/t: philapodia

51 responses so far
