Archive for the 'NIH' category

Your Grant in Review: Power analysis and the Vertebrate Animals Section

Feb 11 2016 Published under Grant Review, Grantsmanship, NIH funding

As a reminder, the NIH issued a notice on the upcoming Simplification of the Vertebrate Animals Section of NIH Grant Applications and Contract Proposals.

Simplification! Cool, right?

There's a landmine here.

For years, the statistical power analysis was something that I included in the General Methods at the end of my Research Strategy section. In more recent times, a growing insistence on the part of OLAW that a proper Vertebrate Animals Section include the power analysis influenced me to drop the power analysis from the Research Strategy. It became a word-for-word duplication, so it seemed worth the risk to regain the page space.

The notice says:

Summary of Changes
The VAS criteria are simplified by the following changes:

  • A description of veterinary care is no longer required.

  • Justification for the number of animals has been eliminated.

  • A description of the method of euthanasia is required only if the method is not consistent with AVMA guidelines.


This means that if I continue with my current strategy, I'm going to start seeing complaints about "where is the power analysis" and "hey buddy, stop trying to evade page limits by putting it in the VAS".

So back to the old way we must go. Leave space for your power analysis, folks.
__
If you don't know much about doing a power analysis, this website is helpful: http://homepage.stat.uiowa.edu/~rlenth/Power/
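For the concrete arithmetic, here is a minimal sketch in Python using the statsmodels library (my choice of tool, not one endorsed by the NIH or by the site above); the effect size, alpha and power targets are illustrative placeholders, not recommendations for your design.

```python
# Minimal power-analysis sketch: per-group sample size for a two-group
# comparison via independent-samples t-test. All target values below
# are illustrative placeholders -- substitute your own design values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group to detect a "large" standardized effect
# (Cohen's d = 0.8) at alpha = 0.05 with 80% power, two-sided test.
n_per_group = analysis.solve_power(effect_size=0.8, alpha=0.05, power=0.80)
print(f"Animals per group: {n_per_group:.1f}")  # ~25.5, so round up to 26
```

The same few lines, with your actual effect size estimate and the justification for it, are the guts of what reviewers want to see in a power analysis.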

9 responses so far

CSR Head Nakamura Makes Bizarre Pronouncement

Feb 02 2016 Published under Careerism, NIH Careerism

An email from the CSR of the NIH hit my inbox a few days ago, pointing to a number of their Peer Review Notes, including one on the budget bump that we are about to enjoy.
Actually, that should be "some selected few of us will enjoy" because

“While $2 billion is a big increase, it is less than a 10 percent increase, and a large portion of it is set aside for specific areas and initiatives,” said Dr. Nakamura. “Competition for funding is still going to be intense, and paylines will not return to historic averages . . .

Yeah, as suspected, that money is already accounted for.

The part that has me fired up is what continues after that ellipsis, together with a subsequent header item.

So make sure you put your best effort into your application before you apply.”

Counterproductive Efforts
“We know some research deans have quotas and force their PIs to submit applications regularly,” said Dr. Nakamura. “It’s important for them to know that university submission rates are not correlated with grant funding. Therefore, PIs should be encouraged to develop and submit applications as their research and ideas justify the effort to write them and have other scientists review them.”

As usual, I do not know if this is coming from ignorance or from a calculated strategy to make their numbers look better. I fear both possibilities. I'm going from memory here because I can't seem to rapidly find the related blog post or data analysis, but I recall an illustration that University-total grant submission rates did not predict University-total success rates.

At a very basic level, Nakamura is relying on the lie of the truncated distribution. If you don't submit any grant applications, your success rate is going to be zero. I'm sure he's excluding those cases, because including them would produce a nice correlation between submitting and getting funded.

But more importantly, he is trying to use university-wide measures to convince the individual PI what is best for her to do.

Wrong. Wrong. Wrong.

Not everyone's chances at that institution are the same. The more established investigators will probably, on average, enjoy a higher success rate. They can therefore submit fewer applications. Lesser folk enjoy lower success rates, and therefore they have to keep pounding out the apps to get their grants.

By extension, it takes very little imagination to understand that depending on your ratio of big important established scientists to noobs, and based somewhat on subfields, the apparent University-wide numbers are going to swamp out the information that is needed for each individual PI.
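To make the aggregation point concrete, here is a toy simulation; it is entirely my own construction with invented numbers, not anything from the data analysis I half-remember above. In the model, every individual PI increases her expected awards by submitting more, yet the university-level relationship between submission rate and success rate comes out negative.

```python
# Toy simulation (invented numbers): established PIs submit fewer apps
# at a higher per-application hit rate; junior PIs submit more apps at
# a lower hit rate. Universities differ only in their seniority mix.
import numpy as np

rng = np.random.default_rng(0)
n_univ, pis_per_univ = 50, 40

rows = []
for u in range(n_univ):
    frac_established = rng.uniform(0.2, 0.8)  # this university's mix
    for _ in range(pis_per_univ):
        if rng.random() < frac_established:
            p_hit, n_apps = 0.25, 1 + rng.poisson(2)  # senior PI
        else:
            p_hit, n_apps = 0.10, 1 + rng.poisson(5)  # junior PI
        rows.append((u, n_apps, rng.binomial(n_apps, p_hit)))

data = np.array(rows, dtype=float)
subs, succ = [], []
for u in range(n_univ):
    d = data[data[:, 0] == u]
    subs.append(d[:, 1].mean())                 # mean submissions per PI
    succ.append(d[:, 2].sum() / d[:, 1].sum())  # university success rate

print("University-level correlation:", np.corrcoef(subs, succ)[0, 1])
# Comes out negative -- yet for every single PI in the model, expected
# awards equal n_apps * p_hit, which only goes UP with more submissions.
```

A university-level scatterplot simply cannot tell an individual PI whether another submission helps her chances. At any fixed per-application hit rate, it obviously does.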

In short, this is just another version of the advice to young faculty to "write better grants, just like the greybeards do".

The trick is, the greybeards DO NOT WRITE BETTER GRANTS! I mean sure, yes, there is a small experience factor there. But the major driver is not the objective quality but rather the established track record of the big-deal scientist. This gives them little benefits of the doubt all over the place as we have discussed on this blog endlessly.

I have yet to hear from a newcomer to NIH grant review who has not, within 1-2 rounds, had the experience of a reviewer ending his/her review of a clearly lower-quality grant proposal with "....but it's Dr. BigShot and we know she does great work and can pull this off". Or similar.

I have been on a study section round or two in my day and I am here to tell you. My experience is not at all consistent with the idea that the "best" grants win out. Merit scores are not a perfect function of objective grant quality at all. Imperfectly prepared or boring grants get funded all the time. Really exciting and nearly-perfect grants get unfundable scores or triaged. Frequently.

This is because grant review hinges on the excitement of the assigned reviewers for the essence of the project. All else is detail.

You cannot beat this system by writing a "perfect" grant. Because it may not be perfect for all three reviewers no matter how well it has been prepared and how well vetted by whatever colleagues you have rounded up to advise you.

Nakamura should know this. He probably does. Which makes his "advice" a cynical ploy to decrease submissions so that his success rate will look better.

One caveat: I could simply be out of touch with all of these alleged Dean-motivated crap apps. It is true that I have occasionally seen people throw up grant applications that really aren't very credible from my perspective. They are very rare. And it has occasionally been the case that at least one other reviewer liked something about an application I thought was embarrassingly crappy. So go figure.

I also understand that there are indeed Deans or Chairs that encourage high submission rates and maybe this leads to PIs writing garbage now and again. But this does not account for the dismal success rates we are enjoying. I bet that magically disappearing all apps that a PI submitted to meet institutional vigor requirements (but didn't really mean to make a serious play for an award) would have no perceptible effect on success rates for the rest of us. I just haven't ever seen enough non-credible apps for this to make a difference. Perhaps you have another experience on study section, DearReaders?

Finally, I really hate this blame-the-victim attitude on the part of the CSR and indeed many POs. There are readily apparent and demonstrable problems with how some categories of PIs' grants are reviewed. Newer and less experienced applicants. African-American PIs. Women. Perhaps, although this is less well-explicated lately, those from the wrong Universities.

For the NIH to avoid fixing their own problems with review (for example the vicious cycle of study sections punishing ESI apps with ever-worsening scores when the NIH used special paylines to boost success rates) and then blame victims of these problems by suggesting they must be writing bad grants takes chutzpah. But it is wrong. And demoralizing to so many who are taking it on the chin in the grant review game.

And it makes the problems worse. How so? Well, as you know, Dear Reader, I am firmly convinced that the only way to succeed in the long term is to keep rolling the reviewer dice, hoping to get three individuals who really get what you are proposing. And to take advantage of the various little features of the system that respond to frequent submissions (reviewer sympathy, PO interest, extra end-of-year money, ARRA, sudden IC initiatives/directions, etc). Always, always you have to send in credible proposals. But perfect versus really good guarantees you nothing. And when perfect keeps you from submitting another really good grant? You are not helping your chances. So when Nakamura tells people to sacrifice the really good for the perfect, he is worsening their chances. Particularly when those people are in groups who are already at a disadvantage and need to work even harder* to make up for it.

__
*Remember, Ginther showed that African-American PIs had to submit more revisions to get funded.

25 responses so far

Your Grant in Review: Competing Continuation, aka Renewal, Apps

Jan 28 2016 Published under Grant Review, NIH, NIH Careerism

In the NIH extramural grant funding world, the maximum duration for a project is 5 years. At the end of a 5 year interval of support, it is possible to apply to continue that project for another interval. The application for the next interval is, in general, competitively reviewed alongside new project proposals in the relevant study sections.

Comradde PhysioProffe addressed the continuation application at his Ftb joint. NIAID has a FAQ page.

The NIH Success Rate data shows that RPG success rates were 16.8% in 2013 and 18.1% in 2014. Comparable rates for competing continuation RPG applications were 35% in 2013 and 39% in 2014. So you can see why this is important.

I visited these themes before in a prior post. I think I covered most of the issues but in a slightly different way.

Today I want to try to get you folks to talk about prescriptives. How should a competing continuation / renewal NIH grant application be reviewed?

Now in my experience, the continuation application hinges on past productivity in a way that a new application does not. Reviewers are explicitly considering the work that has been conducted under the support of the prior award. The application is supposed to include a list of publications that have resulted from the prior award. The application is supposed to detail a Progress Report that overviews what has been accomplished. So today I will be focusing on review mostly as it pertains to productivity. For reference, Berg's old post on the number of papers per grant dollar is here and shows an average output of 6 papers (IQR about 4-11) per $250K full-modular award*.

Quoted bits are from my prior post.

Did you knock our socks off? This could be amazing ELEVENTY type findings, GlamourPub record (whether “expected” for your lab or not), unbelievably revolutionary advances, etc. If you have a record of this, nobody is going to think twice about what your Aims may have been. Probably won’t even give a hoot whether your work is a close match to the funding IC, for that matter.

We should probably separate these for discussion because, after all, how often is a panel going to recognize that a Nobel Prize type of publication has been supported by the award in the past 5 years? So maybe we should consider Glamour publications and amazing advances as two different scenarios. Are these going to push any renewal application over the hurdle for you, even if the remaining items below are lacking? Does GlamMag substitute for direct attention to the experiments that were proposed or the Aims that guided the plan? In the extreme case, should we care if the work bears very little on the mission of the IC that has funded it?

Were you productive? Even if you didn’t WOW the world, if you’ve pumped out a respectable number of papers that have some discernible impact on a scientific field, you are in good shape. The more, the merrier. If you look “fabulously productive” and have contributed all kinds of interesting new science on the strength of your award(s), this is going to go down like gangbusters with the review panels. At this level of accomplishment you’d probably be safest at least be doing stuff that is vaguely in line with the IC that has funded your work.

Assuming that Glam may not be in the control of most PIs but that pedestrian, workaday scientific output is, should this be a major credit for the continuation application? We don't necessarily have to turn this into a LPU sausage-slicing discussion. Let's assume a quality of paper commensurate with the kind of work that most PIs with competitive applications in that particular study section publish. Meets the subfield standard. How important should raw productivity be?

Were you productive in addressing your overall goals? This is an important distinction from the Specific Aims. It is not necessary, in my view, that you hew closely to Aims first dreamed up 7 years prior to the conclusion of the actual study. But if you have moderate, or disappointing, productivity it is probably next most-helpful that you have published work related to the overall theme of the project. What was the big idea? What was mentioned in the first three sentences of your Specific Aims page? If you have published work related to this broad picture, that’s good.

This one is tricky. The reviewers do not have the prior grant application in front of them. They have the prior Summary Statement and the Abstract as published on RePORTER. It is a decent bet the prior Aims can be determined, but broader themes may or may not come across. So for the most part, if the applicant expects the reviewers to see that productivity has aligned with overarching programmatic goals, she has to tell them what those were. Presumably in the Progress Report part of the continuation application. How would you approach this as a reviewer? Suppose the project wasn't overwhelmingly productive and didn't obviously address all of the Aims, but at least generated some solid work along the general themes. Are you going to be satisfied? Or are you going to downgrade the failure to address each Aim? What if the project had to can an entire Aim or two? Would it matter? Is getting "stuck" in a single Aim a death knell when it comes time to review the next interval of support? And a related question: what if the same exact Aim has returned with the argument of "We didn't get to this in the past five years but it is still a good idea"? Neutral? Negative? AYFK?

Did you address your original Specific Aims? ...this can be a big obsession of certain reviewers. Not saying it isn’t a good idea to have papers that you can connect clearly to your prior Aims. ... A grant is not a contract. It is quite natural in the course of actual science that you will change your approaches and priorities for experiments. Maybe you’ve been beaten to the punch. Maybe your ongoing studies tell you that your original predictions were bad and you need to go in a whole new direction. Maybe the field as a whole has moved on. ... You might want to squeeze a drop out of a dry well to meet the “addressed Aims” criterion but maybe that money, effort and time would be better spent on a new direction which will lead to three pubs instead of one?

My original formulation of this isn't quite right for today's discussion. The last part is actually more relevant to the preceding point. For today, expand this to a continuation application showing that the prior work covers essentially exactly what the application proposed, with data either published or included as ready-to-submit Preliminary Data in the renewal. Maybe this was accomplished with only a few papers in pedestrian journals (Lord knows just about every one of my manuscript reviews these days gets at least one critique that calls for anywhere from 2 to 5 Specific Aims' worth of data), so we're not talking about Glam or fabulous productivity. But should addressing all of the Aims and most if not all of the proposed experiments be enough? Is this a credit to a competing continuation application?

It will be unsurprising to you that by this point of my career, I've had competing continuation applications to which just about all of these scenarios apply, save Glam. We've had projects where we absolutely nailed everything we proposed to do. We've had projects get distracted/sidelined off onto a subsection of the proposal that nevertheless generated about the same number and quality of publications that would have otherwise resulted. We've had low productivity intervals of support that addressed all the Aims and ones that merely covered a subset of key themes. We've had projects with reasonably high productivity that have....wandered....from the specifics of the awarded proposal due to things that are happening in the subfield (including getting scooped). We've never been completely blanked on a project with zero related publications to my recollection, but we've had some very low productivity ones (albeit with excellent excuses).

I doubt we've ever had a perfect storm of sky-high productivity, all Aims addressed and the overarching themes satisfied. Certainly I have the review comments to suggest this**.

I have also been present during review panel discussions of continuation applications where reviewers have argued bitterly over the various productivity attributes of a prior interval of support. The "hugely productive" arguments are frequently over an application from a PI who has more than one award and tends to acknowledge more than one of them on each paper. This can also involve debates about so called "real scientific progress" versus papers published. This can be the Aims, the overall theme or just about the sneer of "they don't really do any interesting science".

I have for sure heard from people who are obsessed during review with whether each proposed experiment has been conducted (this was back in the days when summary statements could be fairly exhaustive and revealed what was in the prior application to a broader extent). More generally from reviewers who want to match publications up to the scope of the general scientific terrain described by the prior application.

I've also seen arguments about suggested controls or key additional experiments which were mentioned in the summary statement of the prior review, never addressed in the resulting publications and may still be a criticism of the renewal application.

Final question: Since the reviewers of the competing continuation see the prior summary statement, they see the score and percentile. Does this affect you as a reviewer? Should it? Especially if, in your view, this particular application should never have been funded at that score and is likely a Programmatic pickup? Do you start steaming under the collar about special ESI paylines or bluehair/graybeard insider PO backslapping?

DISCLAIMER: As per usual, I may have competing continuation applications under current or near-future review by NIH study sections. I am an interested party in how they are reviewed.
__
*This probably speaks to my point about how multi-award PIs attribute more than one grant on each paper. My experience has not been that people in my field view 5 papers published per interval of support (and remember, the renewal application is submitted with the final year of funded support yet to go, if the project is to continue uninterrupted) as the expected value. It is certainly not viewed as the kind of fabulous productivity that would, of course, justify continuing the project. It is more in line with the bare minimum***. Berg's data are per grant dollar, of course, and are not exactly the same as per grant, but it is a close estimate. This blog post estimates "between 0.6 and 5 published papers per $100k in funding," which works out to roughly 1.5 to 12.5 papers per year of a full-modular NIH R01 ($250K direct costs per year). Big range, and that high number seems nigh on impossible to me without other funding (like free trainee labor or data parasitism).

**and also a pronounced lack of success renewing projects to go with it.

***I do not personally agree. At the point of submitting a competing continuation in year 4 a brand new research program (whether b/c noob PI or very new lab direction) may have really only been rocking for 2 years. And large integrated projects like a big human subjects effort may not even have enrolled all the subjects yet. Breeding, longitudinal development studies, etc - there are many models that can all take a long time to get to the point of publishing data. These considerations play....let us say variably, with reviewers. IME.

15 responses so far

Your Grant in Review: Skin in the Game

Jan 25 2016 Published under Careerism, NIH Careerism, Tribe of Science

Should people without skin in the game be allowed to review major research grants?

I mean those who are insulated from the results of the process. HHMI stalwarts, NIH intramural, national labs, company scientists...

On one hand, I see the argument that they provide needed outside opinions. To keep an insular, self-congratulating process honest.

On the other, one might observe that those who cannot be punished for bad behavior have license to be biased, jerky and driven by personal agenda.

Thoughts?

Would you prefer review by those who are subject to the funding system? Or doesn't it matter?

46 responses so far

Where will your lab be in 5 years?

Scientifically, that is.

I like the answer Zoe gave for her own question.

I, too, just hope to be viable as a grant funded research laboratory. I have my desires but my confidence in realizing my goals is sharply limited by the fact I cannot count on funding.

Edited to add:

When I was a brand new Assistant Professor I once attended a career stage talk of a senior scientist in my field. It wasn't an Emeritus wrap-up but it was certainly later career. The sort of thing where you expect a broad sweeping presentation of decades of work focused around a fairly cohesive theme.

The talk was "here's the latest cool finding from our lab". I was.....appalled. I looked over this scientist's publication record and grant funding history and saw that it was....scattered. I don't want to say it was all over the place, and there were certain thematic elements that persisted. But this was when I was still dreaming of a Grande Arc for my laboratory. The presentation was distinctly not that.

And I thought "I will be so disappointed in myself if I reach that stage of my career and can only give that talk".

I am here to tell you people, I am definitely headed in that direction at the moment. I think I can probably tell a slightly more cohesive story but it isn't far away.

I AM disappointed. In myself.

And of course in the system, to the extent that I think it has failed to support my "continuous Grande Arc Eleventy" plans for my research career.

But this is STUPID. There is no justifiable reason for me to think that the Grande Arc is any better than just doing a good job with each project, 5 years of funding at a time.

137 responses so far

NCI will ease that difficult transition to postdoc

I am still not entirely sure this is not an elaborate joke.

http://grants.nih.gov/grants/guide/rfa-files/RFA-CA-16-005.html

The purpose of the NCI Predoctoral to Postdoctoral Fellow Transition Award (F99/K00) is to encourage and retain outstanding graduate students who have demonstrated potential and interest in pursuing careers as independent cancer researchers. The award will facilitate the transition of talented graduate students into successful cancer research postdoctoral appointments, and provide opportunities for career development activities relevant to their long-term career goals of becoming independent cancer researchers.

The need for a transition mechanism that graduate students can apply for is really unclear to me.

Note: These are open to non-citizens on the appropriate visa. This is unlike the NRSA pre- and post-doc fellowships.

27 responses so far

Congress let the NIH drop the HIV/AIDS set-aside: Implications for NIDA?

Dec 15 2015 Published under Drug Abuse Science, NIH, NIH Budgets and Economics

Jocelyn Kaiser reported in Science Insider:

the National Institutes of Health (NIH) today announced it will no longer support setting aside a fixed 10% of its budget—or $3 billion this year—to fund research on the disease. The agency also plans to reprogram $65 million of its AIDS research grant funding this year to focus more sharply on ending the epidemic.

Whoa. Big news. This is an old Congressional mandate so presumably it needs Congress to be on board. More from Kaiser:

The changes follow growing pressure in Congress and from some advocacy groups for NIH to reallocate its funding based on the public health burden a disease causes.... some patient groups and members of Congress have recently asked why AIDS receives disproportionately far more than diseases with higher death rates, such as heart disease and Alzheimer’s....Last year, Congress omitted instructions asking NIH to maintain the 10% AIDS set aside.

Emphasis added. An act by omission is good enough for gov'mint work, eh? Congress is on board.

@jocelynkaiser was kind enough to link to relevant NIH budgetary distributions:

As you can see, NIDA devotes about $300M to HIV/AIDS research. The annual NIDA budget allocation is about $1B, so something on the order of 30% of the NIDA budget is (and has been) devoted to this Congressional mandate.

Wait, whut? What about that 10% mandate above? Yep, the HIV/AIDS money has not been evenly distributed across the ICs.

Now, I don't know exactly when and how all of this shook down. It was FY 1987 when the NIAID budget went up by something like 47%, while other similarly sized ICs didn't see anywhere near that percentage increase. Clearly 1986 was when Congress got serious about HIV/AIDS research. We can't assess the meaning of

AIDS has received 10% of NIH’s overall budget since the early 1990s, when Congress and NIH informally agreed it should grow in step with NIH’s overall budget.
...
NIH must treat AIDS dollars as a distinct pot of money within its overall budget. That is because a 1993 law carved out a separate HIV/AIDS budget, Collins says. And undoing that law would take action by Congress.

from this article. It is a little frustrating, to be frank. But...on to the NIDA situation.

NIDA doesn't appear in the NIH tables until FY1993 because it didn't actually join the NIH until 1992. Nevertheless, the NIDA history page notes:

1986: The dual epidemics of drug abuse and HIV/AIDS are recognized by Congress and the Administration, resulting in a quadrupling of NIDA funding for research on both major diseases.

There are many ways of looking at this situation.

Some in the NIDA world who are not all that interested in HIV/AIDS matters complain bitterly about why "A third of our budget is reserved for HIV/AIDS". Our.

Another way of looking at this would be "If Congress mandated NIH devote 10% of its budget to HIV/AIDS but NIH did this by incorporating NIDA with its existing HIV/AIDS funding then the entire rest of NIH is shirking its response to the mandate on the back of NIDA".

And yet a final way of looking at this* would be "Dude, NIDA wouldn't even have this money if not for Congress' interest in funding HIV/AIDS research so it isn't 'our' funding being diverted to HIV/AIDS research."

Is this important? Yes and no.

The news is potentially huge for those who seek to get the HIV/AIDS funding via NIDA grants and for those who seek non-HIV/AIDS funding. It makes matters slightly better for the latter and worse for the former. Right? If there is no special set-aside, the latter folks now have at least a shot at that $300M that had been out of reach for them. This consequently increases the competition for those who have HIV/AIDS relevant proposals. Who are presumably sad right now.

But it all depends on what Collins plans to do with his newly won freedom. Back to Kaiser:

Francis Collins agrees: At a meeting of his Advisory Committee to the Director (ACD) today, he noted that no other disease receives a set proportion of the NIH budget and the argument that AIDS still deserves such a set-aside is “not a defensible one.”

The end of the set-aside has “free[d] us up” to refocus NIH’s AIDs portfolio, Collins says.

However, the article then talks only about $65M being reprioritized. What about the rest of the 10% of the ~$30B/yr NIH budget? No idea.

So I want to know a few things. Is the $300M in the NIDA budget that goes to HIV/AIDS part of this 10% overall NIH mandate? If so, will Collins try to claw that back for some other agenda?

If a miracle occurs and it stays within NIDA, will Nora Volkow use this new-found freedom to ease the pressure on the non-HIV/AIDS researchers by letting them (ok, "us") get a shot at that previously-sequestered pool?

Or will Volkow use it to pay for the latest boondoggle initiatives of ABCD and BRAINI?

The way I hear it, the latter is likely to happen because, up to this point, all other NIDA initiatives are being squeezed** to make ABCD and BRAINI happen.

Obviously I would prefer to see Volkow choose to use this new freedom a little more democratically by spreading the love across all of the portfolio.

__
*this has been my view for some time now.

**this manifests, IME, as profound pessimism on the part of POs that anything in the grey zone (which is robust reality at no-public-payline-NIDA) will be picked up because all spare change is going to the two aforementioned boondoggles.

27 responses so far

Your Grant In Review: Errors of fact from incompetent reviewers

Dec 03 2015 Published under Grantsmanship, NIH, NIH Careerism

Bjoern Brembs has posted a lengthy complaint about the errors of fact made by incompetent reviewers of his grant application.

I get it. I really do. I could write a similarly penetrating exposé of the incompetence of reviewers on at least half of my summary statements.

And I will admit that I probably have these thoughts running through my mind on the first six or seven reads of the summary statements for my proposals.

But I'm telling you. You have to let that stuff eventually roll off you like water off the proverbial duck's back. Believe me*.

Brembs:

Had Reviewer #1 been an expert in the field, they would have recognized that in this publication there are several crucial control experiments missing, both genetic and behavioral, to draw such firm conclusions about the role of FoxP.
...
These issues are not discussed in the proposal, as we expect the reviewers to be expert peers.

Speaking for the NIH grant system only: you are an idiot if you expect this level of "expert peer" among the assigned reviewers of each and every one of your applications. I am not going to pretend to be an expert in this issue, but even I can suspect that the body of work in this area does not lead each and every person who is "expert" to the same conclusion. And therefore even an expert might disagree with Brembs on what reviewers should "recognize". A less-than-expert is going to be subject to a cursory or rapid reading of the related literature or, perhaps, an incomplete understanding from a prior episode of attending to the issue.

As a grant applicant, I'm sorry, but it is your job to make your interpretations clear, particularly if you know there are papers pointing in different directions in the literature.

More 'tude from the Brembster:

For the non-expert, these issues are mentioned both in our own FoxP publication and in more detail in a related blog post.
...
These issues are not discussed in the proposal, as we expect the reviewers to be expert peers. Discussing them at length on, e.g., a graduate student level, would substantially increase the length of the proposal.

These are repeated several times triumphantly as if they are some excellent sick burn. Don't think like this. First, NIH reviewers are not expected to do a lot of outside research reading your papers (or others') to apprehend the critical information needed to appreciate your proposal. Second, NIH reviewers are explicitly cautioned not to follow links to sites controlled by the applicant. DO. NOT. EXPECT. REVIEWERS. TO. READ. YOUR. BLOG! ...or your papers.

With respect to "graduate student level", it will be better for you to keep in mind that many peers who do not work directly in the narrow topic you are proposing to study have essentially a graduate student level acquaintance with your topic. Write your proposal accordingly. Draw the reader through it by the hand.

__
*Trump voice

79 responses so far

Scenes

In the past few weeks I have been present for the following conversation topics.

1) A tech professional working for the military complaining about some failure on the part of TSA to appropriately respect his SuperNotATerrorist pass that was supposed to let him board aircraft unmolested...unlike the rest of us riff raff. I believe having his luggage searched in secondary was mentioned, and some other delays of minor note. This guy is maybe early thirties, very white, very distinct regional American accent, good looking, clean cut... your basic All-American dude.

2) A young guy, fresh out of the military looking to get on with one of the uniformed regional service squad types of jobs. This conversation involved his assertions that you had to be either a woman or an ethnic minority to have a shot at the limited number of jobs available in any given cycle. Much of the usual complaining about how this was unfair and it should be about "merit" and the like. Naturally this guy is white, clean cut, relatively well spoken.... perhaps not all that bright, I guess.

3) A pair of essentially the most privileged people I know- mid-adult, very smart, blonde, well educated, upper middle class, attractive, assertive, parents, rock of community type of women. Literally *everything* goes in these women's direction and has for most of their lives. They had the nerve to engage in a long running conversation about their respective minor traffic stops and tickets and how unfair it was. How the cops should have been stopping the "real" dangers to society at some other location instead of nailing them for running a stop sign a little too much or right on red-ing or whatever their minor ticket was for.

One of the great things about modern social media is that, done right, it is a relatively non-confrontational way to start to see how other people view things. For me the days of reading science blogs and the women-in-academics blogs were a more personal version of some of the coursework I enjoyed in my liberal arts undergraduate education. It put me in touch with much of the thinking and experiences of women in my approximate career. It occasionally allowed me to view life events with a different lens than I had previously.

It is my belief that social media has also been important for driving the falling dominoes of public opinion on gay marriage over the past decade or so. Facebook connections to friends, family, and friends of the same provide a weekly? daily? reminder that each of us knows a lot of gay folks who are important to us, or at the very least are important to people who are important to us.

The relentless circulation of memes and Bingo cards, of snark and hilarity alike, remind each of us that there is a viewpoint other than our own.

And the decent people listen. Occasionally they start to see things the way other people do. At least now and again.

The so-called Black Twitter is similar in the way it has penetrated the Facebook and especially Twitter timelines and daily RTs of so many non-AfricanAmerican folks. I have watched this develop during Ferguson and through BlackLivesMatter and after shooting after shooting after shooting of young black people over the past two years.

During the three incidents that I mention, all I could think was "Wow, do you have any idea that this is the daily reality for many of your fellow citizens? And that it would hardly ever occur to non-white people to be so blindly outraged that the world should dare to treat them this way?" And "Wait, so are you saying it sucks to have a less-assured chance of gaining the career benefits you want due to the color of your skin or the nature of your dangly bits....it'll come to you in a minute".

This brings me to today's topic in academic science.

Nature News has an editorial on racial disparity in NIH grant awards. As a reminder the Ginther report was published in 2011. There are slightly new data out, generated from a FOIA request:

Pulmonologist Esteban Burchard and epidemiologist Sam Oh of the University of California, San Francisco, shared the data with Nature after obtaining them from the NIH through a request under the Freedom of Information Act. The figures show that under-represented minorities have been awarded NIH grants at 78–90% the rate of white and mixed-race applicants every year from 1985 to 2013

I will note that Burchard and Oh seem to be very interested in how the failure to include a diverse population in scientific studies may limit health care equality. So this isn't just about career disparity for these scientists; it is about their discipline and the health outcomes that result. Nevertheless, the point of these data is that under-represented minority PIs have less funding success than white PIs do. The gap has been a consistent feature of the NIH landscape through thick and thin budgets. Most importantly, it has not budged one bit in the wake of the Ginther report in 2011. With that said, I'm not entirely sure what we have learned here. The power of Ginther was that it went into tremendous analytic detail trying to rebut or explain the gross disparity with all of the usual suspect rationales. Trying....and failing. The end result of Ginther was that it was very difficult to make the basic disparate finding go away by considering other mediating variables.

After controlling for the applicant's educational background, country of origin, training, previous research awards, publication record, and employer characteristics, we find that black applicants remain 10 percentage points less likely than whites to be awarded NIH research funding.

The Ginther report used NIH grant data between FY 2000 and FY 2006. This new data set appears to run from 1985 to 2013, but of course only gives the aggregate funding success rate (i.e. the per-investigator rate), without looking at sub-groups within the under-represented minority pool. This leaves a big old door open for comments like this one:

Is it that the NIH requires people to state their race on their applications or could it be that the black applications were just not as good? Maybe if they just keep the applicant race off the paperwork they would be able to figure this out.

and this one:

I have served on many NIH study sections (peer review panels) and, with the exception of applicants with asian names, have never been aware of the race of the applicants whose grants I've reviewed. So, it is possible that I could have been biased for or against asian applicants, but not black applicants. Do other people have a different experience?

This one received an immediate smackdown with which I concur entirely:

That is strange. Usually a reviewer is at least somewhat familiar with applicants whose proposals he is reviewing, working in the same field and having attended the same conferences. Are you saying that you did not personally know any of the applicants? Black PIs are such a rarity that I find it hard to believe that a black scientist could remain anonymous among his or her peers for too long.

Back to social media. One of the tweeps who is, I think, pretty out as an underrepresented minority of science had this to say:


Not entirely sure it was in response to this Nature editorial, but the sentiment fits. If AfricanAmerican PIs who have been submitting grants to the NIH since the Ginther report was published in the late summer of 2011 (approximately 13 funding rounds ago, by my calendar) were expecting the kind of relief provided immediately to ESI PIs.....well, they are still looking in the mailbox.

The editorial

The big task now is to determine why racial funding disparities arise, and how to erase them. ...The NIH is working on some aspects of the issue — for instance, its National Research Mentoring Network aims to foster diversity through mentoring.

and the News piece:

in response to Kington’s 2011 paper, the NIH has allocated more than $500 million to programmes to evaluate how to attract, mentor and retain minority researchers. The agency is also studying biases that might affect peer review, and is interested in gathering data on whether a diverse workforce improves science.

remind us of the entirely toothless NIH response to Ginther.

It is part and parcel of the vignettes I related at the top. People of privilege simply cannot see the privileges they enjoy for what they are. Unless they are listening. Listening to the people who do not share the set of privileges under discussion.

I think social media helps with that. It helps me to see things through the eyes of people who are not like me and do not have my particular constellations of privileges. I hope even certain Twitter-refuseniks will come to see this one day.

90 responses so far

A simple suggestion for Deputy Director for Extramural Research Lauer and CSR Director Nakamura

Nov 19 2015 Published under Fixing the NIH, NIH, NIH Careerism

Michael S. Lauer, M.D., and Richard Nakamura, Ph.D. have a Perspective piece in the NEJM which is about "Reviewing Peer Review at the NIH". The motivation is captured at the end of the first paragraph:

Since review scores are seen as the proximate cause of a research project's failure to obtain support, peer review has come under increasing criticism for its purported weakness in prioritizing the research that will have the most impact.

The first half or more of the Perspective details how difficult it is even to define impact and how nearly impossible it is to predict in advance, and it ends with a very true observation: "There is a robust literature showing that expert opinion often fails to predict the future." So why proceed? Well, because

On the other hand, expert opinion of past and current performance has been shown to be a robust measure; thus, peer review may be more helpful when used to assess investigators' track records and renewal grants, as is typically done for research funded by the Howard Hughes Medical Institute and the NIH intramural program.

This is laughably illogical when it comes to NIH grant awards. What really predicts future performance and scientific productivity is who manages to land the grant award. The money itself facilitates the productivity. And no, they have never ever done this test, I guarantee you. When have they ever handed a whole pile of grant cash to a sufficient sample of the dubiously-accomplished (but otherwise reasonably qualified), removed most funding from a fabulously productive (and previously generously-funded) sample, and looked at the outcome?

But I digress. The main point comes later when the pair of NIH honchos are pondering how to, well, review the peer review at the NIH. They propose reporting broader score statistics, blinding review*, scoring renewals and new applications in separate panels and correlating scores with later outcome measures.

Notice what is missing? The very basic stuff of experimental design in many areas of research that deal with human judgment and decision making.

TEST-RETEST RELIABILITY.

INTER-RATER RELIABILITY.

Here is my proposal for Drs. Lauer and Nakamura. Find out first if there is any problem with the reliability of review for proposals. Take an allocation of grants for a given study section and convene a parallel section with approximately the same sorts of folks. Or get really creative and split the original panels in half and fill in the rest with ad hocs. Whenever there is a SEP convened, put two or more of them together. Find out the degree to which the same grants get fundable scores.

That's just the start. After that, start convening parallel study sections to, again, review the exact same pile of grants, except this time change the composition to see how reviewer characteristics may affect outcomes. Make women-heavy panels, URM-heavy panels, panels dominated by smaller University affiliations and/or less-active research programs, etc.

This would be a great chance to pit the review methods against each other too. They should review an identical pile of proposals in traditional face-to-face meetings versus phone-conference versus that horrible web-forum thing.

Use this strategy to see how each and every aspect of the way NIH reviews grants now might contribute to similar or disparate scores.

This is how you "review peer review", gentlemen. There is no point in asking whether peer review predicts X, Y or Z outcome for a given grant when funded if it cannot even predict itself in terms of what will get funded.
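If it helps to picture the exercise, here is a toy sketch of the test-retest question; the noise level and payline are invented for illustration and make no claim about real panel behavior.

```python
# Two simulated panels score the same applications: noisy reads on the
# same underlying merit, each funding its top 15%. How often do the
# fund/no-fund calls agree? All parameters are invented.
import numpy as np

rng = np.random.default_rng(42)
n_grants, payline_frac = 100, 0.15

true_merit = rng.normal(0, 1, n_grants)
noise_sd = 1.0  # reviewer noise on the same order as merit differences

def panel_fundable(merit, rng, noise_sd, frac):
    """One panel's verdict: noisy score, top `frac` get fundable calls."""
    scores = merit + rng.normal(0, noise_sd, merit.size)
    return scores >= np.quantile(scores, 1 - frac)

a = panel_fundable(true_merit, rng, noise_sd, payline_frac)
b = panel_fundable(true_merit, rng, noise_sd, payline_frac)

observed = np.mean(a == b)
chance = payline_frac**2 + (1 - payline_frac)**2  # agreement by luck alone
kappa = (observed - chance) / (1 - chance)        # Cohen's kappa
print(f"Agreement on fund/no-fund: {observed:.2f}, kappa: {kappa:.2f}")
```

Run the real version of this with actual parallel panels and you have your test-retest number; swap in panels of different composition and you have the inter-rater comparisons.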

__
*And by the way, when testing out peer review, make sure to evaluate the blinding. You have to ask the reviewers to say who they think the PIs are, their level of confidence, etc. And you have to actually analyze the results intelligently. It is not enough to say "they missed most of the time" if either the erroneous or correct guesses are not randomly distributed.
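For instance, here is a sketch of the kind of analysis I mean, with made-up counts (my illustration, not anything the NIH has proposed): cross-tabulate guess accuracy against some PI characteristic and test whether the hits are concentrated.

```python
# Are correct PI guesses randomly distributed, or concentrated among
# (say) prominent PIs? The counts below are made up for illustration.
from scipy.stats import chi2_contingency

#          guessed right  guessed wrong
table = [[40, 10],   # apps from prominent PIs (hypothetical counts)
         [15, 35]]   # apps from less-known PIs (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4g}")
# A small p-value here would mean the blinding fails selectively:
# reviewers de-anonymize the famous even if overall accuracy is poor.
```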

Additional Reading: Predicting the future

In case you missed it, the Lauer version of Rock Talk is called Open Mike.

Cite:
Reviewing Peer Review at the NIH
Michael S. Lauer, M.D., and Richard Nakamura, Ph.D.
N Engl J Med 2015; 373:1893-1895. November 12, 2015.
DOI: 10.1056/NEJMp1507427

35 responses so far
