Archive for the 'NIH' category

Variability in NIH Grant review is a good feature, not a bug, not a terrible indictment of the system

Mar 13 2018 Published under Fixing the NIH, NIH, NIH Careerism, NIH funding, Peer Review

You may see more dead horse flogging than usual, folks. The commentariat is not as vigorous as I might like yet.

This emphasizes something I had to say about the Pier monstrosity purporting to study the reliability of NIH grant review.
Terry McGlynn says:

Absolutely. We do not want 100% fidelity in the evaluation of grant "merit". If we did that, and review was approximately statistically representative of the funded population, we would all end up working on cancer in the end.

Instead, we have 27 Institutes or Centers (ICs). These are broken into Divisions that have fairly distinct missions. There are Branches within the Divisions and multiple POs who may have differing viewpoints. CSR fields a plethora of study sections, many of which have partially overlapping missions, meaning a grant could be reviewed in any one of several different sections. A standing section might easily have 20-30 reviewers per meeting and your grant might reasonably be assigned to any of several different permutations of three for primary assessment. Add to this the fact that reviewers change over time within a study section, even across the rounds in which you are submitting approximately the same proposal. There should be no wonder whatsoever that the review outcome for a given grant might vary a bit under differing review panels.
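Just the assignment step makes the point on its own. Here's a minimal sketch (Python; the panel sizes are my assumptions for the arithmetic, not CSR figures) of how many distinct three-reviewer teams a single standing section could field:

```python
from math import comb

# How many distinct trios of assigned reviewers could a single standing
# study section produce? Panel sizes are illustrative assumptions.
for panel_size in (20, 25, 30):
    print(f"{panel_size} reviewers -> {comb(panel_size, 3):,} possible trios")

# 20 reviewers -> 1,140 possible trios
# 25 reviewers -> 2,300 possible trios
# 30 reviewers -> 4,060 possible trios
```

Multiply a few thousand possible trios by the several study sections your app could plausibly land in, and by the roster churn across rounds, and the space of possible review experiences for one application is enormous.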

Do you really want perfect fidelity?

Do you really want that 50% triage and another 30-40% scored-outside-the-payline to be your unchangeable fate?

Of course not.

You want the variability in NIH Grant review to work in your favor.

If a set of reviewers finds your proposal unmeritorious do you give up* and start a whole 'nother research program? Eventually to quit your job and do something else when you don't get funded after the first 5 or 10 tries?

Of course not. You conclude that the variability in the system went against you this time, and come back for another try. Hoping that the variability in the system swings your way.

Anyway, I'd like to see more chit chat on the implicit question from the last post.

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process.

Well? How much reliability in the system do you want, Dear Reader?

*ok, maybe sometimes. but always?

13 responses so far

CSR and critics need to get together on their grant review studies

Mar 12 2018 Published under Fixing the NIH, NIH, Peer Review

I was critical of a recent study purporting to show that NIH grant review is totally random; its structural flaws could not have been designed more precisely to reach a foregone conclusion.

I am also critical of CSR/NIH self-studies. These are harder to track because they are not always published or well promoted. We often only get wind of them when people we know are invited to participate as reviewers. Often the results are not returned to the participants or are returned with an explicit swearing to secrecy.

I've done a couple of these self-study reviews for CSR.

I am not impressed by their designs either. Believe me.

As far as I've heard or experienced, most (all) of these CSR studies have the same honking flaw of restricted range. Funded applications only.

Along with other obscure design choices that seem to miss the main point*. One review pitted apps funded from closely-related sections against each other... except "closely-related" did not appear that close to me. It was more a test of whatever historical accident made CSR cluster those study sections, or perhaps a test of mission drift. A better way would have been to cluster study sections to which the same PI submits. Or by assigned PO maybe? By a better key word cluster analysis?

Anyway, the CSR designs are usually weird when I hear about them. They never want to convene multiple panels of very similar reviewers to review the exact same pile of apps in real time. Reporting on their self-studies is spotty at best.

This appears to my eye to be an attempt to service a single political goal. I.e. "Maintain the ability to pretend to Congress that grant review selects only the most meritorious applications for funding with perfect fidelity".

The critics, as we've seen, do the opposite. Their designs are manipulated to provide a high probability of showing NIH grant review is utterly unreliable and needs to be dismantled and replaced.

Maybe the truth lies somewhere in the middle? And if these forces would combine to perform some better research, we could perhaps better trust jointly proposed solutions.


*I include the "productivity" data mining. NIH also pulls some sketchy stuff with these studies. Juking it carefully to support their a priori plans, rather than doing the study first and changing policy after.

3 responses so far

Agreement among NIH grant reviewers

Pier and colleagues recently published a study purporting to address the reliability of the NIH peer review process. From the summary:

We replicated the NIH peer-review process to examine the qualitative and quantitative judgments of different reviewers examining the same grant application. We found no agreement among reviewers in evaluating the same application. These findings highlight the subjectivity in reviewers’ evaluations of grant applications and underscore the difficulty in comparing the evaluations of different applications from different reviewers—which is how peer review actually unfolds.

emphasis added.

This thing is a crock and yet it has been bandied about on the Twitts as if it is the most awesome thing ever. "Aha!" cry the disgruntled applicants, "This proves that NIH peer review is horrible, terrible, no good, very bad and needs to be torn down entirely. Oh, and it also proves that it is a super criminal crime that some of my applications have gone unfunded, wah."

A smaller set of voices expressed perplexed confusion. "Weird", we say, "but probably our greatest impression from serving on panels is that there is great agreement of review, when you consider the process as a whole."

So, why is the study irretrievably flawed? In broad strokes it is quite simple.

Restriction of the range. Take a look at the first figure. Does it show any correlation of scores? Any fair view would say no. Aha! Whatever is being represented on the x-axis about these points does not predict anything about what is being represented on the y-axis.

This is the mistake being made by Pier and colleagues. They constructed four peer-review panels and had them review the same population of 25 grants. The trick is that, of these, 16 were already funded by the NCI and the remaining 9 were prior unfunded versions of grants that were eventually funded by the NCI.

In short, the study selects proposals from a very limited range of the applications being reviewed by the NIH. This figure shows the rest of the data from the above example. When you look at it like this, any fair eye concludes that whatever is being represented by the x value about these points predicts something about the y value. Anyone with the barest of understanding of distributions and correlations gets this. Anyone with the most basic understanding grasps that a distribution does not have to have perfect correspondence for there to be a predictive relationship between two variables.
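You can watch the artifact happen in a few lines of simulation. To be clear, this is a toy model and not the Pier data: I'm assuming two panels whose scores share a modest true correlation, then looking only at the top slice, roughly analogous to studying only funded grants near a 10%ile payline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: two panels read the same applications with independent noise
# around a shared "merit" signal. All numbers are illustrative assumptions.
n = 10_000
merit = rng.normal(size=n)
score_a = merit + rng.normal(size=n)  # panel A's noisy evaluation
score_b = merit + rng.normal(size=n)  # panel B's noisy evaluation

full_r = np.corrcoef(score_a, score_b)[0, 1]

# Restrict the range: keep only the applications panel A scored in its
# top ~12%, crudely analogous to a funded-only sample near a payline.
top = score_a >= np.quantile(score_a, 0.88)
restricted_r = np.corrcoef(score_a[top], score_b[top])[0, 1]

print(f"between-panel correlation, full range:   {full_r:.2f}")
print(f"between-panel correlation, top 12% only: {restricted_r:.2f}")
```

Run that and the between-panel agreement in the top slice comes out at roughly half the full-range value, and the narrower you cut the slice, the closer to zero it gets. Same underlying process, same reviewers; the "no agreement" result is manufactured by the sampling.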

So. The authors' claims are bogus. Ridiculously so. They did not "replicate" the peer review because they did not include a full range of scores/outcomes but instead picked the narrowest slice of the funded awards. I don't have time to dig up historical data but the current funding plan for NCI calls for a 10%ile payline. You can amuse yourself with the NIH success rate data here; the very first spreadsheet I clicked on gave a success rate of 12.5% for NCI R01s.

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process.

The most fervent defenders of the general reliability of the NIH grant peer review process almost invariably will acknowledge that the precision of the system is not high. That the "top-[insert favored value of 2-3 times the current paylines]" scoring grants are all worthy of funding and have very little objective space between them.

Yet we still seem to see this disgruntled applicant phenotype, responding with raucous applause to a crock-of-crap conclusion like that of Pier and colleagues. These folks seem to feel that somehow it is possible to have a grant evaluation system that is perfect, one that returns the exact same score for a given proposal each and every time*. I just don't understand these people.

Elizabeth L. Pier, Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, Mitchell J. Nathan, Cecilia E. Ford and Molly Carnes. Low agreement among reviewers evaluating the same NIH grant applications. PNAS, published ahead of print March 5, 2018.

*And we're not even getting into the fact that science moves forward and that what is cool today is not necessarily anywhere near as cool tomorrow.

20 responses so far

What does it mean if a miserly PI won't pay for prospective postdoc visits?

Feb 20 2018 Published under Careerism, NIH Careerism

It is indubitably better for the postdoctoral training stint if the prospective candidate visits the laboratory before either side commits. The prospective gets a chance to see the physical resources, gets a chance for very specific and focused time with the PI and, above all else, gets a chance to chat with the lab's members.

The PI gets a better opportunity to suss out strengths and weaknesses of the candidate, as do the existing lab members. Sometimes the latter can sniff things out that the prospective candidate does not express in the presence of the PI.

These are all good things and if you prospective trainees are able to visit a prospective training lab it is wise to take advantage.

If memory serves, the triggering twittscussion for this post started with the issue of delayed reimbursement of travel and the difficulty some trainees have in floating the expenses of such travel until the University manages to cut a reimbursement check. This is absolutely an important issue, but it is not my topic for today.

The discussion quickly went in another direction, i.e. if it is meaningful to the trainee if the PI "won't pay for the prospective to visit". The implication being that if a PI "won't" fly you out for a visit to the laboratory, this is a bad sign for the future training experience and of course all prospectives should strike that PI off their list.

This perspective was expressed by both established faculty and apparent trainees so it has currency in many stages of the training process from trainee to trainer.

It is underinformed.

I put "won't" in quotes above for a reason.

In many situations the PI simply cannot pay for travel visits for recruiting postdocs.

They may appear to be heavily larded with NIH research grants and still not have the ability to pay for visits. This is, in the experience of me and others chiming in on the Twitts, because our institutional grants management folks tell us it is against the NIH rules. There emerged some debate about whether this is true or whether said bean counters are making an excuse for their own internal rulemaking. But for the main issue today, this is beside the point.

Some PIs cannot pay for recruitment travel from their NIH R01(s).

Not "won't". Cannot. Now as to whether this is meaningful for the training environment, the prospective candidate will have to decide for herself. But this is some fourth level stuff, IMO. PIs who have grants management which works at every turn to free them from rules are probably happier than those that have local institutional policies that frustrate them. And as I said at the top, it is better, all else equal, when postdocs can be consistently recruited with laboratory visits. But is the nature of the institutional interpretation of NIH spending rules a large factor against the offerings of the scientific training in that lab? I would think it is a very minor part of the puzzle.

There is another category of "cannot" which applies semi-independently of the NIH rule interpretation- the PI may simply not have the cash. Due to lack of a grant or lack of a non-Federal pot of funds, the PI may be unable to spend in the recruiting category even if other PIs at the institution can do so. Are these meaningful to the prospective? Well, the lack of a grant should be. I think most prospectives that seek advice about finding a lab will be told to check into the research funding. It is kind of critical that there be enough for whatever the trainee wants to accomplish. The issue of slush funds is a bit more subtle but sure, it matters. A PI with grants and copious slush funds may offer a better resourced training environment. Trouble is, this comes with other correlated factors of importance. Bigger lab, more important jet-setting PI...these are going to be more likely to have extra resources. So it comes back to the usual trade-offs and considerations. In the face of that it is unclear that the ability to pay for recruiting is a deciding factor. It is already correlated with other considerations the prospective is wrestling with.

Finally we get to actual "will not". There are going to be situations where the PI has the ability to pay for the visit but chooses not to. Perhaps she has a policy never to do so. Perhaps he only pays for the top candidates because they are so desired. Perhaps she does this for candidates when there are no postdocs in the lab but not when there are three already on board. Or perhaps he doesn't do it anymore because the last three visitors failed to join the lab*.

Are those bad reasons? Are they reasons that tell the prospective postdoc anything about the quality of the future training interaction?

*Extra credit: Is it meaningful if the prospective postdoc realizes that she is fourth in line, only having been invited to join the lab after three other people passed on the opportunity?

4 responses so far

Should NIH provide a transcript of the discussion of grants?

Feb 16 2018 Published under Grant Review, NIH funding

Respected neuroscientist Bita Moghaddam seems to think this would be a good idea.

She then goes on to mention the fact that POs listen in on grant discussion, can take notes and can give the PI a better summary of the discussion than emerges in the Resume of Discussion written by the SRO.

This variability in PO behavior then leads to some variability in the information communicated to the PI. I've had one experience where a PO gave me such chapter and verse on the discussion that it might have been slightly over the line (pre- and post-discussion scores). Maybe two other ones where the PO gave me a very substantial run down. But for the most part POs have not been all that helpful- either they didn't attend or they didn't pay attention that closely or they just didn't care to tell me anything past the "we suggest you revise and resubmit" mantra. She has a good point that it is not ideal that there is so much variability. When I've touched on this issue in the past, I've suggested this is a reason to cultivate as many POs as possible in your grant writing so that you have a chance of getting the "good" ones now and again. Would providing the transcript of discussion help? Maybe?

Or maybe we could just start lobbying the ICs of our fondest acquaintance to take the effort to make the POs behave more consistently.

But I have two problems with Professor Moghaddam's proposals. First, of course, is the quashing effect that de-anonymizing may have on honest and open comment (and while a transcript could still be anonymized, it is in the same vein of making reviewers hesitate to speak up). The second problem is that it reinforces the idea that properly revising a grant application is merely "doing what they said to do". Which then should, the thinking goes, make the grant fundable next time.

This is, as you know, not the way the system is set up to work, and is a gut-feeling behavior of reviewers that the CSR works hard to counter. I don't know if having the transcript would help or hurt in this regard. I guess it would depend on the mindset of the PI when reading the transcript. If they were looking merely to suss out* the relative seriousness of the various critiques, perhaps this would be fine?

*My fear is that this would just feed the people who are looking to litigate their review to "prove" that they got screwed and deserve funding.

20 responses so far

NIH encourages pre-prints

In March of 2017 the NIH issued a notice on Reporting Preprints and Other Interim Research Products (NOT-OD-17-050): "The NIH encourages investigators to use interim research products, such as preprints, to speed the dissemination and enhance the rigor of their work."

The key bits:

Interim Research Products are complete, public research products that are not final.

A common form is the preprint, which is a complete and public draft of a scientific document. Preprints are typically unreviewed manuscripts written in the style of a peer-reviewed journal article. Scientists issue preprints to speed dissemination, establish priority, obtain feedback, and offset publication bias.

Another common type of interim product is a preregistered protocol, where a scientist publicly declares key elements of their research protocol in advance. Preregistration can help scientists enhance the rigor of their work.

I am still not happy about the reason this happened (i.e., Glam hounds trying to assert scientific priority in the face of the Glam Chase disaster they themselves created) but this is now totally beside the point.

The NIH policy (see OpenMike blog entry for more) has several implications for grant seekers and grant holders which are what form the critical information for your consideration, Dear Reader.

I will limit myself here to materials that are related to standard paper publishing. There are also implications for materials that would never be published (computer code?) but that is beyond the scope for today's discussion.

At this point I will direct you to bioRxiv and PsyArXiv if you are unfamiliar with some of the more popular approaches for pre-print publication of research manuscripts.

The advantages to depositing your manuscripts in a pre-print form are all about priority and productivity, in my totally not humble opinion. The former is why the Glamour folks are all a-lather but priority and scooping affect all of us a little differently. As most of you know, scooping and priority is not a huge part of my professional life but all things equal, it's better to get your priority on record. In some areas of science it is career making/breaking and grant getting/rejecting to establish scientific priority. So if this is a thing for your life, this new policy allows and encourages you to take advantage.

I'm more focused on productivity. First, this is an advantage for trainees. We've discussed the tendency of new scientists to list manuscripts "in preparation" on their CV or Biosketch (for fellowship applications, say, despite it being technically illegal). This designation is hard to evaluate. A nearing-defense grad student who has three "in prep" manuscripts listed on the CV can appear to be bullshitting you. I always caution people that if they list such things they had better be prepared to send a prospective post-doc supervisor a mostly-complete draft. Well, now the pre-print allows anyone to post "in preparation" drafts so that anyone can verify the status. Very helpful for graduate students who have a short timeline versus the all too typical cycle of submission/rejection/resubmission/revision, etc. More importantly, the NIH previously frowned on listing "in preparation" or "in review" items on the Biosketch. This was never going to result in an application being returned unreviewed but it could sour the reviewers. And of course any rule followers out there would simply not list any such items, even if there was a minor revision being considered. With pre-print deposition and the ability to list on a NIH biosketch and cite in the Research Plan there is no longer any vaporware type of situation. The reviewer can look at the pre-print and judge the science for herself.

This applies to junior PIs as well. Most likely, junior PIs will have fewer publications, particularly from their brand new startup labs. The ability of the PI to generate data from her new independent lab can be a key issue in grant review. As with the trainee, the cycle of manuscript review and acceptance is lengthy compared with the typical tenure clock. And of course many junior PIs are trying to balance JIF/Glam against this evidence of independent productivity. So pre-print deposition helps here.

A very similar situation can apply to us not-so-junior PIs who are proposing research in a new direction. Sure, there is room for preliminary data in a grant application but the ability to submit data in manuscript format to the bioRxiv or some such is unlimited! Awesome, right?

15 responses so far

Rigor, reproducibility and the good kid

Feb 09 2018 Published under Grant Review, NIH, NIH funding

I was the good kid.

In my nuclear family, in school and in pre-adult employment.

At one point my spouse was in a very large lab and observed how annoying it is when the PI reads everyone the riot act about the sins of a few lab-jerks.

Good citizens find it weird and off-putting when they feel criticized for the sins of others.

They find it super annoying that their own existing good behavior is not recognized.

And they are enraged when the jerko is celebrated for finally, at last managing to act right for once.

Many of us research scientists feel this way when the NIH explains what they mean by their new initiative to enhance "rigor and reproducibility".


"What? I already do that, so does my entire subfield. Wait.....who doesn't do that?" - average good-kid scientist response to hearing the specifics of the R&R initiative.

9 responses so far

SABV in NIH Grant Review

We're several rounds of grant submission/review past the NIH's demand that applications consider Sex As a Biological Variable (SABV). I have reviewed grants from the first round of this obligation until just recently and have observed a few things coming into focus. There's still a lot of wiggle and uncertainty but I am seeing a few things emerge in my domains of grants that include vertebrate animals (mostly rodent models).

1) It is unwise to ignore SABV.

2) Inclusion of both sexes has to be done judiciously. If you put a sex comparison in the Aim or too prominently as a point of hypothesis testing you are going to get the full blast of sex-comparisons review. Which you want to avoid because you will get killed on the usual- power, estrus effects that "must" be there, various caveats about why male and female rats aren't the same - behaviorally, pharmacokinetically, etc etc - regardless of what your preliminary data show.

3) The key is to include both sexes and say you will look at the data to see if there appears to be any difference. Then say the full examination will be a future direction or slightly modify the subsequent experiments.

4) Nobody seems to be fully embracing the SABV concept as it appears in the formal pronouncements, i.e., that you use sample sizes that are half males and half females in perpetuity if you don't see a difference (see the power sketch after this list). I am not surprised. This is the hardest thing for me to accept personally and I know for certain sure manuscript reviewers won't go for it either.

Then there comes the biggest categorical split in approach that I have noticed so far.

5a) Some people appear to use a few targeted female-including (yes, the vast majority still propose males as default and females as the SABV-satisfying extra) experiments to check main findings.

5b) The other take is just to basically double everything up and say "we'll run full groups of males and females". This is where it gets entertaining.
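On the power issue raised in points 2 and 4 above, a quick Monte Carlo sketch shows the bind (the effect size and group sizes here are my illustrative assumptions, not anything from NIH guidance). Hold total animal numbers fixed and split each group half-and-half by sex, and any within-study sex comparison now runs at half the n per cell:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative assumptions only: a healthy treatment effect and a typical
# behavioral-pharmacology group size.
d = 0.8        # treatment effect in SD units (Cohen's d)
n = 10         # animals per group in a single-sex design
sims = 5_000   # Monte Carlo replicates

def power_two_group(n_per_group, effect):
    """Simulated power of a two-sample t-test at alpha = 0.05."""
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hits += 1
    return hits / sims

print(f"treatment effect, n={n}/group:  power ~ {power_two_group(n, d):.2f}")
print(f"same effect within one sex, n={n // 2}/cell:  "
      f"power ~ {power_two_group(n // 2, d):.2f}")
```

Which is why the half-males, half-females expectation runs headlong into exactly the power critiques reviewers already love, and why approach 5b, fully duplicated groups, is the only version that survives a power analysis. It just doubles the animal budget instead.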

I have been talking about the fact that the R01 doesn't pay for itself for some time now.

A full modular, $250K per year NIH grant doesn't actually pay for itself.

the $250K full modular grant does not pay for itself. In the sense that there is a certain expectation of productivity, progress, etc on the part of study sections and Program that requires more contribution than can be afforded (especially when you put it in terms of 40 hr work weeks) within the budget.

The R01 still doesn't pay for itself and reviewers are getting worse

I have reviewed multiple proposals recently that cannot be done. Literally. They cannot be accomplished for the price of the budget proposed. Nobody blinks an eye about this. They might talk about "feasibility" in the sense of scientific outcomes or preliminary data or, occasionally, some perceived deficit of the investigators/environment. But I have not heard a reviewer say "nice but there is no way this can be accomplished for $250K direct".

Well, "we're going to duplicate everything in females" as a response to the SABV initiative just administered the equivalent of HGH to this trend. There is approximately zero real world dealing with this in the majority of grants that slap in the females and from what I have seen no comment whatever from reviewers on feasibility. We are just entirely ignoring this.

What I am really looking forward to is the review of grants in about 3 years time. At that point we are going to start seeing competing continuation applications where the original promised to address SABV. In a more general sense, any app from a PI who has been funded in the post-SABV-requirement interval will also face a simple question.

Has the PI addressed SABV in his or her work? Have they taken it seriously, conducted the studies (prelim data?) and hopefully published some things (yes, even negative sex-comparisons)?

If not, we should, as reviewers, drop the hammer. No more vague hand wavy stuff like I am seeing in proposals now. The PI had better show some evidence of having tried.

What I predict, however, is more excuse making and more bad faith claims to look at females in the next funding interval.

Please prove me wrong, scientists in my fields of study.

Additional Reading:
NIH's OER blog Open Mike on the SABV policies.
NIH Reviewer Guidance [PDF]

3 responses so far

Undue influence of frequent NIH grant reviewers

Feb 07 2018 Published under Fixing the NIH, Grant Review, NIH, NIH Careerism, NIH funding

A quotation

Currently 20% of researchers perform 75-90% of reviews, which is an unreasonable and unsustainable burden.

referencing this paper on peer review appeared in a blog post by Gary McDowell. It caught my eye when referenced on the twitts.

The stat is referencing manuscript / journal peer review and not the NIH grant review system but I started thinking about NIH grant review anyway. Part of this is because I recently had to re-explain one of my key beliefs about a major limitation of the NIH grant review system to someone who should know better.

NIH Grant review is an inherently conservative process.

The reason is that the vast majority of reviews of the merit of grant applications are provided by individuals who already have been chosen to serve as Principal Investigators of one or more NIH grant awards. They have had grant proposals selected as meritorious by the prior bunch of reviewers and are now are contributing strongly to the decision about the next set of proposals that will be funded.

The system is biased to select for grant applications written in a way that looks promising to people who have either been selected for writing grants in the same old way or who have been beaten into writing grants that look the same old way.

Like tends to beget like in this system. What is seen as meritorious today is likely to be very similar to what has been viewed as meritorious in the past.

This is further amplified by the social dynamics of a person who is newly asked to review grants. Most of us are very sensitive to being inexperienced, very sensitive to wanting to do a good job and feel almost entirely at sea about the process when first asked to review NIH grants. Even if we have managed to stack up 5 or 10 reviews of our proposals from that exact same study section prior to being asked to serve. This means that new reviewers are shaped even more by the culture, expectations and processes of the existing panel, which is staffed with many experienced reviewers.

So what about those experienced reviewers? And what about the number of grant applications that they review during their assigned term of 4 (3 cycles per year, please) or 6 (2 of 3 cycles per year) years of service? Either way that works out to about 12 rounds; with about 6-10 applications to review per round, this could easily mean highly influential (read: one of the three primary assigned reviewers) review of 100 applications. The person has additional general influence in the panel as well, both through direct input on grants under discussion and on the general tenor and tone of the panel.

When I was placed on a study section panel for a term of service I thought the SRO told us that empaneled reviewers were not supposed to be asked for extra review duties on SEPs or as ad hoc on other panels by the rest of the SRO pool. My colleagues over the years have disabused me of the idea that this was anything more than aspirational talk from this SRO. So many empaneled reviewers are also contributing to review beyond their home review panel.

My question of the day is whether this is a good idea and whether there are ethical implications for those of us who are asked* to review NIH grants.

We all think we are great evaluators of science proposals, of course. We know best. So of course it is all right, fair and good when we choose to accept a request to review. We are virtuously helping out the system!

At what point are we contributing unduly to the inherent conservativeness of the system? We all have biases. Some are about irrelevant characteristics like the ethnicity** of the PI. Some are considered more acceptable and are about our preferences for certain areas of research, models, approaches, styles, etc. Regardless these biases are influencing our review. Our review. And one of the best ways to counter bias is the competition of competing biases. I.e., let someone else's bias into the mix for a change, eh buddy?

I don't have a real position on this yet. After my term of empaneled service, I accepted or rejected requests to review based on my willingness to do the work and my interest in a topic or mechanism (read: SEPs FTW). I've mostly kept it pretty minimal. However, I recently messed up because I had a cascade of requests last fall that sucked me in- a "normal" panel (ok, ok, I haven't done my duty in a while), followed by a topic SEP (ok, ok I am one of a limited pool of experts I'll do it) and then a RequestThatYouDon'tRefuse. So I've been doing more grant review lately than I have usually done in recent years. And I'm thinking about scope of influence on the grants that get funded.

At some point is it even ethical to keep reviewing so damn much***? Should anyone agree to serve successive 4 or 6 year terms as an empaneled reviewer? Should one say yes to every SRO request that comes along? They are going to keep asking so it is up to us to say no. And maybe to recommend the SRO ask some other person who is not on their radar?

*There are factors which enhance the SRO pool picking on the same old reviewers, btw. There's a sort of expectation that if you have review experience you might be okay at it. I don't know how much SROs talk to each other about prospective reviewers and their experience with the same but there must be some chit chat. "Hey, try Dr. Schmoo, she's a great reviewer" versus "Oh, no, do not ever ask Dr. Schnortwax, he's toxic". There are the diversity rules that they have to follow as well- There must be diversity with respect to the geographic distribution, gender, race and ethnicity of the membership. So people that help the SROs diversity stats might be picked more often than some other people who are straight white males from the most densely packed research areas in the country working on the most common research topics using the most usual models and approaches.

**[cough]Ginther[cough, cough]

***No idea what this threshold should be, btw. But I think there is one.

18 responses so far

The past is prologue: Political NIH interference edition

Jan 24 2017 Published under NIH, NIH funding, Science Politics, Science Vault

From a prestigious general science journal:

"Important elements in both Senate and the House are showing increasing dissatisfaction over Congress's decade-long honeymoon with medical research....critics are dissatisfied...with the NIH's procedures for supervising the use of money by its research grantees....NIH officials..argued, rather, that the most productive method in financing research is to pick good people with good projects and let them carry out their work without encumbering them...its growth has been phenomenal....[NIH director}: nor do we believe that most scientific groups in the country have an asking and a selling price for their product which is research activity...we get a realistic appraisal of what they need to do the job..the supervisory function properly belongs to the universities and other institutions where the research takes place....closing remarks of the report are:...Congress has been overzealous in appropriating money for health research".

D.S. Greenberg, Medical Research Funds: NIH Path Through Congress Has Developed Troublesome Bumps, Science 13 Jul 1962, Vol. 137, Issue 3524, pp. 115-119
DOI: 10.1126/science.137.3524.115 [link]
Previously posted.

9 responses so far
