Archive for the 'NIH funding' category

Closeout funding

Feb 05 2015 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

Within the past three years or so I had a Program Officer mention the idea of "closeout funding" to me.

One of my top few flabbergasting moments as an extramural scientist.

It referred, of course, to them using program discretion to give a softer landing to one of their favored long-time PIs who had failed to get a fundable score on a competing renewal. It was said in a context that made it clear it was a regular thing in their decision space.

This explains an awful lot of strange R56 BRIDGE (to nowhere) awards, I thought to myself.

I bring this up because I think it relates to this week's discussion of the proposed "emeritus award" concept.

13 responses so far

Your Grant in Review: Credible

Jan 30 2015 Published by under NIH, NIH Careerism, NIH funding, Peer Review

I am motivated to once again point something out.

In ALL of my advice to submit grant applications to the NIH frequently and on a diversity of topic angles, there is one fundamental assumption.

That you always, always, always send in a credible application.

That is all.

17 responses so far

These ILAF types just can't help sounding selfishly elitist, can they?

Good Gravy.

One David Korn of the Massachusetts General Hospital and Harvard Medical School has written a letter to Nature defending the indirect cost (IDC; "overhead") rates associated with NIH grants. It was submitted in response to a prior piece in Nature on IDC which was, to my eye, actually fairly good and tended to support the notion that IDC rates are not exorbitant.

But overall, the data support administrators’ assertions that their actual recovery of indirect costs often falls well below their negotiated rates. Overall, the average negotiated rate is 53%, and the average reimbursed rate is 34%.

The original article also pointed out why the larger private Universities have been heard from loudly, while the frequent punching-bag smaller research institutes with larger IDC rates are silent.

Although non-profit institutes command high rates, together they got just $611 million of the NIH’s money for indirect costs. The higher-learning institutes for which Nature obtained data received $3.9 billion, with more than $1 billion of that going to just nine institutions, including Johns Hopkins University in Baltimore, Maryland, and Stanford (see ‘Top 10 earners’).

Clearly Dr. Korn felt that this piece needed correction:

Aspects of your report on US federal funding of direct research costs and the indirect costs of facilities and administration are misleading (Nature 515, 326–329; 2014).

Contrary to your claim, no one is benefiting from federal largesse. Rather, the US government is partially reimbursing research universities for audit-verified indirect costs that they have already incurred.

Ok, ok. Fair enough. At the very least it is fine to underline this point if it didn't come across to every reader of the original Nature article.

The biomedical sciences depend on powerful technologies that require special housing, considerable energy consumption, and maintenance. Administration is being bloated by federal regulations, many of which dictate how scientists conduct and disseminate their research. It is therefore all the more remarkable that the share of extramural research spending on indirect costs by the US National Institutes of Health (NIH) has been stable at around 30% for several decades.

Pretty good point.

But then Korn goes on to step right in a pile.

Negotiated and actual recovery rates for indirect costs vary across the academic community because federal research funding is merit-based, not a welfare programme.

You will recognize this theme from a prior complaint from Boston-area institutions.

“There’s a battle between merit and egalitarianism,” said Dr. David Page, director of the Whitehead Institute, a prestigious research institution in Cambridge affiliated with MIT.

Tone deaf, guys. Totally tone deaf. Absolutely counter-productive to the effort to get a majority of Congress Critters on board with support for the NIH mission. Hint: Your various Massachusetts Critters get to vote once, just like the Critters from North and South Dakota, Alabama and everywhere else that doesn't have a huge NIH-funded research enterprise.

And why Korn chooses to use a comment about IDC rates to advance this agenda is baffling. The takeaway message is that he thinks higher IDC rates are awarded because His Awesome University deserves them due to the merit of its research. This totally undercuts the point he is trying to make, which is presumably that "institutions may be private or public, urban or rural, with different structures, sizes, missions and financial anatomies".

I just don't understand people who are this clueless and selfish when it comes to basic politics.

23 responses so far

NIGMS will now consider PIs' "substantial unrestricted research support"

According to the policy on this webpage, the NIGMS will now restrict award of its grants when the applicant PI has substantial other research support. It is effective for new grant applications submitted on or after Jan 2, 2015.

The clear statement of purpose:

Investigators with substantial, long-term, unrestricted research support may generally hold no more than one NIGMS research grant.

The detail:

For the purposes of these guidelines, investigators with substantial, long-term, unrestricted support (“unrestricted investigators”) would have at least $400,000 in unrestricted support (direct costs excluding the principal investigator’s salary and direct support of widely shared institutional resources, such as NMR facilities) extending at least 2 years from the time of funding the NIGMS grant. As in all cases, if NIGMS funding of a grant to an investigator with substantial, long-term, unrestricted support would result in total direct costs from all sources exceeding $750,000, National Advisory General Medical Sciences Council approval would be required

This $400,000 limit, extending for two years, would appear to mean $200,000 per year in direct costs? So basically the equivalent of a single additional R01's worth of direct-cost funding?
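
Just to make the guideline concrete, here is a minimal sketch (my own framing, not anything NIGMS publishes; the $400K/two-year and $750K thresholds come from the quoted policy, the function and its names are hypothetical) of how the two triggers interact:

```python
# Minimal sketch of the quoted NIGMS guideline; thresholds are from the policy
# text above, the function itself is hypothetical.
def nigms_flags(unrestricted_2yr_direct: float, total_annual_direct: float):
    """Return (is an 'unrestricted investigator', needs Council approval)."""
    unrestricted_investigator = unrestricted_2yr_direct >= 400_000      # ~$200K/yr
    needs_council = unrestricted_investigator and total_annual_direct > 750_000
    return unrestricted_investigator, needs_council

# Roughly one extra R01's worth of unrestricted support trips the first flag:
print(nigms_flags(unrestricted_2yr_direct=450_000, total_annual_direct=600_000))
# (True, False)
```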

I guess they are serious about the notion that two grants is fine but three-R01-level funding means you are a greedy commons-spoiling so-and-so.

51 responses so far

More on NIGMS's call for "shared responsibility"

Jan 07 2015 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding

The post from NIGMS Director Lorsch on "shared responsibility" (my blog post) has been racking up the comments, which you should go read.

As a spoiler, it is mostly a lot of the usual, i.e., Do it to Julia!

But two of the comments are fantastic. This one from anonymous really nails it down to the floor.

More efficient? My NOA for my R01 came in a few weeks ago for this year, and as usual, it has been cut. I will get ~$181,000 this year. Let’s break down the costs of running a typical (my) lab to illustrate that which is not being considered. I have a fairly normal sized animal colony for my field, because in immunology, nothing gets published well without knockouts and such. That’s $75,000 a year in cage per diem costs. Let’s cover 20% of my salary (with fringe, at 28.5%), one student, and one postdoc (2.20 FTE total). Total salary costs are then $119,800. See, I haven’t done a single experiment and my R01 is gone. How MORE efficient could I possibly be? Even if we cut the animals in half, I have only about $20,000 for the entire year for my reagents. Oh no, you need a single ELISA kit? That’s $800. That doesn’t include plates? Hell, that’s another $300. You need magnetic beads to sort cells, that’s $800 for ONE vial of beads. Wait, that doesn’t include the separation tubes? Another $700 for a pack. You need FACS sort time? That’s $100 an hour. Oh no, it takes 4 hours to sort cells for a single experiment? Another $400. It’s easy to spend $1500 on a single experiment given the extreme costs of technology and reagents, especially when using mice. Then, after 4 years of work, you submit your study (packed into a single manuscript) for publication and the reviewers complain that you didn’t ALSO use these 4 other knockout mice, and that the study just isn’t complete enough for their beloved journal. And you (the NIH) want me to be MORE efficient? I can’t do much of anything as it is.

Anyone running an academic research laboratory should laugh (or vomit) at the mere suggestion that most are not already stretching every penny to its breaking point and beyond.
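
For anyone who wants to check the commenter's arithmetic, here is a back-of-the-envelope sketch (all figures are taken from the quoted comment; the layout is mine):

```python
# Figures from the quoted comment; this just reproduces the arithmetic.
award = 181_000      # this year's cut R01 direct costs
cages = 75_000       # animal colony per diem for the year
salaries = 119_800   # 20% PI (with 28.5% fringe) + student + postdoc (2.20 FTE)

print(f"Committed before a single experiment:  ${cages + salaries:,}")                # $194,800
print(f"Left over at full colony size:         ${award - cages - salaries:,}")        # -$13,800
print(f"Left over with the colony cut in half: ${award - cages // 2 - salaries:,}")   # $23,700
```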

This is what is so phenomenally out of touch with Lorsch's concentration on the number of grants a PI holds. Most of us play in the full-modular space. Even people with multiple grants who have one that managed to get funded at a substantial upgrade from full-mod are going to have others at the modular limit. And even the above-modular grants often get cut more deeply than the modular-limit awards do.

The full-modular budget has not been adjusted for inflation, and its purchasing power is substantially eroded compared with a mere 15 years ago, when the modular budgeting approach was introduced.


[This graph depicts the erosion of purchasing power of the $250K/yr full-modular award in red and the amount necessary to maintain purchasing power in black. The inflation adjustment used was the BRDPI.]
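
As a rough illustration of that erosion (placeholder numbers only; I am assuming an average BRDPI-style inflation rate of about 3% per year rather than using the actual series behind the graph):

```python
# Placeholder arithmetic, not the graph's underlying data.
FULL_MODULAR = 250_000   # nominal full-modular direct-cost cap, unchanged since ~2000
RATE = 0.03              # assumed average annual BRDPI-style inflation rate

for years in (5, 10, 15):
    real_value = FULL_MODULAR / (1 + RATE) ** years   # purchasing power in year-0 dollars
    needed = FULL_MODULAR * (1 + RATE) ** years       # award needed to keep pace
    print(f"{years:2d} yrs: worth ~${real_value:,.0f}; ~${needed:,.0f} needed to keep pace")
```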

Commenter Pinko Punko also has a great observation for Director Lorsch.

The greatest and most massive inefficiency in the system is the high probability of a funding gap for NIGMS (and all other Institute) PIs. Given that gaps in funding almost always necessitate laying off staff, and prevent long-term retention of expertise, the great inefficiency here is that expertise cannot possibly be “on demand”. I know that you are also aware that given inflation, the NIH budget never actually doubled. There has likely been a PI bubble, but it is massively deflating with a huge cost.

The lowest quantum for funding units in labs is 1. Paylines are so low, it seems the only way to attempt to prevent a gap in funding is to have an overlap at some point, because going to zero is a massive hit when research needs to grind to a halt. It is difficult to imagine that there is a large number of excessively funded labs.

While I try to put a positive spin on the Datahound analysis showing the probability of a PI becoming re-funded after losing her last NIH award, the fact is that 60% of PIs do not return to funding. A prior DataHound post showed that something between 14-30% of PIs are approximately continuously-funded (extrapolating generously here from only 8 years of data). Within these two charts there is a HUGE story of the inefficiency of trying to maintain that funding for the people who will, in the career-long run, fall into that "continuously funded" category.

This brings me to the Discussion point of the day. Lorsch's blog post is obsessed with efficiency, which he asserts comes with modestly sized research operations, indexed approximately by the number of grant awards. Three R01s is his stated threshold for excessive grants, even though he cites data showing that $700K per year in direct costs is the most productive* amount of funding, i.e., three grants at a minimum.

I have a tale for you, Dear Reader. The greatly abridged version, anyway.

Once upon a time the Program Staff of ICx decided that they were interested in studies on Topic Y and so they funded some grants. Two were R01s obtained without revision. They sailed on for their full duration of funding. To my eye, there was not one single paper that resulted that was specific to the goals of Topic Y and damn little published at all. Interestingly there were other projects also funded on Topic Y. One of them required a total of 5 grant applications and was awarded a starter grant, followed by R01 level funding. This latter project actually produced papers directly relevant to Topic Y.

Which was efficient, Director Lorsch?
How could this process have been made more efficient?

Could we predict PI #3 was the one that was going to come through with the goods? Maybe we should have loaded her up with the cash and screw the other two? Could we really argue that funding all three on a shoestring was more efficient? What if the reason that the first two failed is that they just didn't have enough cash at the start to make a good effort on what was, obviously, a far from easy problem to attack.

Would it be efficient to take this scenario and give PI #3 a bunch of "people-not-projects" largesse at this point in time because she's proved able to move the scientific ball on this? Do we look at the overall picture and say "in for a penny, in for a pound"? Or do we fail to learn a damn thing and let the productive PI keep fighting against the funding cycles, the triage line and what not to keep the program going under our current approaches?

It may sound like I am leaning in one direction on this but really, I'm not. I don't know what the answer is. The distribution of success/failure across these three PIs could have been entirely different. As it happens, all three are pretty dang decent scientists. The degree to which they looked like they could kick butt on Topic Y at the point of funding their respective projects definitely didn't support the supremacy of PI#3 in the end analysis. But noobs can fail too. Sometimes spectacularly. Sometimes, as may have been the case in this little tale, people can fail because they simply haven't grown their lab operations large enough, fast enough to sustain a real program, particularly when one of the projects is difficult.

I assume, as usual, that this narrow little anecdote is worth relating because these are typical scenarios. Maybe not hugely common but not all that rare either. Common enough that a Director of an IC should be well aware.

When you have an unhealthy interest in the grant game, as do I, you notice this stuff. You can see it play out in RePORTER and PubMed. You can see it play out as you try to review competing-continuation proposals on study section. You see it play out in your sub-fields of interest and with your closer colleagues.

It makes you shake your head in dismay when someone makes assertions that they know how to make the NIH-funded research enterprise more efficient.

UPDATE: I realized that I really should say that the third project required at least five applications, since I'm going by the amended status of the *funded* awards. It is unknown whether there were unfunded apps submitted. It is also unknown whether either of the first two PIs tried to renew the awards and couldn't get a fundable score. I think I should also note that the third project was awarded funding in a context that triggers at least three of the major "this is the real problem" complaints being bandied about in Lorsch's comments section. The project that produced nothing at all, relevant or not, was awarded in a context that I think would align with these complainants' "good" prescription. FWIW.

__
*there are huge problems even with this assessment of productivity but we'll let that slide for now.

66 responses so far

NIGMS blog post on "shared responsibility"

Jan 05 2015 Published by under NIH, NIH Careerism, NIH funding

This is a fascinating read.

Jon Lorsch, current NIGMS Director, spreads around so much total nonsense that I just can't even deal.

And journals, professional societies and private funding organizations should examine the roles they can play in helping to rewire the unproductive incentive systems that encourage researchers to focus on getting more funding than they actually need.

Riiiight. We PIs out here in extramural land are focused on getting more grant money than we feel that we need. Because what? We enjoy grant writing? Is this guy nuts? When I feel like I have enough grant support to keep what is really a very modest operation afloat I quit writing grants! The problem is that Director Lorsch is really, really out of touch with what a PI in today's climate actually needs. "Unproductive incentive systems"? Dude, when NIGMS stops giving grants to anyone who publishes in Science, Nature or Cell, and starts beancounting Supplementary Data for reduced publication output, and punishes PIs for failing to publish data that is "scooped" or "not hot enough" etc, then maybe I will take you seriously. Jeepers. LOOK IN THE MIRROR, NIH!!!!!!!

But to achieve this increase, we must all be willing to share the responsibility and focus on efficiency as much as we have always focused on efficacy. In the current zero-sum funding environment, the tradeoffs are stark: If one investigator gets a third R01, it means that another productive scientist loses his only grant or a promising new investigator can’t get her lab off the ground. Which outcome should we choose?

Better to have everyone funded at $50K and sitting around doing nothing, right?

Although certain kinds of research projects—particularly those with an applied outcome, such as clinical trials—can require large teams, a 2010 analysis by NIGMS and a number of subsequent studies of other funding systems (Fortin and Currie, 2013; Gallo et al., 2014) have shown that, on average, large budgets do not give us the best returns on our investments in basic science.

The "2010 analysis" has been discussed here, I recall. It's flawed. It fails to recognize the cost of a Glamour Pub- love or hate, we have to admit that it takes a rich lab to play in that arena. One pub to the accountants has like 6-10 pubs worth of time/effort and probably data (buried in the Supplemental Materials). It fails to recognize there are going to be some scientific advances that simply cannot be accomplished for less. It fails to recognize the "efficiencies" and lack thereof associated with continued funding versus the scary roller coaster of a funding gap.

And Lorsch does a neat little bit of ju-jitsu with this post. Berg's analysis concluded that $700K was the peak of productivity. That is three concurrent full-modular R01s. Even with a traditional budget award, the PI has to have two of them awarded at $350K per year (and we know it really means more than that because of cuts) to hit this level. So the finger-pointing at the investigator who "gets a third R01" doesn't even square with his own citation on "efficiency", now does it?
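
The arithmetic behind that point, for the record (the $700K figure is from Berg's analysis as cited; the rest is just division):

```python
# Simple check of the "three grants" arithmetic; nothing here is official NIH math.
import math

PEAK_DIRECT = 700_000    # per-year direct costs at Berg's productivity peak
FULL_MODULAR = 250_000   # per-year cap on a full-modular R01

print(math.ceil(PEAK_DIRECT / FULL_MODULAR))  # 3 full-modular R01s to reach the peak
print(PEAK_DIRECT / 350_000)                  # 2.0 traditional awards at $350K/yr each
```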

Furthermore, the larger a lab gets, the more time the principal investigator must devote to writing grants and performing administrative tasks, further reducing the time available for actually doing science.

Good GRAVY, man! Do you have any idea what the hell time it is on PI street? A one-grant PI is constantly on the edge of disaster and is always writing furiously to increase that funding level to the point where she can finally breathe for six months. Protocols and registrations are most often one per lab, so that "administrative tasks" claim is also nonsense. It is the smaller-lab PI who spends more effort per grant on keeping the approvals in place.

These and other lines of evidence indicate that funding smaller, more efficient research groups will increase the net impact of fundamental biomedical research: valuable scientific output per taxpayer dollar invested.

[citation needed]

It may sound truthy but it is not at all obvious that this is the case. "More efficient" is tautological here but the smaller=efficient is not proven. At all. Especially when you are talking about the longer term- 30 years of a career. There is also the strong whiff of Magic Unicorn Leprechaun money about this comment.

My main motivation for writing this post is to ask the biomedical research community to think carefully about these issues.

You know what I would really like to ask? For the NIH to actually think carefully about these issues. Starting with Director Lorsch.

But reshaping the system will require everyone involved to share the responsibility.

Somehow I doubt he means this. Is there any evidence that NIGMS actually denies the super awesome PIs their extra R01s? Is there any evidence they do anything more than handwring about HHMI investigators with NIGMS grants? Is there any evidence they mean to face down the powerful first and make them equal to the rest of the drones before they take it out of the hide of the less-powerful? HA!

h/t: PhysioProf

23 responses so far

The new NIH Biosketch is here

Dec 02 2014 Published by under Grant Review, Grantsmanship, NIH, NIH Careerism, NIH funding

The NIH has notified us (NOT-OD-15-024) that as of Jan 25, 2015 all grant applications will have to use the new Biosketch format (sample Word docx).
[ UPDATE 12/05/14: The deadline has been delayed to apply to applications submitted after May 25, 2015 ]

The key change is Section C: Contribution to Science, which replaces the previous list of 15 publications.

C. Contribution to Science
Briefly describe up to five of your most significant contributions to science. For each contribution, indicate the historical background that frames the scientific problem; the central finding(s); the influence of the finding(s) on the progress of science or the application of those finding(s) to health or technology; and your specific role in the described work. For each of these contributions, reference up to four peer-reviewed publications or other non-publication research products (can include audio or video products; patents; data and research materials; databases; educational aids or curricula; instruments or equipment; models; protocols; and software or netware) that are relevant to the described contribution. The description of each contribution should be no longer than one half page including figures and citations. Also provide a URL to a full list of your published work as found in a publicly available digital database such as SciENcv or My Bibliography, which are maintained by the US National Library of Medicine.

The only clear win that I see here is for people who contribute to science in a way that is not captured in the publication record. This is captured by the above suggestions of non-publication products which previously had no place other than the Personal Statement. I see this as a good move for those who fall into this category.

For the regular old run-of-the-mill Biosketches, I am not certain this addresses any of the limitations of the prior system. And it clearly hurts in a key way.

One danger I see lying ahead is that the now-necessary bragging about significant contributions may trigger 1) arguments over the validity of the claim and 2) ill will about the almost inevitable overshadowing of the other people who also made related contributions. The example biosketch leads with a claim to having "changed the standards of care for addicted older adults". This is precisely the sort of claim that is going to start arguments. There is no way that a broad, sweeping change in clinical care rests on the work of one person. No way, no how.

If the Biosketch says "we're one of twenty groups who contributed...", well, this is going to look like you are a replaceable cog. Clearly you can't risk doing that. So you have risks ahead of you in trying to decide what to claim.

The bottom line here is that you are telling reviewers what they are supposed to think about your pubs, whereas previously they simply made their own assumptions. It has upside for the reviewer who is 1) positively disposed toward the application and 2) less familiar with your field but man......it really sets up a fight.

Another thing I notice is the swing of the pendulum. Some time ago, publications were limited to 15, which placed a high premium on customizing the Biosketch to the specific application at hand. This swings back in the opposite direction because it asks for Contribution to Science, not Contribution to the Relevant Subfield. The above-mentioned need to brag about unique awesomeness also shifts the emphasis to the person's entire body of work rather than the work that is most specific to the project at hand. On this factor, I am of less certain opinion about the influence on review.

Things that I will be curious to see develop.

GlamourMag- It will be interesting to see how many people say, in essence, that such and such was published in a high JIF journal so therefore it is important.

Citations and Alt-metrics- Will people feel it necessary to defend the claims to a critical contribution by pointing out how many citations their papers have received? I think this likely. Particularly since the "non-publication research products" have no conventional measures of impact, people will almost have to talk about downloads of their software, Internet traffic hits to their databases, etc. So why not do this for publications as well, eh?

Figures- all I can say is "huh"?

Sally Rockey reports on the pilot study they conducted with this new Biosketch format.

While reviewers and investigators had differing reactions to the biosketch, a majority of both groups agreed that the new biosketch was an improvement over the old version. In addition, both groups felt that the new format helped in the review process. Both applicants and reviewers expressed concerns, however, about the suitability of the new format for new investigators, but interestingly, investigators who were 40 years and older were more negative than those below age 40.

So us old folks are more concerned about the effects on the young than are the actual young. This is interesting to me since I'm one who feels some concern about this move being bad for less experienced applicants.

I'll note the first few comments posted to Rockey's blog are not enthusiastic about the pilot data.

87 responses so far

Expertise versus consistency

Nov 24 2014 Published by under Grant Review, NIH, NIH funding

In NIH grant review the standing study section approach to peer review sacrifices specific expertise for the sake of consistency of review.

When each person has 10 R01s to review, the odds are high that he or she is not the most specifically qualified person for all 10.

The process often brings in additional panel members to help cover scientific domains on a per-meeting basis but this is only partially effective.

The Special Emphasis Panel can improve on this but mostly it does so because the scope of the applications under review is narrower. Typically the members of an SEP still have to stretch a bit to review some of the assignments.

Specific expertise sounds good but can come at the cost of consistency. Score calibration is a big deal. You should have seen the look of horror on my face at dinner following my first study section when some guy said "I thought I was giving it a really good score...you guys are telling me that wasn't fundable?"

Imagine a study section with a normal sized load of apps in which each reviewer completes only one or two reviews. The expertise would be highly customized on each proposal but there might be less consistency and calibration across applications.

What say you, Dear Reader? How would you prefer to have your grants reviewed?

25 responses so far

Top down or bottom up? NIH RFAs are a two-way discussion between Program and Investigators

One of the erroneous claims made by Steven McKnight in his latest screed at the ASBMB President's space has to do with the generation of NIH funding priorities. Time will tell whether this is supposed to be a pivot away from his inflammatory comments about the "riff raff" that populate the current peer review study sections or whether this is an expansion of his "it's all rubbish" theme. Here he sets up a top-down / bottom-up scenario that is not entirely consistent with reality.

When science funding used to be driven in a bottom-up direction, one had tremendous confidence that a superior grant application would be funded. Regrettably, this is no longer the case. We instead find ourselves perversely led by our noses via top-down research directives coming from the NIH in the form of requests for proposals and all kinds of other programs that instruct us what to work on instead of asking us what is best.

I find it hard to believe that someone who has been involved with the NIH system as long as McKnight is so clueless about the generation of funding priorities within the NIH.

Or, I suppose, it is not impossible that my understanding is wrong and that I am jumping to unwarranted conclusions.

Nevertheless.

Having watched the RFAs that get issued over the years in areas that are close to my own interests, having read the wording very carefully, thought hard about who does the most closely-related work and seeing afterwards who is awarded funding... it is my belief that in many, many cases there is a dialog between researchers and Program that goes into the issuance of a specific funding announcement.

Since I have been involved directly in beating a funding priority drum (actually several instruments have been played) with the Program staff of a particular IC in the past few years, and they finally issued a specific Funding Opportunity Announcement (FOA) with text that looks suspiciously similar to stuff that I have written, well, I am even further confident of my opinion.

The issuance of many NIH RFAs, PAs and likely RFPs is not merely "top-down". It is not only a bunch of faceless POs sitting in their offices in Bethesda making up funding priorities out of whole cloth.

They are generating these ideas in a dialog with extramural scientists.

That "dialog" has many facets to it. It consists of the published papers and review articles, conference presentations, grant applications submitted (including the ones that don't get funded), progress reports submitted, conversations on the phone or in the halls at scientific meetings. These are all channels by which we, the extramural scientists, are convincing the Program staff of what we think is most important in our respective scientific domains. If our arguments are good enough, or we are joined by enough of our peers and the Program Staff agree there is a need to stimulate applications (PAs) or secure a dedicated pool of funding (RFAs, PASs) then they issue one of their FOA.

Undoubtedly there are other inputs that stimulate FOAs from the NIH ICs. Congressional interest expressed in public or behind the scenes. Agendas from various players within the NIH ICs. Interest groups. Companies. Etc.

No doubt. And some of this may result in FOAs that are really much more consistent with McKnight's charge of "...programs that instruct us what to work on".

But to suggest that all of the NIH FOAs are only "top-down" without recognizing the two-way dialog with extramural scientists is flat out wrong.

15 responses so far

Rockey looking to leave the NIH

Nov 03 2014 Published by under NIH, NIH Careerism, NIH funding

It looks like Sally Rockey, Deputy Director in charge of NIH's Office of Extramural Research since 2005, is looking to depart the NIH.

News accounts show that she is on the short-list to become the next President of the University (system) of Nebraska. Other shortlist candidates are a state-level commissioner of higher education, a State University President and a State University (system) chancellor.

In the US, some State University systems (i.e., multiple campuses which act as semi-autonomous Universities) call their campus heads President and the system-wide leader the Chancellor, whereas other systems reverse these titles. This job appears to be the system-wide leadership position, which explains why there are two system-level leaders in the hunt.

It also may influence your opinion on the appropriateness of someone who has been a research administrator her whole career being in the running for such a position. Obviously she is being looked at as some sort of Federal grant rainmaker/expert who can increase the amount of money that enters the University of Nebraska system from the Federal government and possibly other sources. I cannot imagine why else such a person, with no experience heading a University or University system, would be on the shortlist.

The main point of this news can be summed up in this handy figure from Jimmy Margulies, New Jersey Record, who was commenting on a different topic. The point remains, however.

The NIH is a sinking ship. I suspect that the folks at NIH realize this, and the ones who have the opportunity to cash in on their authoritah! by finding a nice top-level administrative gig at one of the supplicant Universities will do so. The have-not Universities, which find themselves in the most difficulty obtaining NIH funding, will be desperate to land a rainmaker, and even the "have" Universities may see this as a good investment. Especially if you can land an IC Deputy Director or better, you can argue that they have significant administrative experience within an organization not entirely unrelated to academics. It should be an easy sell for a search committee to make the argument for NIH insiders to be considered for University President positions, Deans of Research and the like.

Is it a smart move? Well yes, if you think that there will be some benefit to their insider status. If you think that the replacement figures and holdovers will take the calls of these NIH emigres and listen to the concerns of their new University.

UPDATE: This news account explains that an attempt to narrow Nebraska's open-records law was made when the previous President of the UN system resigned.

As the law stands now, candidates may be kept private until the search for a president is narrowed to a pool of at least four applicants, all of whom must be disclosed. The bill would have allowed search committees to keep confidential presidential, vice presidential and chancellor candidates until they’ve narrowed the pool to one finalist.

Proponents of the bill say a closed search would allow for a better pool of applicants, including those who may otherwise be hesitant to apply and jeopardize their current position by publicly seeking another one. Opponents say the current law allows for students, faculty, the general public and the media to meet, investigate and learn about the candidates.

Hadley introduced the bill on behalf of the University of Nebraska’s Board of Regents after President James Milliken announced last month that he would be leaving Nebraska to become chancellor of the City University of New York.

52 responses so far
