Archive for the 'NIH funding' category

NIH policy on A2 as A0 that I didn't really appreciate.

Jul 26 2018 Published under Grantsmanship, NIH, NIH funding

NOT-OD-18-197, issued this week, seeks to summarize policy on the submission of revised grant applications that had been spread across multiple prior notices. Part of this deals with the evolved compromise under which applicants are allowed only a single formal revision (the -xxA1 version) but are not prohibited from submitting a new one (a -01, aka another A0 version) with identical content, Aims, etc.

Addendum A emphasizes rules for compliance with Requirements for New Applications. The first one is easy: you are not allowed an extra Introduction page. Sure. That is what distinguishes the A1, the extra page for replying to the prior review.

After that it gets into the weeds. Honestly I would have thought this stuff all completely legal and might have tried using it, if the necessity ever came up.

The following content is NOT allowed anywhere in a New A0 Application or its associated components (e.g., the appendix, letters of support, other attachments):

Introduction page(s) to respond to critiques from a previous review
Mention of previous overall or criterion scores or percentile
Mention of comments made by previous reviewers
Mention of how the application or project has been modified since its last submission
Marks in the application to indicate where the application has been modified since its last submission
Progress Report

I think I might be most tempted to include the prior review outcome? Not really sure, and I've never done this to my recollection. Mention of prior comments? I think I've seen this in grants before. Maybe? Some sort of comment about prior review that did not mention the revision series.

Obviously you can accomplish most of this stuff within the letter of the law by not making explicit mention or marking of revision or of prior comments. You just address the criticisms and if necessary say something about "one might criticize this for...but we have proposed....".

The Progress Report prohibition is a real head scratcher. The Progress Report is included as a formal requirement with a competing continuation (renewal, in modern parlance) application. But it has to fit within the page limits, unlike either an Introduction or a List of Publications Resulting (also an obligation of renewal apps), both of which get you extra pages.

But the vast majority of NIH R01s include a report on the progress made so far. This is what is known as Preliminary Data! In the 25-page days, I tended to put Preliminary Data in a subsection with a header. Many other applications that I reviewed did something similar. It might as well have been called the Progress Report. Now I sort of spread Preliminary Data around the proposal, but there is a degree to which the Significance and Innovation sections do more or less form a report on progress to date.

There are at least two scenarios where grant writing behavior that I've seen might run afoul of this rule.

There is a style of grant writer that loves to place the proposal in the context of their long, ongoing research program. "We discovered... so now we want to explore....". or "Our lab focuses on the connectivity of the Physio-Whimple nucleus and so now we are going to examine...". The point being that their style almost inevitably requires a narrative that is drawn from the lab as a whole rather than any specific prior interval of funding. But it still reads like a Progress Report.

The second scenario is a tactical one in which a PI is nearing the end of a project and chooses to continue work on the topic area with a new proposal rather than a renewal application. Maybe there is a really big jump in Aims. Maybe the lab hasn't been productive on the previously proposed Aims. Maybe they just can't trust the timing and surety of the NIH renewal process and need to get a jump on the submission date. Given that this new proposal will have some connection to the ongoing work under a prior award, the PI may worry that the review panel will balk at overlap. Or at anticipated overlap, because reviewers might assume the PI will also be submitting a renewal application for that existing funding. In the old days you could get 2 or 3 R01s more or less on the same topic (dopamine and stimulant self-administration, anyone?) but I think review panels are unkeen on that these days. They are alert to signs of multiple awards on too-closely-related topics. IME, anyway. So the PI might try to head off the overlap concern and/or assure the reviewers that there is not going to be a renewal of the other award, in some modestly subtle way. This could take the form of a Progress Report. "We made the following progress under our existing R01 but now it is too far from the original Aims and so we are proposing this as a new project..." is something I could totally imagine writing.

But as we know, what makes sense to me for NIH grant applications is entirely beside the point. The NOT clarifies the rules. Adhere to them.

7 responses so far

Your Grant in Review: Scientific Premise

Jul 03 2018 Published under NIH Careerism, NIH funding

Scientific premise has become the latest headache of uncertainty in NIH grant crafting and review. You can tell because the NIH keeps having to issue clarifications about what it is, and is not. The latest is from Office of Extramural Research honcho Mike Lauer at his blog:

Clarifying what is meant by scientific premise
Scientific premise refers to the rigor of the prior research being cited as key support for the research question(s). For instance, a proposal might note prior studies had inadequate sample sizes. To help both applicants and reviewers describe and assess the rigor of the prior research cited as key support for the proposal, we plan to revise application instructions and review criteria to clarify the language.

Under Significance, the applicant will be asked to describe the strengths and weaknesses in the rigor of the prior research (both published and unpublished) that serves as the key support for the proposed project. Under Approach, the applicant will be asked to describe plans to address weaknesses in the rigor of the prior research that serves as the key support for the proposed project. These revisions are planned for research and mentored career development award applications that come in for the January 25, 2019 due date and beyond. Be on the lookout for guide notices.

My first thought was...great. Fan-friggin-tastic.

You are going to be asked to be more pointed about how the prior research all sucks. No more just saying things about too few studies, variance between related findings, or a pablum offering that "more research is needed". Oh no. You are going to have to call papers out for inadequate sample size, poor design, bad interpretation, using the wrong parameters or reagents or, pertinent to a recent twitter discussion, running their behavioral studies in the inactive part of the rodent daily cycle.

Now I don't know about all of y'all, but the study sections that review my grants have a tendency to be populated with authors of papers that I cite. Or by their academic progeny or mentors. Or perhaps their tight science homies that they organize symposia and conferences with. Or at the very least their subfield collective peeps that all use the same flawed methods/approaches.

The SABV requirement has, quite frankly, been bad ENOUGH on this score. I really don't need this extra NIH requirement to be even more pointed about the limitations of prior literature that we propose to set about addressing with more studies.

2 responses so far

On PO/PI interactions to steer the grant to the PI's laboratory

Jun 18 2018 Published under Alcohol, NIH, NIH funding

There has been a working group of the Advisory Committee to the Director (of NIH, aka Francis Collins) examining the Moderate Alcohol and Cardiovascular Health (MACH) Trial in the wake of a hullabaloo that broke into public view earlier this year. Background on this from Jocelyn Kaiser at Science, from the NYT, and the WaPo. (I took up the sleazy tactics of the alleged profession of journalism on this issue here.)

The working group's report is available now [pdf].

Page 7 of that report:

There were sustained interactions (from at least 2013) between the eventual Principal Investigator (PI) of the MACH trial and three members of NIAAA leadership prior to, and during development of, FOAs for planning and main grants to fund the MACH trial

These interactions appear to have provided the eventual PI with a competitive advantage not available to other applicants, and effectively steered funding to this investigator

Page 11:

NIH Institutes, Centers, and Offices (ICOs) should ensure that program staff do not inappropriately provide non-public information, or engage in deliberations that either give the appearance of, or provide, an advantage to any single, or subset of, investigator(s)

The NIH should examine additional measures to assiduously avoid providing, or giving the appearance of providing, an advantage to any single, or subset of, investigator(s) (for example, in guiding the scientific substance of preparing grant applications or responding to reviewer comments)

The webcast of the meeting of the ACD on Day 2 covers the relevant territory but is not yet available in archived format. I was hoping to find the part where Collins apparently expressed himself on this topic, as described here.

In the wake of the decision, Collins said NIH officials would examine other industry-NIH ties to make sure proper procedures have been followed, and seek out even “subtle examples of cozy relationships” that might undermine research integrity.

When I saw all of this I could only wonder if Francis Collins is familiar with the RFA process at the NIH.

If you read RFAs and take the trouble to see what gets funded out of them, you come to the firm belief that there are a LOT of "sustained interactions" between the PO(s) pushing the RFA and the PI who is highly desired to be the lucky awardee. The text of the RFAs in and of themselves often "giv(es) the appearance of providing... an advantage to any single, or subset of, investigator(s)". And they sure as heck provide certain PIs with "a competitive advantage not available to other applicants".

This is the way RFAs work. I am convinced. It is going to take a huge mountain of evidence to the contrary to counter this impression, which can be reinforced by looking at some of the RFAs in your closest fields of interest and seeing who gets funded and for what. If Collins cares to include the failed grant applications from those PIs that led up to the RFA being generated (in some cases), I bet he finds that this also supports the impression.

I really wonder sometimes.

I wonder if NIH officialdom is really this clueless about how their system works?

...or do they just have zero compunction about dissembling when they know full well that these cozy little interactions between PO and favored PI working to define Funding Opportunity Announcements are fairly common?

__
Disclaimer: As always, Dear Reader, I have related experiences. I've competed unsuccessfully on more than one occasion for a targeted FOA where the award went to the very obvious suspect lab. I've also competed successfully for funding on a topic for which I originally sought funding under those targeted FOAs; that takes the sting out. A little. I also suspect I have at least once received grant funding that could fairly be said to be the result of "sustained interactions" between me and Program staff that provided me "a competitive advantage", although I don't know the extent to which this was unavailable to other PIs.

10 responses so far

The Purchasing Power of the NIH Grant Continues to Erode

It has been some time since I made a figure depicting the erosion of the purchasing power of the NIH grant so this post is simply an excuse to update the figure.

In brief, the NIH modular budget system used for a lot of R01 awards limits the request to $250,000 in direct costs per year. A PI can ask for more, but they have to use a more detailed budgeting process, and there are a bunch of reasons I'm not going to go into here that make the "full-modular" a good starting point for discussion of the purchasing power of the typical NIH award.

The full modular limit was put in place at the inception of this system (i.e., for applications submitted after 6/1/1999) and has not been changed since. I've used FY2001 as my starting point for the $250,000 and then adjusted it in two ways according to the year-by-year BRDPI* inflation numbers. The red bars indicate the reduction in purchasing power of a static $250,000 direct cost amount. The black bars indicate the amount the full-modular limit would have to be escalated, year over year, to retain the same purchasing power that $250,000 conferred in 2001.
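For the record, the arithmetic is just compounding. Here is a minimal sketch of the two adjustments, with a flat 3% placeholder standing in for the real year-by-year BRDPI series (a roughly 3% average happens to land near the corrected FY2018 figure in the footnote):

```python
# Minimal sketch of the two adjustments plotted below. The 3% rate is a
# placeholder, not the real BRDPI numbers (those are in the NIH Office
# of Budget spreadsheet linked in the footnote).
MODULAR_CAP = 250_000                               # full-modular limit, unchanged since 1999
rates = {year: 0.03 for year in range(2002, 2019)}  # hypothetical BRDPI, FY2002-FY2018

equivalent = MODULAR_CAP  # black bars: cap needed to match FY2001 purchasing power
deflated = MODULAR_CAP    # red bars: what a static $250,000 buys, in FY2001 dollars
for year in sorted(rates):
    equivalent *= 1 + rates[year]
    deflated /= 1 + rates[year]

print(f"FY2018 equivalent of the FY2001 cap: ${equivalent:,.0f}")  # ~$413k at 3%
print(f"static cap in FY2001 dollars:        ${deflated:,.0f}")    # ~$151k at 3%
```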


[Figure: the static $250,000 full-modular limit in constant FY2001 dollars (red bars) and the inflation-adjusted equivalent limit (black bars), FY2001-FY2018]

The executive summary is that the NIH would have to increase the modular limit to $400,000** per year in direct costs for FY2018 (not the $450,000 originally posted) in order for PIs to have the same purchasing power that came with a full-modular grant award in 2001.
__
*The BRDPI inflation numbers that I used can be downloaded from the NIH Office of Budget. The 2017 and 2018 numbers are projected.

**I blew it. The BRDPI spreadsheet actually projects inflation out to 2023 and I pulled the number from 2021 projection. The correct FY2018 equivalent is $413,020.

5 responses so far

Stability of funding versus the project-based funding model of the NIH

May 09 2018 Published under Fixing the NIH, NIH, NIH Careerism, NIH funding

In response to a prior post, Morgan Price wonders about the apparent contrast between NIH's recent goal of stabilizing research funding and the supposed "project-based" model.

I don't see how stability based funding is consistent with project-based funding and "funding the best science". It would be a radical change...?

NIH grants are supposed to be selected and awarded on the basis of the specific project that is proposed. That is why there is such extensive detailing of a very specific area of science, well-specified Specific (not General!) Aims, and a listing of specific experiments.

They are not awarded on the basis of a general program of research that seems to be promising for continued funding.

Note that there are indeed mechanisms of funding that operate on the program level to a much greater extent, HHMI being one of the more famous. In a program-based award, the emphasis is on what the investigating team (and generally this means specifically the PI) has accomplished and published in recent years. There may be some hints about what the person plans to work on next, but generally the emphasis is on past performance rather than the specific nature of the future plan.

Consider the recent handwringing from NIH about how to sustain the investigators it has launched with special consideration for their newcomer status (e.g., Early Stage Investigator applications can be funded at lower priority scores / percentile ranks than would be needed by an established investigator):

if we are going to nurture meritorious, productive mid-career investigators by stabilizing their funding streams, monies will have to come from somewhere.

"Stabilizing", Morgan Price assumes is the same thing as a radical change. It is not.

Here's the trick:

The NIH funding system has always been a hybrid which pays lip service to "project based funding" as a model while blithely using substantial, but variable, input from the "program based" logic. First off, the "Investigator" criterion of proposal review is one of five supposedly co-equal major criteria. The Biosketch (which details the past accomplishments and skills of the PI) is prominent in the application. This Biosketch lists both papers and prior research grant support*, which inevitably leads to some degree of assessment of how productive the PI was with her prior awards. This then is used to judge the merit of the proposal that is under current review - sounds just a bit like HHMI, doesn't it?

The competing continuation application (called a Renewal application now) is another NIH beast that reveals the hybrid nature of the selection system. You are allowed to ask for no more than 5 years of support for a given project, but you can then ask for successive five-year extensions via competitive application review. This type of proposal requires a "Progress Report" and a list of papers resulting from the project within the application. This, quite obviously, focuses the review in large part on past accomplishment. Now, sure, the application also has to have a detailed proposal for the next interval. Specific Aims. Experiments listed. But it also has all of the prior accomplishments pushed into the center of the review.

So what is the problem? Why are Collins and Lauer proposing to make the NIH grant selection even more based on the research program? Well, times have changed. The figure here is a bit dated by now but I like to keep refreshing your view of it because NIH has this nasty tendency to truncate their graphs to only the past decade or so. The NIH does this to obscure just how good investigators had things in the 80s. That was when established investigators enjoyed success rates north of 40%. For all applications, not just for competing renewals. Many of the people who started their careers in those wonderful days are still very much with us, by the way. This graph shows that within a few years of the end of the doubling, the success rates for established investigators had dropped to about where the new investigators were in the 1980s. Success rates have only continued to get worse but thanks to policies enacted by Zerhouni, the established and new investigator success rates have been almost identical since 2007.
Interestingly, one of the things Zerhouni had to do was to insist that Program change their exception pay behavior. (This graph was recreated from a GAO report [PDF], page down to Page 56, PDF page 60.) It is relevant because it points to yet another way that the NIH system used to prioritize program qualities over the project qualities. POs historically were much more interested in "saving" previously funded, now unfunded, labs than they were in saving not-yet-funded labs.

Now we get to Morgan Price's point about "the best science". Should the NIH system be purely project-based? Can we get the best science one 5 year plan at a time?

I say no. Five years is not enough time to spool up a project of any heft into a well-honed and highly productive gig. Successful intervals of 5-year grants depend to a very large extent on what has come before. Oftentimes, adding the next 5 years of funding via Renewal leads to an even more productive interval because it leverages what has come before. Stepping back a little bit, gaps in funding can be deadly for a project. A project that has been killed off just as it is getting good is not only not the "best" science, it is hindered science. A lack of stability across the NIH system makes all of its work more expensive, because something headed off in Lab 1 (due to gaps in funding) can only be started up in Lab 2 at a handicap. Sure, Lab 2 can leverage the published results of Lab 1, but not the unpublished stuff, and not all of the various forms of expertise locked up in the heads of Lab 1's staff.

Of course if too much of the NIH allocation goes to sinecure program-based funding to continue long-running research programs, this leads to another kind of inefficiency. The inefficiencies of opportunity cost, stagnation, inflexibility and dead-woodery.

So there is a balance. Which no doubt fails to satisfy most everyone's preferences.

Collins and Lauer propose to do a bit of re-balancing of the program-based versus project-based relationship, particularly when it comes to younger investigators. This is not radical change. It might even be viewed in part as a selective restoration of past realities of grant funded science careers.

__
*In theory, the PI's grants are listed on the Biosketch merely to show the PI is capable of leading a project something like the one under review. Correspondingly, it would in theory be okay to list just the most successful ones and leave out the grant awards with under-impressive outcomes. After all, do you have to put in every paper? No. Do you have to put every bit of bad data that you thought might be preliminary data into the app? No. So why do you have to** list all of your grants? This is the program-based aspect of the system at work.

**dude, you have to. this is one of those culture of review things. You will be looked up on RePORTER and woe be to you if you try to hide some project, successful or not, that has active funding within the past three years.

14 responses so far

When Congress boosts the NIH budget midyear weird stuff happens

Apr 19 2018 Published under NIH, NIH Budgets and Economics, NIH funding

Interesting comment about NIGMS' recent solicitation of supplement applications for capital equipment infrastructure.

If true this says some interesting things about whether NIH will ever do anything to reduce churn, increase paylines and generally make things more livable for their extramural workforce.

9 responses so far

Question of the Day

How do you assess whether you are too biased about a professional colleague and/or their work?

In the sense that you would self-elect out of reviewing either their manuscripts for publication or their grant applications.

Does your threshold differ for papers versus grants?

Do you distinguish between antipathy bias and sympathy bias?

8 responses so far

NIH to crack down on violations of confidential peer review

Mar 30 2018 Published under Fixing the NIH, NIH, NIH funding

Nakamura is quoted in a recent bit in Science by Jeffrey Brainard.

I'll get back to this later but for now consider it an open thread on your experiences. (Please leave off the specific naming unless the event got published somewhere.)

I have twice had other PIs tell me they reviewed my grant. I did not take it as any sort of quid pro quo beyond *maybe* a sort of "I wasn't the dick reviewer" thing. In both cases I barely acknowledged it and tried to move along. These were both scientists that I like both professionally and personally, so I assume I already have some pro-them bias. Obviously the fact that these people appeared on the review roster, and that they have certain expertise, made them top suspects in my mind anyway.

Updated:

“We hope that in the next few months we will have several cases” of violations that can be shared publicly, Nakamura told ScienceInsider. He said these cases are “rare, but it is very important that we make it even more rare.”

Naturally we wish to know how "rare" and what severity of violation he means.

Nakamura said. “There was an attempt to influence the outcome of the review,” he said. The effect on the outcome “was sufficiently ambiguous that we felt it was necessary to redo the reviews.”

Hmmm. "Ambiguous". I mean, if there is ever *any* contact from an applicant PI to a reviewer on the relevant panel it could be viewed as an attempt to influence outcome. Even an invitation to give a seminar or invitation to join a symposium panel proposal could be viewed as currying favor. Since one never knows how an implicit or explicit bias is formed, how would it ever be anything other than ambiguous? But if this is something clearly actionable by the NIH doesn't it imply some harder evidence? A clearer quid pro quo?

Nakamura also described the types of violations of confidentiality NIH has detected. They included “reciprocal favors,” he said, using a term that is generally understood to mean a favor offered by a grant applicant to a reviewer in exchange for a favorable evaluation of their proposal.

I have definitely heard a few third hand reports of this in the past. Backed up by a forwarded email* in at least one case. Wonder if it was one of these type of cases?

Applicants also learned the “initial scores” they received on a proposal, Nakamura said, and the names of the reviewers who had been assigned to their proposal before a review meeting took place.

I can imagine this happening** and it is so obviously wrong, even if it doesn't directly influence the outcome for that given grant. I can, however, see the latter rationale being used as self-excuse. Don't.

Nakamura said. “In the past year there has been an internal decision to pursue more cases and publicize them more.” He would not say what triggered the increased oversight, nor when NIH might release more details.

This is almost, but not quite, an admission that NIH is vaguely aware of an undercurrent of violations of the confidentiality of review. And that they also are aware that they have not pursued such cases as deeply as they should. So if any of you have ever notified an SRO of a violation and seen no apparent result, perhaps you should be heartened.

oh and one last thing:

In one case, Nakamura said, a scientific review officer—an NIH staff member who helps run a review panel—inappropriately changed the score that peer reviewers had given a proposal.

SROs and Program Officers may also have dirt on their hands. A terrifying prospect for any applicant. And I rush to say that I have always found the SROs and POs that I have dealt with directly to be upstanding people trying to do their best to ensure fair treatment of grant applications. I may disagree with their approaches and priorities now and again, but I've never had reason to suspect real venality. However. Let us not be too naive, eh?

_
*anyone bold enough to put this in email....well I would suspect this is chronic behavior from this person?

**we all want to bench race the process and demystify it for our friends. I can see many entirely well-intentioned reasons someone would want to tell their friend about the score ranges. Maybe even a sentiment that someone should be warned to request certain reviewers be excluded from reviewing their proposals in the future. But..... no. No, no, no. Do not do this.

29 responses so far

Variability in NIH Grant review is a good feature, not a bug, not a terrible indictment of the system

Mar 13 2018 Published under Fixing the NIH, NIH, NIH Careerism, NIH funding, Peer Review

You may see more dead horse flogging than usual, folks. The commentariat is not yet as vigorous as I might like.

This emphasizes something I had to say about the Pier monstrosity purporting to study the reliability of NIH grant review.
Terry McGlynn says:

Absolutely. We do not want 100% fidelity in the evaluation of grant "merit". If we did that, and review was approximately statistically representative of the funded population, we would all end up working on cancer in the end.

Instead, we have 28 Institutes or Centers. These are broken into Divisions that have fairly distinct missions. There are Branches within the Divisions, and multiple POs who may have differing viewpoints. CSR fields a plethora of study sections, many of which have partially overlapping missions, meaning a given grant could be reviewed in any of several different sections. A standing section might easily have 20-30 reviewers per meeting, and your grant might reasonably be assigned to any of several different permutations of three for primary assessment. Add to this the fact that reviewers change over time within a study section, even across rounds to which you are submitting approximately the same proposal. There should be no wonder whatsoever that the review outcome for a given grant might vary a bit under differing review panels.
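Just the assignment combinatorics are worth a back-of-the-envelope check. A minimal sketch, with the roster size an illustrative midpoint of the 20-30 range above rather than any real section's count:

```python
from math import comb

roster = 25   # illustrative standing-section roster size
primary = 3   # reviewers assigned per application for primary assessment

# Number of distinct three-reviewer teams that could draw your grant
print(comb(roster, primary))  # 2300
```

And that is before layering on the choice among overlapping study sections, differing POs, and roster turnover between rounds.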

Do you really want perfect fidelity?

Do you really want that 50% triage and another 30-40% scored-outside-the-payline to be your unchangeable fate?

Of course not.

You want the variability in NIH Grant review to work in your favor.

If a set of reviewers finds your proposal unmeritorious do you give up* and start a whole 'nother research program? Eventually to quit your job and do something else when you don't get funded after the first 5 or 10 tries?

Of course not. You conclude that the variability in the system went against you this time, and come back for another try. Hoping that the variability in the system swings your way.

Anyway, I'd like to see more chit chat on the implicit question from the last post.

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process.

Well? How much reliability in the system do you want, Dear Reader?

__
*ok, maybe sometimes. but always?

13 responses so far

Agreement among NIH grant reviewers

Pier and colleagues recently published a study purporting to address the reliability of the NIH peer review process. From the summary:

We replicated the NIH peer-review process to examine the qualitative and quantitative judgments of different reviewers examining the same grant application. We found no agreement among reviewers in evaluating the same application. These findings highlight the subjectivity in reviewers’ evaluations of grant applications and underscore the difficulty in comparing the evaluations of different applications from different reviewers—which is how peer review actually unfolds.

emphasis added.

This thing is a crock and yet it has been bandied about on the Twitts as if it is the most awesome thing ever. "Aha!" cry the disgruntled applicants, "This proves that NIH peer review is horrible, terrible, no good, very bad and needs to be torn down entirely. Oh, and it also proves that it is a super criminal crime that some of my applications have gone unfunded, wah."

A smaller set of voices expressed perplexed confusion. "Weird", we say, "but probably our greatest impression from serving on panels is that there is great agreement of review, when you consider the process as a whole."

So, why is the study irretrievably flawed? In broad strokes it is quite simple.
Restriction of the range. Take a look at the first figure. Does it show any correlation of scores? Any fair view would say no. Aha! Whatever is being represented on the x-axis about these points does not predict anything about what is being represented on the y-axis.

This is the mistake being made by Pier and colleagues. They constructed four peer-review panels and had them review the same population of 25 grants. The trick is that, of these, 16 were already funded by the NCI and the remaining 9 were unfunded prior versions of grants that were eventually funded by the NCI.

In short, the study selects proposals from a very limited range of the applications being reviewed by the NIH. This figure shows the rest of the data from the above example. When you look at it like this, any fair eye concludes that whatever is being represented by the x value about these points predicts something about the y value. Anyone with the barest of understanding of distributions and correlations gets this. Anyone with the most basic understanding grasps that a distribution does not have to have perfect correspondence for there to be a predictive relationship between two variables.

So. The authors' claims are bogus. Ridiculously so. They did not "replicate" the peer review process because they did not include a full range of scores/outcomes, but instead picked the narrowest slice: the funded awards. I don't have time to dig up historical data, but the current funding plan for NCI calls for a 10%ile payline. You can amuse yourself with the NIH success rate data here; the very first spreadsheet I clicked on gave a success rate of 12.5% for NCI R01s.
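The restriction-of-range artifact is easy to demonstrate for yourself. Here is a minimal simulation, not the Pier et al. data or method: two noisy panel scores of the same applications, with the correlation recomputed after keeping only a payline-sized top slice (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Shared "merit" signal plus independent reviewer noise for two panels
merit = rng.normal(size=n)
panel_a = merit + rng.normal(size=n)
panel_b = merit + rng.normal(size=n)

r_full = np.corrcoef(panel_a, panel_b)[0, 1]

# Keep only applications in panel A's top ~12% (a payline-sized slice)
top = panel_a >= np.quantile(panel_a, 0.88)
r_sliced = np.corrcoef(panel_a[top], panel_b[top])[0, 1]

print(f"full range of applications: r = {r_full:.2f}")   # ~0.5 here
print(f"funded-slice only:          r = {r_sliced:.2f}") # much closer to zero
```

Same underlying process, same reviewers, same noise; sampling only the already-funded tail makes the apparent agreement all but vanish.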

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process.

The most fervent defenders of the general reliability of the NIH grant peer review process almost invariably will acknowledge that the precision of the system is not high. That the "top-[insert favored value of 2-3 times the current paylines]" scoring grants are all worthy of funding and have very little objective space between them.

Yet we still seem to see this disgruntled applicant phenotype, responding with raucous applause to a crock of crap conclusion like that of Pier and colleagues, who seem to feel that somehow it is possible to have a grant evaluation system that is perfect. One that returns the exact same score for a given proposal each and every time*. I just don't understand these people.
__
Elizabeth L. Pier, Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, Mitchell J. Nathan, Cecilia E. Ford and Molly Carnes, Low agreement among reviewers evaluating the same NIH grant applications. 2018, PNAS: published ahead of print March 5, 2018, https://doi.org/10.1073/pnas.1714379115

*And we're not even getting into the fact that science moves forward and that what is cool today is not necessarily anywhere near as cool tomorrow

22 responses so far
