Archive for the 'NIH' category

Another day, another report on the postdocalypse

As mentioned in Science, a new report from the U.S. National Academies of Sciences, Engineering, and Medicine has concluded that we have a problem: too many PhDs and not enough of the jobs that they want.

The report responds to many years of warning signs that the U.S. biomedical enterprise may be calcifying in ways that create barriers for the incoming generation of researchers. One of the biggest challenges is the gulf between the growing number of young scientists who are qualified for and interested in becoming academic researchers and the limited number of tenure-track research positions available. Many new Ph.D.s spend long periods in postdoctoral positions with low salaries, inadequate training, and little opportunity for independent research. Many postdocs pursue training experiences expecting that they will later secure an academic position, rather than pursuing a training experience that helps them compete for the range of independent careers available outside of academia, where the majority will be employed. As of 2016, for those researchers who do transition into independent research positions, the average age for securing their first major NIH independent grant is 43 years old, compared to 36 years old in 1980.

No mention (in the executive summary / PR blurb) that the age of first R01 has been essentially unchanged for a decade, despite the NIH ESI policy and the invention of the K99, which is limited by years since PhD.

No mention of the reason that we have so many postdocs, which is the uncontrolled production of ever more PhDs.

On to the actionable bullet points that interest me.

Work with the National Institutes of Health to increase the number of individuals in staff scientist positions to provide more stable, non-faculty research opportunities for the next generation of researchers. Individuals on a staff scientist track should receive a salary and benefits commensurate with their experience and responsibilities.

This is a recommendation for research institutions, but we all need to think about this. The NCI launched the R50 mechanism in 2016 and has 49 of them on the books at the moment. I had some thoughts on why this is a good idea here and here. The question now, especially for those in the know about cancer research, is whether the R50 is being used to gain stability and independence for the needy awardee or whether it is just further larding up the labs of Very Important Cancer PIs.

Expand existing awards or create new competitive awards for postdoctoral researchers to advance their own independent research and support professional development toward an independent research career. By July 1, 2023, there should be a fivefold increase in the number of individual research fellowship awards and career development awards for postdoctoral researchers granted by NIH.

As we know, the number of NIH fellowships has remained essentially flat while the number of "postdocs" funded on research grant mechanisms has escalated hugely. We really don't know the degree to which independent fellowships simply anoint the already-chosen (population-wise) versus help the most worthy and deserving candidates stand out. Will quintupling the F32s magically make more faculty slots available? I tend to think not.

As we know, if you really want to grease the skids to faculty appointment the route is the K99/R00, or basically anything that means the prospective hire "comes with money". Work on that, NIH. Quintuple the K99s, not the F32s. And hand out more R03s or R21s, or invent up some other R-mechanism that prospective faculty can apply for in place of "mentored" K awards. I just had this brainstorm. R-mechs (any, really) that get some cutesy acronym (like B-START) and can be applied for by basically any non-faculty person from anywhere. Catch is, it works like the R00 part of the K99/R00. Only awarded upon successful competition for a faculty job and the offer of a competitive startup.

Ensure that the duration of all R01 research grants supporting early-stage investigators is no less than five years to enable the establishment of resilient independent research programs.

Sure. And invent up some "next award" special treatment for current ESI. And then a third-award one. And so on.

Or, you know, fix the problem for everyone which is that too many mouths at the trough have ruined the cakewalk that experienced investigators had during the eighties.

Phase in a cap – three years suggested – on salary support for all postdoctoral researchers funded by NIH research project grants (RPGs). The phase-in should occur only after NIH undertakes a robust pilot study of sufficient size and duration to assess the feasibility of this policy and provide opportunities to revise it. The pilot study should be coupled to action on the previous recommendation for an increase in individual awards.

This one got the newbie faculty all het up on the twitters; a couple of example threads made the rounds, if you are interested.

They are, of course, upset about two things.

First, "the person like me". Which of course is what drives all of our anger about this whole garbage fire of a career situation that has developed. You can call it survivor guilt, self-love, arrogance, whatever. But it is perfectly reasonable that we don't like the Man doing things that mean people just like us would have washed out. So people who were not super stars in 3 years of postdoc'ing are mad.

Second, there's a hint of "don't stop the gravy train just as I passed your damn insurmountable hurdle". If you are newb faculty and read this and get all angree and start telling me how terrible I am... you need to sit down and introspect a bit, friend. I can wait.

New faculty are almost universally against my suggestion that we all need to do our part and stop training graduate students. Less universally, but still frequently, they are against the idea that they should start structuring their career plans for a technician-heavy, trainee-light arrangement, with permanent career employees who do not get changed out for new ones every 3-5 years like leased Priuses.

Our last little stupid poll confirmed that everyone thinks 3-5 concurrent postdocs is just peachy for even the newest lab, and gee whillikers, where are they to come from?

Aaaanyway.
This new report will go nowhere, just like all the previous ones that reach essentially the same conclusion and make similar recommendations. Because it is all about the

1) Mouths at the trough.
and
2) Available slops.

We continue to breed more mouths, i.e., PhDs.

And the American taxpayers, via their duly appointed representatives in Congress, show no interest in radically increasing the budget for slops, i.e., science.

And even if Congress trebled or quintupled the NIH budget, all evidence suggests we'd just do the same thing all over again. Mint more PhDs like crazee and wonder in another 10-15 years why careers still suck.

63 responses so far

The Alleged Profession of Journalism's Sleazy Techniques Strike Again.

Apr 04 2018 Published by under Alleged Profession, NIH

One of the nastiest things that the alleged profession of journalism has been caught doing is photoshopping pictures to engage the prejudices of their readers. Probably the most famous of these was when TIME was caught darkening the appearance of OJ Simpson's mugshot during his murder trial.

In June of 1994, in the midst of OJ Simpson’s murder trial, both TIME magazine and Newsweek featured Simpson’s mugshot on their covers.
...
The two magazines were placed side by side on newsstands and the public immediately saw that TIME’s cover had considerably darkened Simpson’s skin. The photo, representing a case already laced with racial tension, caused massive public outcry.

In this they walk in lockstep with the sorts of sleazy tricks played by political advertising geniuses such as those that tried to play on racial prejudice in opposing President Obama.

Campaign ads have used darker images of Obama to appeal to voters' racial prejudice, a new study has revealed.

Researchers analyzed 126 ads from the campaign in 2008, and found that digital editing had changed the appearances of both Barack Obama and Republican opponent John McCain.

Sometimes they appeared more washed out, but the McCain campaign often used images in which Obama's skin appeared darker when they were attempting to link him with crime.

I was struck by the image used recently on STAT to head an article on the Director of the NIAAA, George Koob*.

Looks kinda sinister to me. The article, by Sharon Begley and Andrew Joseph, is one of a pair (so far) of pieces that appear to accuse Koob of being so far under the sway of the beverage industry that it influences which grants he approves for funding as NIAAA Director. That's a topic for another post, perhaps, but the issue of the day is the sleazy way that the alleged profession of journalism is fully willing to use pictures to create an impression consistent with its accusations. Just the way TIME did with the OJ mugshot. Just the way Republican political operatives did with pictures of President Obama.

The goal is to engage the prejudices of the reader so as to push them down the road to believing the case that you are supposedly making on more objective grounds.

Here's what a quick Google image search has to say about Koob's appearance.

You can compare the distribution of Koob's appearances to the one included in the STAT piece for yourself.

Now, where did STAT get the image? STAT credits it to themselves as an "illustration" and it looks to be sourced from an AP-credited photo from this article in japantimes.com. So yes, presumably their art department combed the web to find the picture that they wanted to use, selecting it from among all the available pictures of their subject, and then pshopped it into this "illustration".

Point being that they chose this particular image out of many. It's intentional.
_
*Disclaimer: I've been professionally acquainted with Koob since about 1991, at times fairly well-acquainted. I've heard him hold forth on the problems of alcohol and other substance misuse/dependence/addiction numerous times and have read a fair number of his reviews. I find him to be a pretty good guy, overall, with a keen intent to reduce the suffering associated with alcoholism and other substance dependencies. These recent accusations that he is somehow under the sway of the beverage industry strike me as really discordant with my experience of him over the past 27 years. Take my comments on this topic with that in mind.

25 responses so far

Question of the Day

How do you assess whether you are too biased about a professional colleague and/or their work?

In the sense that you would self-elect out of reviewing either their manuscripts for publication or their grant applications.

Does your threshold differ for papers versus grants?

Do you distinguish between antipathy bias and sympathy bias?

8 responses so far

NIH to crack down on violations of confidential peer review

Mar 30 2018 Published by under Fixing the NIH, NIH, NIH funding

CSR Director Richard Nakamura is quoted in a recent bit in Science by Jeffrey Brainard.

I'll get back to this later but for now consider it an open thread on your experiences. (Please leave off the specific naming unless the event got published somewhere.)

I have twice had other PIs tell me they reviewed my grant. I did not take it as any sort of quid pro quo beyond *maybe* a sort of "I wasn't the dick reviewer" thing. In both cases I barely acknowledged it and tried to move along. These were both scientists that I like both professionally and personally, so I assume I already have some pro-them bias. Obviously the fact that these people appeared on the review roster, and that they have certain expertise, made them top suspects in my mind anyway.

Updated:

“We hope that in the next few months we will have several cases” of violations that can be shared publicly, Nakamura told ScienceInsider. He said these cases are “rare, but it is very important that we make it even more rare.”

Naturally we wish to know how "rare" and what severity of violation he means.

Nakamura said. “There was an attempt to influence the outcome of the review,” he said. The effect on the outcome “was sufficiently ambiguous that we felt it was necessary to redo the reviews.”

Hmmm. "Ambiguous". I mean, if there is ever *any* contact from an applicant PI to a reviewer on the relevant panel it could be viewed as an attempt to influence outcome. Even an invitation to give a seminar or invitation to join a symposium panel proposal could be viewed as currying favor. Since one never knows how an implicit or explicit bias is formed, how would it ever be anything other than ambiguous? But if this is something clearly actionable by the NIH doesn't it imply some harder evidence? A clearer quid pro quo?

Nakamura also described the types of violations of confidentiality NIH has detected. They included “reciprocal favors,” he said, using a term that is generally understood to mean a favor offered by a grant applicant to a reviewer in exchange for a favorable evaluation of their proposal.

I have definitely heard a few third hand reports of this in the past. Backed up by a forwarded email* in at least one case. Wonder if it was one of these type of cases?

Applicants also learned the “initial scores” they received on a proposal, Nakamura said, and the names of the reviewers who had been assigned to their proposal before a review meeting took place.

I can imagine this happening** and it is so obviously wrong, even if it doesn't directly influence the outcome for that given grant. I can, however, see the latter rationale being used as self-excuse. Don't.

Nakamura said. “In the past year there has been an internal decision to pursue more cases and publicize them more.” He would not say what triggered the increased oversight, nor when NIH might release more details.

This is almost, but not quite, an admission that NIH is vaguely aware of an undercurrent of violations of the confidentiality of review. And that they also are aware that they have not pursued such cases as deeply as they should have. So if any of you have ever notified an SRO of a violation and seen no apparent result, perhaps you should be heartened.

oh and one last thing:

In one case, Nakamura said, a scientific review officer—an NIH staff member who helps run a review panel—inappropriately changed the score that peer reviewers had given a proposal.

SROs and Program Officers may also have dirt on their hands. Terrifying prospect for any applicant. And I rush to say that I have always found the SROs and POs I have dealt with directly to be upstanding people trying to do their best to ensure fair treatment of grant applications. I may disagree with their approaches and priorities now and again but I've never had reason to suspect real venality. However. Let us not be too naive, eh?

_
*anyone bold enough to put this in email... well, I would suspect this is chronic behavior from this person?

**we all want to bench race the process and demystify it for our friends. I can see many entirely well-intentioned reasons someone would want to tell their friend about the score ranges. Maybe even a sentiment that someone should be warned to request certain reviewers be excluded from reviewing their proposals in the future. But..... no. No, no, no. Do not do this.

29 responses so far

Delay, delay, delay

I'm not in favor of policies that extend the training intervals. Pub requirements for grad students are a prime example. The "need" to do two 3-5 year postdocs to be competitive is another. These are mostly problems made by the Professortariat directly.

But NIH has slipped into this game. Postdocs "have" to get evidence of funding, with F32 NRSAs and above all else the K99 featuring as top plums.

Unsurprisingly the competition has become fierce for these awards. And as with R-mechs this turns into the traffic pattern queue of revision rounds. Eighteen months from first submission to award if you are lucky.

Then we have the occasional NIH Institute which adds additional delaying tactics. "Well, we might fund your training award next round, kid. Give it another six months of fingernail biting."

We had a recent case on the twttrs where a hugely promising young researcher gave up on this waiting game and took a job in their home country, only to get notice that the K99 would fund. Too late! We (MAGA) lost them.

I want NIH to adopt a "one and done" policy for all training mechanisms. If you get out-competed for one, move along to the next stage.

This will decrease the inhumane waiting game. It will hopefully open up other opportunities (transition to quasi-faculty positions that allow R-mech or foundation applications) faster. And it will speed overall progress through the stages, yes, even to the realization that an alternate path is the right path.

29 responses so far

Variability in NIH Grant review is a good feature, not a bug, not a terrible indictment of the system

Mar 13 2018 Published by under Fixing the NIH, NIH, NIH Careerism, NIH funding, Peer Review

You may see more dead-horse flogging than usual, folks. The commentariat is not yet as vigorous as I might like.

This emphasizes something I had to say about the Pier monstrosity purporting to study the reliability of NIH grant review.
Terry McGlynn says:

Absolutely. We do not want 100% fidelity in the evaluation of grant "merit". If we did that, and review was approximately statistically representative of the funded population, we would all end up working on cancer in the end.

Instead, we have 27 Institutes and Centers. These are broken into Divisions that have fairly distinct missions. There are Branches within the Divisions, and multiple POs who may have differing viewpoints. CSR fields a plethora of study sections, many of which have partially overlapping missions, meaning a grant could be reviewed in any one of several different sections. A standing section might easily have 20-30 reviewers per meeting, and your grant might reasonably be assigned to any of several different permutations of three for primary assessment. Add to this the fact that reviewers change over time within a study section, even across rounds to which you are submitting approximately the same proposal. There should be no wonder whatsoever that the review outcome for a given grant might vary a bit under differing review panels.
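To put a rough number on that combinatorial point, here is a back-of-the-envelope sketch. All the counts (three candidate sections, a 25-reviewer panel, three assigned reviewers) are assumptions for illustration, not actual CSR figures.

```python
# Back-of-the-envelope: how many distinct review configurations might a
# single application plausibly draw? All numbers are illustrative guesses.
from math import comb

candidate_sections = 3   # study sections with overlapping missions
panel_size = 25          # reviewers at one meeting of a standing section
assigned_reviewers = 3   # primary-assessment slots per application

triplets_per_panel = comb(panel_size, assigned_reviewers)
print(triplets_per_panel)                       # 2300 possible triplets
print(candidate_sections * triplets_per_panel)  # 6900 configurations
```

And that is before counting roster turnover across rounds. The point is not the exact number; it is that the draw your grant gets is one configuration out of thousands.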

Do you really want perfect fidelity?

Do you really want that 50% triage and another 30-40% scored-outside-the-payline to be your unchangeable fate?

Of course not.

You want the variability in NIH Grant review to work in your favor.

If a set of reviewers finds your proposal unmeritorious do you give up* and start a whole 'nother research program? Eventually to quit your job and do something else when you don't get funded after the first 5 or 10 tries?

Of course not. You conclude that the variability in the system went against you this time, and come back for another try. Hoping that the variability in the system swings your way.

Anyway, I'd like to see more chit chat on the implicit question from the last post.

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process.

Well? How much reliability in the system do you want, Dear Reader?

__
*ok, maybe sometimes. but always?

13 responses so far

CSR and critics need to get together on their grant review studies

Mar 12 2018 Published by under Fixing the NIH, NIH, Peer Review

I was critical of a recent study purporting to show that NIH grant review is totally random, a study whose structural flaws could not have been designed more precisely to reach a foregone conclusion.

I am also critical of CSR/NIH self-studies. These are harder to track because they are not always published or well promoted. We often only get wind of them when people we know are invited to participate as reviewers. Often the results are not returned to the participants or are returned with an explicit swearing to secrecy.

I've done a couple of these self-study reviews for CSR.

I am not impressed by their designs either. Believe me.

As far as I've heard or experienced, most (all) of these CSR studies have the same honking flaw of restricted range. Funded applications only.

Along with other obscure design choices that seem to miss the main point*. One review pitted apps funded from closely-related sections against each other... except "closely-related" did not appear that close to me. It was more a test of whatever historical accident made CSR cluster those study sections, or perhaps a test of mission drift. A better way would have been to cluster study sections to which the same PI submits. Or by assigned PO, maybe? Or by a better keyword cluster analysis?

Anyway, the CSR designs are usually weird when I hear about them. They never want to convene multiple panels of very similar reviewers to review the exact same pile of apps in real time. Reporting on their self-studies is spotty at best.

This appears to my eye to be an attempt to service a single political goal. I.e. "Maintain the ability to pretend to Congress that grant review selects only the most meritorious applications for funding with perfect fidelity".

The critics, as we've seen, do the opposite. Their designs are manipulated to provide a high probability of showing NIH grant review is utterly unreliable and needs to be dismantled and replaced.

Maybe the truth lies somewhere in the middle? And if these forces would combine to perform some better research we could perhaps better trust jointly proposed solutions.

__

*I include the "productivity" data mining. NIH also pulls some sketchy stuff with these studies. Juking it carefully to support their a priori plans, rather than doing the study first and changing policy after.

3 responses so far

Agreement among NIH grant reviewers

Pier and colleagues recently published a study purporting to address the reliability of the NIH peer review process. From the summary:

We replicated the NIH peer-review process to examine the qualitative and quantitative judgments of different reviewers examining the same grant application. We found no agreement among reviewers in evaluating the same application. These findings highlight the subjectivity in reviewers’ evaluations of grant applications and underscore the difficulty in comparing the evaluations of different applications from different reviewers—which is how peer review actually unfolds.

emphasis added.

This thing is a crock and yet it has been bandied about on the Twitts as if it is the most awesome thing ever. "Aha!" cry the disgruntled applicants, "This proves that NIH peer review is horrible, terrible, no good, very bad and needs to be torn down entirely. Oh, and it also proves that it is a super criminal crime that some of my applications have gone unfunded, wah."

A smaller set of voices expressed perplexed confusion. "Weird", we say, "but probably our greatest impression from serving on panels is that there is great agreement of review, when you consider the process as a whole."

So, why is the study irretrievably flawed? In broad strokes it is quite simple.
Restriction of the range. Take a look at the first figure. Does it show any correlation of scores? Any fair view would say no. Aha! Whatever is being represented on the x-axis about these points does not predict anything about what is being represented on the y-axis.

This is the mistake being made by Pier and colleagues. They constructed four peer-review panels and had them review the same population of 25 grants. The trick is that, of these, 16 were already funded by the NCI and the remaining 9 were prior unfunded versions of grants that were later funded by the NCI.

In short, the study selects proposals from a very limited range of the applications being reviewed by the NIH. This figure shows the rest of the data from the above example. When you look at it like this, any fair eye concludes that whatever is being represented by the x value about these points predicts something about the y value. Anyone with the barest of understanding of distributions and correlations gets this. Anyone with the most basic understanding grasps that a distribution does not have to have perfect correspondence for there to be a predictive relationship between two variables.

So. The authors' claims are bogus. Ridiculously so. They did not "replicate" the peer review because they did not include a full range of scores/outcomes, but instead picked the narrowest slice of the funded awards. I don't have time to dig up historical data, but the current funding plan for NCI calls for a 10%ile payline. You can amuse yourself with the NIH success rate data here; the very first spreadsheet I clicked on gave a success rate of 12.5% for NCI R01s.
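If you want to see the restriction-of-range effect for yourself, here is a minimal simulation sketch. The panel-to-panel agreement value and the top-12% "funded" cutoff are invented for illustration (the cutoff loosely echoes the NCI payline and success rate figures above); this is not a model of the actual Pier data.

```python
# Minimal restriction-of-range demo: two review panels agree moderately
# across the full pool of applications, but that agreement mostly
# evaporates if you examine only the funded (top ~12%) slice.
import numpy as np

rng = np.random.default_rng(42)
n_apps = 100_000
rho = 0.5  # assumed full-range agreement between two panels

cov = [[1.0, rho], [rho, 1.0]]
panel_a, panel_b = rng.multivariate_normal([0.0, 0.0], cov, size=n_apps).T

full_r = np.corrcoef(panel_a, panel_b)[0, 1]

# "Fund" only the top 12% by panel A's score, then ask how well
# panel B agrees within that restricted slice.
funded = panel_a >= np.quantile(panel_a, 0.88)
restricted_r = np.corrcoef(panel_a[funded], panel_b[funded])[0, 1]

print(f"agreement, all applications:  r = {full_r:.2f}")        # ~0.50
print(f"agreement, funded slice only: r = {restricted_r:.2f}")  # ~0.24
```

Same underlying process, same reviewers; slice off only the top of the distribution and the measured "agreement" craters. That is the whole trick.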

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process.

The most fervent defenders of the general reliability of the NIH grant peer review process almost invariably will acknowledge that the precision of the system is not high. That the "top-[insert favored value of 2-3 times the current paylines]" scoring grants are all worthy of funding and have very little objective space between them.

Yet we still seem to see this disgruntled applicant phenotype: responding with raucous applause to a crock-of-crap conclusion like that of Pier and colleagues, and seeming to feel that somehow it is possible to have a grant evaluation system that is perfect. One that returns the exact same score for a given proposal each and every time*. I just don't understand these people.
__
Elizabeth L. Pier, Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, Mitchell J. Nathan, Cecilia E. Ford and Molly Carnes, Low agreement among reviewers evaluating the same NIH grant applications. 2018, PNAS: published ahead of print March 5, 2018, https://doi.org/10.1073/pnas.1714379115

*And we're not even getting into the fact that science moves forward and that what is cool today is not necessarily anywhere near as cool tomorrow

22 responses so far

What does it mean if a miserly PI won't pay for prospective postdoc visits?

Feb 20 2018 Published by under Careerism, NIH Careerism

It is indubitably better for the postdoctoral training stint if the prospective candidate visits the laboratory before either side commits. The prospective gets a chance to see the physical resources, gets a chance for very specific and focused time with the PI, and above all else, gets a chance to chat with the lab's members.

The PI gets a better opportunity to suss out strengths and weaknesses of the candidate, as do the existing lab members. Sometimes the latter can sniff things out that the prospective candidate does not express in the presence of the PI.

These are all good things and if you prospective trainees are able to visit a prospective training lab it is wise to take advantage.

If memory serves the triggering twittscussion for this post started with the issue of delayed reimbursement of travel and the difficulty some trainees have in floating expenses of such travel until the University manages to cut a reimbursement check. This is absolutely an important issue, but it is not my topic for today.

The discussion quickly went in another direction, i.e., whether it is meaningful to the trainee if the PI "won't pay for the prospective to visit". The implication being that if a PI "won't" fly you out for a visit to the laboratory, this is a bad sign for the future training experience, and of course all prospectives should strike that PI off their list.

This perspective was expressed by both established faculty and apparent trainees so it has currency in many stages of the training process from trainee to trainer.

It is underinformed.

I put "won't" in quotes above for a reason.

In many situations the PI simply cannot pay for travel visits for recruiting postdocs.

They may appear to be heavily larded with NIH research grants and still not have the ability to pay for visits. This is, in my experience and that of others chiming in on the Twitts, because our institutional grants management folks tell us it is against the NIH rules. There emerged some debate about whether this is true or whether said bean counters are making an excuse for their own internal rulemaking. But for the main issue today, this is beside the point.

Some PIs cannot pay for recruitment travel from their NIH R01(s).

Not "won't". Cannot. Now as to whether this is meaningful for the training environment, the prospective candidate will have to decide for herself. But this is some fourth level stuff, IMO. PIs who have grants management which works at every turn to free them from rules are probably happier than those that have local institutional policies that frustrate them. And as I said at the top, it is better, all else equal, when postdocs can be consistently recruited with laboratory visits. But is the nature of the institutional interpretation of NIH spending rules a large factor against the offerings of the scientific training in that lab? I would think it is a very minor part of the puzzle.

There is another category of "cannot" which applies semi-independently of the NIH rule interpretation: the PI may simply not have the cash. Due to lack of a grant, or lack of a non-Federal pot of funds, the PI may be unable to spend in the recruiting category even if other PIs at the institution can do so. Are these meaningful to the prospective? Well, the lack of a grant should be. I think most prospectives that seek advice about finding a lab will be told to check into the research funding. It is kind of critical that there be enough for whatever the trainee wants to accomplish. The issue of slush funds is a bit more subtle but sure, it matters. A PI with grants and copious slush funds may offer a better-resourced training environment. Trouble is, this comes with other correlated factors of importance. Bigger lab, more important jet-setting PI... these are going to be more likely to have extra resources. So it comes back to the usual trade-offs and considerations. In the face of that, it is unclear that the ability to pay for recruiting is a deciding factor. It is already correlated with other considerations the prospective is wrestling with.

Finally we get to actual "will not". There are going to be situations where the PI has the ability to pay for the visit but chooses not to. Perhaps she has a policy never to do so. Perhaps he only pays for the top candidates because they are so desired. Perhaps she does this for candidates when there are no postdocs in the lab but not when there are three already on board. Or perhaps he doesn't do it anymore because the last three visitors failed to join the lab*.

Are those bad reasons? Are they reasons that tell the prospective postdoc anything about the quality of the future training interaction?

__
*Extra credit: Is it meaningful if the prospective postdoc realizes that she is fourth in line, only having been invited to join the lab after three other people passed on the opportunity?

4 responses so far

Should NIH provide a transcript of the discussion of grants?

Feb 16 2018 Published by under Grant Review, NIH funding

Respected neuroscientist Bita Moghaddam seems to think this would be a good idea.

She then goes on to mention the fact that POs listen in on grant discussion, can take notes, and can give the PI a better summary of the discussion than emerges in the Resume of Discussion written by the SRO.

This variability in PO behavior then leads to some variability in the information communicated to the PI. I've had one experience where a PO gave me such chapter and verse on the discussion that it might have been slightly over the line (pre- and post-discussion scores). Maybe two others where the PO gave me a very substantial rundown. But for the most part POs have not been all that helpful: either they didn't attend, or they didn't pay attention that closely, or they just didn't care to tell me anything past the "we suggest you revise and resubmit" mantra. She has a good point that it is not ideal that there is so much variability. When I've touched on this issue in the past, I've suggested this is a reason to cultivate as many POs as possible in your grant writing, so that you have a chance of getting the "good" ones now and again. Would providing the transcript of discussion help? Maybe?

Or maybe we could just start lobbying the ICs of our fondest acquaintance to take the effort to make the POs behave more consistently.

But I have two problems with Professor Moghaddam's proposals. First, of course, is the quashing effect that de-anonymizing may have on honest and open comment (and while a transcript could still be anonymized, it is in the same vein: anything that makes reviewers hesitate to speak up). The second problem is that it reinforces the idea that properly revising a grant application is merely "doing what they said to do". Which then should, the thinking goes, make the grant fundable next time.

This is, as you know, not the way the system is set up to work, and it reflects a gut-feeling behavior of reviewers that the CSR works hard to counter. I don't know if having the transcript would help or hurt in this regard. I guess it would depend on the mindset of the PI when reading the transcript. If they were merely looking to suss out* the relative seriousness of the various critiques, perhaps this would be fine?

__
*My fear is that this would just feed the people who are looking to litigate their review to "prove" that they got screwed and deserve funding.

20 responses so far
