Drugmonkey: Grants, research and drugs
http://drugmonkey.scientopia.org

Nine
http://drugmonkey.scientopia.org/2016/02/08/nine/
Mon, 08 Feb 2016

Nine years.

Nine years ago, my dismay at the way certain Ecstasy and pot enthusiasts conducted misinformation campaigns online, and at certain realities of the scientific career arc, reached a threshold.

I had been reading science blogs and, particularly, several ScienceBlogs, so the outlet immediately presented itself.

Much spleen has been vented and my sanity kept near the critical line.

I've read comments from people I would never have known otherwise (and in many cases still don't know, beyond the confines of this blog*), and I've learned a great deal as a consequence.

I've gotten to know people in my field that I would have known only at a handshake level. I've gotten to know some fantastic people in other fields or walks of life that I would have never run across.

In short, it has been a lot of fun writing this blog over the past nine years.

I can quit anytime I want.
*As recently as the last few months I've had a long-term blog commenter out themselves to me, and I was shocked to discover the commenter wasn't a woman, as I had assumed.

Amgen continues their cherry picking on "reproducibility" agenda
http://drugmonkey.scientopia.org/2016/02/05/amgen-continues-their-cherry-picking-on-reproducibility-agenda/
Fri, 05 Feb 2016

A report by Begley and Ellis, published in 2012, was hugely influential in fueling current interest in, and dismay about, the lack of reproducibility in research. In their original report the authors claimed that scientists at Amgen had been unable to replicate 47 of 53 studies.

Over the past decade, before pursuing a particular line of research, scientists (including C.G.B.) in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed 'landmark' studies (see 'Reproducibility of research findings'). It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result.

Despite the limitations identified by the authors themselves, this report has taken on a life of truthy citation, as if most or all biomedical science reports cannot be replicated.

I have remarked a time or two that this is ridiculous on the very grounds the authors themselves recognize, i.e., a company trying to skim the latest and greatest results for intellectual property and drug development purposes is not reflective of how science works. Also on the grounds that, until we know exactly which studies were involved, what they mean by "failed to replicate," and how hard they worked at it, there is no point in treating this as an actual result.

At first, the authors refused to say which studies or results were meant by this original population of 53.

Now we have the data! They have reported their findings! Nature announces breathlessly that "Biotech giant publishes failures to confirm high-profile science."

Awesome. Right?

Well, they published three of them, anyway. Three. Out of fifty-three alleged attempts.

Are you freaking kidding me, Nature? And you promote this like we're all cool now? We can trust their original allegation that 47 of 53 studies were unreplicable?


Christ what a disaster.

I look forward to hearing from experts in the respective fields these three papers inhabit. I want to know how surprising these replication failures are to them. I want to know the quality of the replication attempts and the nature of the "failure": was it actually failure, or was it a failure to generalize in the way that would be necessary for a drug company's goals? Etc.

Oh and Amgen? I want to see the remaining 50 attempts, including the positive replications.

Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature. 2012 Mar 28;483(7391):531-3. doi: 10.1038/483531a.

Thought of the Day
http://drugmonkey.scientopia.org/2016/02/04/thought-of-the-day-40/
Thu, 04 Feb 2016

We always seem to go through a run of manuscript rejections and then a run of accepts. I wish we could figure out how to spread these out a little bit more. Would alternation be too much trouble, world?

Appealing manuscript rejections
http://drugmonkey.scientopia.org/2016/02/04/appealing-manuscript-rejections/
Thu, 04 Feb 2016

This is going to be another one of those posts where we mix up what should be so, what is so and what is best for the individual scientist's career.

Our good blog friend iBAM was musing about reading a self-empowerment book.

The career prescription here...I agree with.

It took me quite a while to realize that you can appeal when a journal editor rejects your manuscript. When I first started in this business, I thought a reject was a reject. It certainly always read like one.

It turns out that you can appeal that decision.

And you should. The process does not end with the initial decision to reject your manuscript.

So what you do is, you email the Editor and you outline why the decision was a mistake, what you can do to revise the manuscript to address the major concerns, why your paper would be a fit for the journal, etc.

You may be ignored. You may be told "Nope, it is still a rejection".

OR. You may be given the opportunity to resubmit a revised version of the manuscript as if it were a new one. Which of course you could do anyway but it would be likely to get hammered down if you have not already solicited the invitation to do so from the Editor.

So you will see here that with a simple email, you have given yourself another chance to get the paper accepted. So do it.

After this we get into the fun part.

Should you appeal each and every decision? I think my natural stance is no; you don't want to get a rep as a chronic whiner. But you know what? There are probably people in science who rebut and complain about every decision. Does it work for them? I dunno. And by extension to sports, you know that phenomenon of working the ref about bad calls, hoping to get a makeup call later? That logic* maybe applies here.

I do know there are plenty of testimonials, like those of Kay Tye, that complaining about a rejection ended up with a published manuscript in the initially-rejecting journal.

Do appeals work the same for everyone? That is, given the same approximate merits of the case is the Editor going to respond similarly to appeals from anyone? I can't say. This is a human decision making business and I have always believed that when an editor knows you personally, they can't help but treat your submissions a little better. Similarly when an editor knows your work, your pedigree, your department, etc it probably helps your appeal gain traction.

But who knows? Maybe what helps is having a good argument on the merits. Maybe what helps is that the Editor liked the paper and slightly disagrees with the review outcome themselves.

HAHAHA, I crack myself up.

Okay so where do we end up?

First, you need to add the post-rejection appeal into your repertoire of strategies if you don't already include it. I would use it judiciously, personally, but this is something to ask your subfield colleagues about. People who publish in the journals you are targeting. Ask them how often they appeal.

Second, realize that the appeal game is going to add up over time in a person's career. If there are little personal biases on the part of Editors (which there are) then this factor is going to further amplify the insider club advantage. And when you sit and wonder why that series of papers that are no better than your own just happen to end up in that higher rank of journal well.....there may be reasons other than merit. What you choose to do with that information is up to you. I see a few options.

  • Pout.
  • Up your game so that you don't need to use those little bennies to get over the hurdle.
  • Schmooze journal editors as hard as you can.
  • Ignore the whole thing and keep doing your science the way you want and publishing in lower journals than your work deserves.

*not a fan of this myself but some people (including one of my kids' soccer coaches) appear to believe very strongly in this theory.

Look Somewhere Else for Romance
http://drugmonkey.scientopia.org/2016/02/03/look-somewhere-else-for-romance/
Thu, 04 Feb 2016

Sigh.

Once again, my friends, a professor of science has been found to be harassing women underlings.

Actually it was a bit more:

Lieb allegedly made unwelcome sexual advances to several female graduate students on an off-campus retreat in Galena, Ill., and engaged in sexual activity with a student who was “incapacitated due to alcohol and therefore could not consent,” according to documents acquired by the New York Times.

Yeah, that last part there pretty much makes Jason Lieb a rapist.

Then it turns out that the guy had left Princeton rather abruptly:

Yoav Gilad, a molecular biologist at Chicago who was on the committee that advocated hiring Dr. Lieb, said he and his fellow faculty members knew that in February 2014 Dr. Lieb had abruptly resigned from Princeton University, just seven months after having been recruited from the University of North Carolina to run a high-profile genomics institute.

Then it gets very foggy:

molecular biologists on the University of Chicago faculty and at other academic institutions received emails from an anonymous address stating that Dr. Lieb had faced allegations of sexual harassment or misconduct at previous jobs at Princeton and the University of North Carolina.


But Dr. Gilad said that when it was contacted, Princeton said there had been no sexual harassment investigation of Dr. Lieb while he was there. He said efforts to find out more about what prompted Dr. Lieb’s departure proved fruitless. A Princeton spokeswoman said the university does not comment on personnel matters.

hmmm. smells a little bit, doesn't it? But no PROOF. Because, no doubt, of the usual. Murky circumstances. Accusations that can't be proved easily. Wagon circling from the institution and lingering doubt that the accuser is telling the truth. We've seen it a million times.

Separately, Dr. Gilad acknowledged, during the interviews of Dr. Lieb, he admitted that he had had a monthslong affair with a graduate student in his laboratory at the University of North Carolina.

Ok, wait, whoa full stop. The dude had an affair with a graduate student IN HIS LAB!

Done. Right there. Most universities that have policies on this at least say you can't have an affair with a student over whom you have a direct supervisory role.

Hiring committees are not courts of law and applicants do not have a right to be hired. This committee at U of C should have taken a pass as soon as they learned Lieb was screwing his graduate student.

There are a number of problems that we academics need to confront about this.

First, the guy raped an incapacitated grad student at a dept retreat. This has to put some courage into departments to lay down some rules during their retreats. Like maybe, no faculty partying with grad students after official hours, when the other faculty aren't around. Open the retreat with a discussion of harassment, respect and professional behavior like they do at GRCs now. That sort of thing.

Second, what in the hell do we do about these unproven cases in which the guy (it's almost always a man) who keeps jumping institutions leaves a smell of harassment and bad behavior behind him that hasn't been proven or documented?

It's weird, right?

If you take a rec letter from a trusted colleague about a prospective student or postdoc that has the slightest hint of a problem, professional work wise, you take an automatic pass. You move on to the next candidate. Nobody talks about lawyers and proof and how you "have" to hire this particular postdoc or they will sue you for defamation. Yet when it comes to a faculty hire, the stench of misconduct is treated differently. "well, it hasn't been proven! there's no paper trail! Sure he left in a hurry and the old institution ain't talking but its a coincidence! and we can't listen to these rumors from eight of his previous trainees who all tell the same tale, hearsay! we'll be sued for defamation if we choose not to hire him!".

Something is very wrong here.

We're perfectly okay not hiring a candidate because we suspect they won't like our town and will be soon looking to leave. Ok with violating HR rules to sniff around about a two-body problem and refuse to offer a faculty job to such a problem candidate. Underrepresented minorities? Don't even get me started. Women of childbearing age or with a young child? yeah. Our hiring committees do all kinds of inferring and gossiping and not-offering on the basis of suspected factors. Thinly evidenced. Not proven. Actually illegal reasons in some cases.

But when someone is rumoured to be a harasser of women? Geez, we have to bend over backwards to extend him his alleged right to the job.

Something is very wrong here.

People of science? Please. Just. Look. Somewhere. Else. Would. You? Please? Find your romantic entanglements outside of the workplace. Really. It cannot possibly be this difficult.

Thought of the Day
http://drugmonkey.scientopia.org/2016/02/03/thought-of-the-day-39/
Wed, 03 Feb 2016

Dear Editor of Journal,

I find it interesting to review the manuscripts of ours that you have rejected on impact and quality grounds* over the past several years. We quite naturally found publication homes elsewhere for these manuscripts; this is how the system works. No harm, no foul. In none of these cases, I will note, was the manuscript radically changed in a way that would fundamentally alter the assessment of quality or impact reflected in your reviewers' comments. Yet I note that these papers have been cited in excess, sometimes far in excess, of your Journal's Impact Factor. Given what we know about the skew in the citation distributions that contribute to a JIF, well, this positions our papers quite favorably within the distribution of manuscripts you chose to accept.

This suggests to me there is something very wrong with your review process insofar as it attempts to evaluate quality and predict impact.


*journal fit is another matter entirely. I am not talking about those complaints.
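For the record, the skew point can be made with a toy sketch (every citation count below is invented): a mean-based JIF sits well above the journal's median paper, so a rejected manuscript that out-cites the JIF also out-cites the bulk of what was accepted.

```python
import statistics

# Invented citation counts for 100 papers in one journal's JIF window:
# most papers collect a handful of citations, a couple collect a lot.
citations = [1] * 40 + [3] * 30 + [8] * 20 + [40] * 8 + [150] * 2

jif_like = statistics.mean(citations)     # a JIF is a mean, dragged up by outliers
median_paper = statistics.median(citations)

# A rejected manuscript later cited 12 times beats that mean and
# out-cites most of what the journal actually accepted.
rejected_cites = 12
share_out_cited = sum(c < rejected_cites for c in citations) / len(citations)

print(f"JIF-like mean: {jif_like:.1f}")           # 9.1
print(f"Median accepted paper: {median_paper}")   # 3.0
print(f"Share of accepted papers out-cited: {share_out_cited:.0%}")  # 90%
```

Same arithmetic, real or fake numbers: a paper cited above the JIF is usually well into the upper tail of the accepting journal's own distribution.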

Managing your CV and why pre-print waccaloons should be ignored
http://drugmonkey.scientopia.org/2016/02/02/managing-your-cv-and-why-pre-print-waccaloons-should-be-ignored/
Tue, 02 Feb 2016

For whatever reasons I was thinking of at the time, I was motivated to twttr this:

What I mean by this is that somewhere on the pile of motivations you have to finish that manuscript off and get it submitted, you should have "keeping your calendar years populated".

It may not always be a big deal, and it certainly pales in comparison to the JIF, but all else equal you want to maintain a consistent rate of output of papers. Because eventually someone will look at that in either an approving or disapproving way, depending on how your publication record looks to them.

Like it or not, one way that people will consider your consistency over the long haul is by looking at how many papers you have published in each calendar year. Published. Meaning assigned to a print issue that is dated in a particular calendar year. You cannot go back and fill these in when you notice you have let a gap develop.

If you can avoid gaps*, do so. This means that you have to have a little bit of knowledge about the typical timeline from submission of your manuscript for the first time through until the hard publication date is determined. This will vary tremendously from journal to journal and from case to case because you don't know specifically how many times you are going to have to revise and resubmit.

But you should develop some rough notion of the timeline for your typical journals. Some have long pre-print queues. Some have short ones. Some move rapidly from acceptance to print issue. Some take 18 mo or more. Some journals have priority systems for their pre-print queue and some just go in strict chronological order.
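A crude way to do that arithmetic (the lag figures below are placeholders; substitute whatever you learn about your own target journals):

```python
from datetime import date, timedelta

def projected_print_year(submit: date, months_review: int, months_queue: int) -> int:
    """Rough estimate of the calendar year a paper reaches a print issue."""
    lag_days = (months_review + months_queue) * 30  # coarse month-to-day conversion
    return (submit + timedelta(days=lag_days)).year

# Submitting in March with ~4 months of review/revision and a 3-month
# pre-print queue keeps the paper in the same calendar year...
print(projected_print_year(date(2016, 3, 1), 4, 3))   # 2016
# ...but an 18-month queue pushes it into the next year's tally.
print(projected_print_year(date(2016, 3, 1), 4, 18))  # 2017
```

The point of the exercise is only to notice, before you pick a journal, whether the submission is likely to land in the calendar year you need it to.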

And in this context, you need to realize something very simple and clear. Published is published.

Yes, mmhmm, very nice. Pre-print archives are going to save us all. Well, this nonsense does nothing for the retrospective review of your CV for publication consistency. At present the culture of scientific career evaluation in the biomedical disciplines does not pay attention to pre-print archives. It doesn't really even respect the date of first appearance online in a pre-publication journal queue. If your work goes up in 2016 but never makes it to a print article until 2017, history will cite it as 2017.

*Obviously it happens sometimes. We can't always dictate the pace of everything in terms of results, funding, trainee life-cycles, personal circumstances and whatnot. I'm just saying you should try to keep as consistent as possible. Keep the gaps as short as possible and try to look like you are compensating. An unusually high number of pubs following a gap year goes a long way, for example.

CSR Head Nakamura Makes Bizarre Pronouncement
http://drugmonkey.scientopia.org/2016/02/02/csr-head-nakamura-makes-bizarre-pronouncement/
Tue, 02 Feb 2016

An email from the CSR of the NIH hit a few days ago, pointing to a number of their Peer Review Notes, including one on the budget bump that we are about to enjoy.
Actually that should be "some selected few of us will enjoy" because

“While $2 billion is a big increase, it is less than a 10 percent increase, and a large portion of it is set aside for specific areas and initiatives,” said Dr. Nakamura. “Competition for funding is still going to be intense, and paylines will not return to historic averages . . .

Yeah, as suspected, that money is already accounted for.

The part that has me fired up is the continuation after that ellipsis, and a subsequent header item.

So make sure you put your best effort into your application before you apply.”

Counterproductive Efforts
“We know some research deans have quotas and force their PIs to submit applications regularly,” said Dr. Nakamura. “It’s important for them to know that university submission rates are not correlated with grant funding. Therefore, PIs should be encouraged to develop and submit applications as their research and ideas justify the effort to write them and have other scientists review them.”

As usual, I do not know if this is coming from ignorance or from a calculated strategy to make their numbers look better. I fear both possibilities. I'm going from memory here because I can't seem to rapidly find the related blog post or data analysis, but I think I recall an illustration that University-total grant submission rates did not predict University-total success rates.

At a very basic level, Nakamura is using the lie of the truncated distribution. If you don't submit any grant applications, your success rate is going to be zero. I'm sure he's excluding those, because including them would seemingly make a nice correlation.

But more importantly, he is trying to use university-wide measures to convince the individual PI what is best for her to do.

Wrong. Wrong. Wrong.

Not everyone's chances at that institution are the same. The more established investigators will probably, on average, enjoy a higher success rate. They can therefore submit fewer applications. Lesser folk enjoy lower success rates so therefore they have to keep pounding out the apps to get their grants.

By extension, it takes very little imagination to understand that depending on your ratio of big important established scientists to noobs, and based somewhat on subfields, the apparent University-wide numbers are going to swamp out the information that is needed for each individual PI.
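A toy calculation (every probability and submission rate below is invented, not NIH data) shows how composition can swamp the individual signal: each PI improves her own odds by submitting more, yet a university with a larger share of junior PIs can show a higher submission rate and a lower success rate at the same time.

```python
# Invented per-application success probabilities and submission rates
# for two tiers of PI.
ESTABLISHED_P, ESTABLISHED_APPS = 0.30, 2
NEWCOMER_P, NEWCOMER_APPS = 0.12, 5

def at_least_one_award(p: float, n: int) -> float:
    """Chance a PI lands at least one award from n independent apps."""
    return 1 - (1 - p) ** n

# For an individual newcomer, more submissions always helps:
for n in (1, 3, 5):
    print(n, round(at_least_one_award(NEWCOMER_P, n), 2))  # 0.12, 0.32, 0.47

def university(frac_new: float) -> tuple[float, float]:
    """Apps per PI and aggregate success rate for a given faculty mix."""
    apps = frac_new * NEWCOMER_APPS + (1 - frac_new) * ESTABLISHED_APPS
    awards = (frac_new * NEWCOMER_APPS * NEWCOMER_P
              + (1 - frac_new) * ESTABLISHED_APPS * ESTABLISHED_P)
    return apps, awards / apps

# Awards per PI come out identical (0.6) in every mix here, but the
# university-level success rate falls as the newcomer share rises:
for frac in (0.2, 0.5, 0.8):
    apps, rate = university(frac)
    print(f"frac_new={frac}: {apps:.1f} apps/PI, success {rate:.0%}")
```

This is just the ecological-fallacy arithmetic: the university-level correlation between submission rate and success rate tells an individual PI essentially nothing about whether she should submit more.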

In short, this is just another version of the advice to young faculty to "write better grants, just like the greybeards do".

The trick is, the greybeards DO NOT WRITE BETTER GRANTS! I mean sure, yes, there is a small experience factor there. But the major driver is not the objective quality but rather the established track record of the big-deal scientist. This gives them little benefits of the doubt all over the place as we have discussed on this blog endlessly.

I have yet to hear from a newcomer to NIH grant review who has not had the experience, within 1-2 rounds, of a reviewer ending his/her review of a clearly lower-quality grant proposal with "....but it's Dr. BigShot and we know she does great work and can pull this off." Or similar.

I have been on a study section round or two in my day and I am here to tell you. My experience is not at all consistent with the idea that the "best" grants win out. Merit scores are not a perfect function of objective grant quality at all. Imperfectly prepared or boring grants get funded all the time. Really exciting and nearly-perfect grants get unfundable scores or triaged. Frequently.

This is because grant review hinges on the excitement of the assigned reviewers for the essence of the project. All else is detail.

You cannot beat this system by writing a "perfect" grant. Because it may not be perfect for all three reviewers no matter how well it has been prepared and how well vetted by whatever colleagues you have rounded up to advise you.

Nakamura should know this. He probably does. Which makes his "advice" a cynical ploy to decrease submissions so that his success rate will look better.

One caveat: I could simply be out of touch with all of these alleged Dean-motivated crap apps. It is true that I have occasionally seen people throw up grant applications that really aren't very credible from my perspective. They are very rare. And it has occasionally been the case that at least one other reviewer liked something about an application I thought was embarrassingly crappy. So go figure.

I also understand that there are indeed Deans or Chairs that encourage high submission rates and maybe this leads to PIs writing garbage now and again. But this does not account for the dismal success rates we are enjoying. I bet that magically disappearing all apps that a PI submitted to meet institutional vigor requirements (but didn't really mean to make a serious play for an award) would have no perceptible effect on success rates for the rest of us. I just haven't ever seen enough non-credible apps for this to make a difference. Perhaps you have another experience on study section, DearReaders?

Finally, I really hate this blame-the-victim attitude on the part of the CSR and indeed many POs. There are readily apparent and demonstrable problems with how some categories of PIs' grants are reviewed. Newer and less experienced applicants. African-American PIs. Women. Perhaps, although this is less well-explicated lately, those from the wrong Universities.

For the NIH to avoid fixing their own problems with review (for example the vicious cycle of study sections punishing ESI apps with ever-worsening scores when the NIH used special paylines to boost success rates) and then blame victims of these problems by suggesting they must be writing bad grants takes chutzpah. But it is wrong. And demoralizing to so many who are taking it on the chin in the grant review game.

And it makes the problems worse. How so? Well, as you know, Dear Reader, I am firmly convinced that the only way to succeed in the long term is to keep rolling the reviewer dice, hoping to get three individuals who really get what you are proposing. And to take advantage of the various little features of the system that respond to frequent submissions (reviewer sympathy, PO interest, extra end-of-year money, ARRA, sudden IC initiatives/directions, etc). Always, always you have to send in credible proposals. But perfect vs really good guarantees you nothing. And when perfect keeps you from submitting another really good grant? You are not helping your chances. So when Nakamura tells people to sacrifice the really good for the perfect, he is worsening their chances. Particularly when the people are in those groups who are already at a disadvantage and need to work even harder* to make up for it.

*Remember, Ginther showed that African-American PIs had to submit more revisions to get funded.

Your Grant in Review: Competing Continuation, aka Renewal, Apps
http://drugmonkey.scientopia.org/2016/01/28/your-grant-in-review-competing-continuation-aka-renewal-apps/
Thu, 28 Jan 2016

In the NIH extramural grant funding world, the maximum duration for a project is 5 years. It is possible, at the end of a 5-year interval of support, to apply to continue that project for another interval. The application for the next interval is, in general, competitively reviewed alongside new project proposals in the relevant study sections.

Comradde PhysioProffe addressed the continuation application at his Ftb joint. NIAID has a FAQ page.

The NIH Success Rate data shows that RPG success rates were 16.8% in 2013 and 18.1% in 2014. Comparable rates for competing continuation RPG applications were 35% in 2013 and 39% in 2014. So you can see why this is important.

I visited these themes before in a prior post. I think I covered most of the issues but in a slightly different way.

Today I want to try to get you folks to talk about prescriptives. How should a competing continuation / renewal NIH grant application be reviewed?

Now in my experience, the continuation application hinges on past-productivity in a way that a new application does not. Reviewers are explicitly considering the work that has been conducted under the support of the prior award. The application is supposed to include a list of publications that have resulted from the prior award. The application is supposed to detail a Progress Report that overviews what has been accomplished. So today I will be focusing on review mostly as it pertains to productivity. For reference, Berg's old post on the number of papers per grant dollar is here and shows an average output of 6 papers (IQR about 4-11) per $250K full modular award*.

Quoted bits are from my prior post.

Did you knock our socks off? This could be amazing ELEVENTY type findings, GlamourPub record (whether “expected” for your lab or not), unbelievably revolutionary advances, etc. If you have a record of this, nobody is going to think twice about what your Aims may have been. Probably won’t even give a hoot whether your work is a close match to the funding IC, for that matter.

We should probably separate these for discussion because after all, how often is a panel going to recognize a Nobel Prize type of publication has been supported by the award in the past 5 years? So maybe we should consider Glamour publications and amazing advances as two different scenarios. Are these going to push any renewal application over the hurdle for you even if the remaining items below are lacking? Does GlamMag substitute for direct attention to the experiments that were proposed or the Aims that guided the plan? In the extreme case, should we care if the work bears very little on the mission of the IC that has funded it?

Were you productive? Even if you didn’t WOW the world, if you’ve pumped out a respectable number of papers that have some discernible impact on a scientific field, you are in good shape. The more, the merrier. If you look “fabulously productive” and have contributed all kinds of interesting new science on the strength of your award(s), this is going to go down like gangbusters with the review panels. At this level of accomplishment you’d probably be safest at least be doing stuff that is vaguely in line with the IC that has funded your work.

Assuming that Glam may not be in the control of most PIs but that pedestrian, workaday scientific output is, should this be a major credit for the continuation application? We don't necessarily have to turn this into a LPU sausage-slicing discussion. Let's assume a quality of paper commensurate with the kind of work that most PIs with competitive applications in that particular study section publish. Meets the subfield standard. How important should raw productivity be?

Were you productive in addressing your overall goals? This is an important distinction from the Specific Aims. It is not necessary, in my view, that you hew closely to Aims first dreamed up 7 years prior to the conclusion of the actual study. But if you have moderate, or disappointing, productivity it is probably next most-helpful that you have published work related to the overall theme of the project. What was the big idea? What was mentioned in the first three sentences of your Specific Aims page? If you have published work related to this broad picture, that’s good.

This one is tricky. The reviewers do not have the prior grant application in front of them. They have the prior Summary Statement and the Abstract as published on RePORTER. It is a decent bet the prior Aims can be determined, but broader themes may or may not come across. So for the most part, if the applicant expects the reviewers to see that productivity has aligned with overarching programmatic goals, she has to tell them what those were, presumably in the Progress Report part of the continuation application. How would you approach this as a reviewer? If the project wasn't overwhelmingly productive and didn't obviously address all of the Aims, but at least generated some solid work along the general themes, are you going to be satisfied? Or are you going to downgrade the failure to address each Aim? What if the project had to can an entire Aim or two? Would it matter? Is getting "stuck" in a single Aim a death knell when it comes time to review the next interval of support? As a related question, what if the same exact Aim has returned with the argument "We didn't get to this in the past five years but it is still a good idea"? Neutral? Negative? AYFK?

Did you address your original Specific Aims? ...this can be a big obsession of certain reviewers. Not saying it isn’t a good idea to have papers that you can connect clearly to your prior Aims. ... A grant is not a contract. It is quite natural in the course of actual science that you will change your approaches and priorities for experiments. Maybe you’ve been beaten to the punch. Maybe your ongoing studies tell you that your original predictions were bad and you need to go in a whole new direction. Maybe the field as a whole has moved on. ... You might want to squeeze a drop out of a dry well to meet the “addressed Aims” criterion but maybe that money, effort and time would be better spent on a new direction which will lead to three pubs instead of one?

My original formulation of this isn't quite right for today's discussion. The last part is actually more relevant to the preceding point. For today, expand this to a continuation application that shows that the prior work essentially covers exactly what the application proposed. With data either published or included as ready-to-submit Preliminary Data in the renewal. Maybe this was accomplished with only a few papers in pedestrian journals (Lord knows just about every one of my manuscript reviews these days gets at least one critique that calls for anywhere from 2 to 5 Specific Aims' worth of data) so we're not talking about Glam or fabulous productivity. But should addressing all of the Aims and most if not all of the proposed experiments be enough? Is this a credit to a competing continuation application?

It will be unsurprising to you that by this point of my career, I've had competing continuation applications to which just about all of these scenarios apply, save Glam. We've had projects where we absolutely nailed everything we proposed to do. We've had projects get distracted/sidelined off onto a subsection of the proposal that nevertheless generated about the same number and quality of publications that would have otherwise resulted. We've had low productivity intervals of support that addressed all the Aims and ones that merely covered a subset of key themes. We've had projects with reasonably high productivity that have....wandered....from the specifics of the awarded proposal due to things that are happening in the subfield (including getting scooped). We've never been completely blanked on a project with zero related publications to my recollection, but we've had some very low productivity ones (albeit with excellent excuses).

I doubt we've ever had a perfect storm of sky-high productivity, all Aims addressed and the overarching themes satisfied. Certainly I have the review comments to suggest this**.

I have also been present during review panel discussions of continuation applications where reviewers have argued bitterly over the various productivity attributes of a prior interval of support. The "hugely productive" arguments are frequently over an application from a PI who has more than one award and tends to acknowledge more than one of them on each paper. This can also involve debates about so called "real scientific progress" versus papers published. This can be the Aims, the overall theme or just about the sneer of "they don't really do any interesting science".

I have for sure heard from people who are obsessed during review with whether each proposed experiment has been conducted (this was back in the days when summary statements could be fairly exhaustive and revealed what was in the prior application to a broader extent). More generally from reviewers who want to match publications up to the scope of the general scientific terrain described by the prior application.

I've also seen arguments about suggested controls or key additional experiments which were mentioned in the summary statement of the prior review, never addressed in the resulting publications and may still be a criticism of the renewal application.

Final question: Since the reviewers of the competing continuation see the prior summary statement, they see the score and percentile. Does this affect you as a reviewer? Should it? Especially if in your view this particular application should never have been funded at that score and is likely a Programmatic pickup? Do you start steaming under the collar about special ESI paylines or bluehair/graybeard insider PO backslapping?

DISCLAIMER: As per usual, I may have competing continuation applications under current or near-future review by NIH study sections. I am an interested party in how they are reviewed.
*This probably speaks to my point about how multi-award PIs acknowledge more than one grant on each paper. My experience has not been that people in my field view 5 papers published per interval of support (and remember the renewal application is submitted with the final year of funded support yet to go, if the project is to continue uninterrupted) as the expected value. It is certainly not viewed as the kind of fabulous productivity that of course would justify continuing the project. It is more in line with the bare minimum***. Berg's data are per-grant-dollar of course and are not exactly the same as per-grant, but it is a close estimate. This blog post estimates "between 0.6 and 5 published papers per $100k in funding," which is 1 to 12 per year of a full-modular NIH R01. That is a big range, and the high number seems nigh on impossible to me without other funding (like free trainee labor or data parasitism).

**and also a pronounced lack of success renewing projects to go with it.

***I do not personally agree. At the point of submitting a competing continuation in year 4 a brand new research program (whether b/c noob PI or very new lab direction) may have really only been rocking for 2 years. And large integrated projects like a big human subjects effort may not even have enrolled all the subjects yet. Breeding, longitudinal development studies, etc - there are many models that can all take a long time to get to the point of publishing data. These considerations play....let us say variably, with reviewers. IME.

Your Grant in Review: Skin in the Game http://drugmonkey.scientopia.org/2016/01/25/your-grant-in-review-skin-in-the-game/ http://drugmonkey.scientopia.org/2016/01/25/your-grant-in-review-skin-in-the-game/#comments Mon, 25 Jan 2016 16:36:31 +0000 http://drugmonkey.scientopia.org/?p=8603 Should people without skin in the game be allowed to review major research grants?

I mean those who are insulated from the results of the process. HHMI stalwarts, NIH intramural, national labs, company scientists...

On one hand, I see the argument that they provide needed outside opinions. To keep an insular, self-congratulating process honest.

On the other, one might observe that those who cannot be punished for bad behavior have license to be biased, jerky and driven by personal agenda.


Would you prefer review by those who are subject to the funding system? Or doesn't it matter?
