Shorthand

(by drugmonkey) Apr 22 2016

Storyboard

Pretty data

N-up

Prove the hypothesis

Representative image

Trend for significance

Different subcultures of science may use certain phrases that send people in other traditions into paroxysms of critique.

Mostly it is because such phrasing can sound like bad science. As if the person using it doesn't understand how dangerous and horrible their thinking is. 

We've gone a few rounds over storyboarding and representative images in the past. 

Today's topic is "n-up", which is deployed, I surmise, after examining a few results, replicates or subjects that look promising for what the lab would prefer to be so. It raises my hackles. It smells to me like a recipe for confirmation bias and false alarming. To me.

Apparently this is normal phrasing for other people and merely indicates the pilot study is complete? 

How do you use the phrase?


Genius, musician, showman and one hell of an axeman. RIP.

(by drugmonkey) Apr 21 2016

He'll be missed.


Abortion is more humane than child neglect

(by drugmonkey) Apr 20 2016

jmz4 asks:

DM, what's your reasoning behind advocating for reducing grad student numbers instead of just bottlenecking at the PD phase? I'd argue that grad students currently get a pretty good deal (free degree and reasonable stipend), and so are less exploited. Also, scientific training is useful in many other endeavors, and so the net benefit to society is to continue training grad students.

My short answer is that it is more humane.
Continue Reading »


NIA paylines and anti-ESI bias of review

(by drugmonkey) Apr 20 2016

MillerLab noted on the twitters that the NIA has released its new paylines for FY2016. If your grant proposal scores within the 9%ile zone, congrats! Unless you happen to be an Early Stage Investigator, in which case you only have to score within the top 19% of applications, woot!

I was just discussing the continuing nature of the ESI bias in a comment exchange with Ferric Fang on another thread. He thinks

The problem that new investigators have in obtaining funding is not necessarily a result of bias but rather that it is more challenging for new investigators to write applications that are competitive with those of established investigators because as newcomers, they have less data and fewer accomplishments to cite.

and I disagree, viewing this as assuredly a bias in review. The push to equalize success rates of ESI applicants with those of established investigators (generational screw-job that it is) started back in 2007 with prior NIH Director Elias Zerhouni. The mechanism to accomplish this goal was, and continues to be, naked quota-based affirmative action. NIH will fund ESI applications out of the order of review until they reach approximately the same success percentages as those enjoyed by established investigator applications. Some ICs are able to game this out predictively by using different paylines, i.e., the percentile ranks within which almost all grants will be funded.

As mentioned, NIA has to use a 19%ile cutoff for ESI applications to equal a 9%ile cutoff for established investigator applications. This got me thinking about the origin of the ESI policies in 2007 and the ensuing trends. Luckily, the NIA publishes its funding policy on the website here. The formal ESI policy at NIA apparently didn't kick in until 2009, from what I can tell. What I am graphing here are the paylines used by NIA by fiscal year to select Exp(erienced), ESI and New Investigator (NI) applications for funding.

It's pretty obvious that the review bias against ESI applications continues essentially unabated*. All the talk about "eating our seed corn", the hand-wringing about a lost generation, the clear signal that NIH wanted to fund the noobs at rates equivalent to the older folks....all fell on deaf ears as far as the reviewers are concerned. The quotas for the ESI affirmative action are still needed to accomplish the goal of equalizing success rates.

I find this interesting.

__
*Zerhouni noted right away [PDF] that study sections were fighting back against the affirmative action policy for ESI applications.

Told about the quotas, study sections began “punishing the young investigators with bad scores,” says Zerhouni.

Note: It is probably only a coincidence that CSR reduced the number of first time reviewers in FY2014, FY2015 relative to the three prior FYs.


On removing grant deadlines

(by drugmonkey) Apr 20 2016

Eric Hand reported in Science that one NSF pilot program found that allowing for any-time submission reduced applications numbers.

Assistant Director for Geosciences Roger Wakimoto revealed the preliminary results from a pilot program that got rid of grant proposal deadlines in favor of an anytime submission. The numbers were staggering. Across four grant programs, proposals dropped by 59% after deadlines were eliminated.

I have been bombarded with links to this article/finding and queries as to what I think.

Pretty much nothing.

I do know that NIH has been increasingly liberal with allowing past-deadline submissions from PIs who have served on study section. So there is probably a data source to draw upon inside CSR if they care to examine it.

I do not know if this would do anything similar if applied to the NIH.

The NSF pilot was for

geobiology and low-temperature geochemistry, geomorphology and land-use dynamics, hydrological sciences, and sedimentary geology and paleobiology.

According to the article these are fields in which

"many scientists do field work, having no deadline makes it easier for collaborators to schedule time when they can work on a proposal".

This field work bit is not generally true of the NIH extramural community. I think it obvious that continual submission helps with scheduling time, but I would note that it also eliminates a stick for the more proactive members of a collaboration to beat the laggards into line. As a guy who hits his deadlines for grant submission, it's probably in my interest to further lower the encouragements the lower-energy folks require.

According to a geologist familiar with reviewing these grants

The switch is “going to filter for the most highly motivated people, and the ideas for which you feel the most passion,” he predicts. When he sits on merit review panels, he finds that he can usually reject half of the proposals right away as being hasty or ill-considered. “My hope is that this has taken off the bottom 50%,” he says. “Those are the ones you read and say, ‘Did they have their heart in this?’”

Personally I see very few NIH grant proposals that appear to me to be "hasty or ill-considered" or cause me to doubt the PI has her heart in it. And you know how I feel about the proposition that the RealProblem with NIH grant success hinges on whether or not PIs refine and hone and polish their applications into some shining gem of a document. Applications are down, so success rates go up; that is the only thing we need to take away from this pilot, if you ask me. Any method by which you could decrease NIH applications would likewise seem to improve success rates.

Would it work for NIH types? I tend to doubt it. That program at NSF started with only two submission rounds per year. NIH has three rounds for funding per year, but this results from a multitude of deadlines including new R01, new R21/R03, two more for the revised apps, special ones for AIDS-related, RFAs and assorted other mechanisms. As I mentioned above, if you review for the NIH (including Advisory Council service) you get an extra extension to submit for a given decision round.

The pressure for most of us to hit any specific NIH deadline during the year is, I would argue, much lower at baseline. So if the theory is that NSF types were pressured to submit junky applications because their next opportunity was so far away....this doesn't apply to NIH folks.


NIH Grant Lottery

(by drugmonkey) Apr 18 2016

Fang and Casadevall have a new piece up that advocates turning NIH grant selection into a modified lottery. There is a lot of the usual dissection of the flaws of the NIH grant funding system here, dressed up as "the Case for", but they don't actually make any specific or compelling argument beyond "it's broken, here's our RealSolution".

we instead suggest a two-stage system in which (i) meritorious applications are identified by peer review and (ii) funding decisions are made on the basis of a computer-generated lottery. The size of the meritorious pool could be adjusted according to the payline. For example, if the payline is 10%, then the size of the meritorious pool might be expected to include the top 20 to 30% of applications identified by peer review.
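The arithmetic in that quote is simple enough to sketch as a simulation. This is a minimal illustration, not anything from the paper itself: the function name `two_stage_selection` and the random scores are my own assumptions; only the payline/pool-size relationship comes from the quoted text.

```python
import random

def two_stage_selection(scores, payline=0.10, pool_multiplier=2.5, seed=None):
    """Sketch of the quoted proposal: (i) peer review ranks applications
    (lower score = better, NIH-style); (ii) awards equal to the payline
    are drawn by lottery from a larger 'meritorious' pool."""
    rng = random.Random(seed)
    n = len(scores)
    n_awards = int(n * payline)
    pool_size = int(n * payline * pool_multiplier)  # e.g. top 25% for a 10% payline
    ranked = sorted(range(n), key=lambda i: scores[i])
    meritorious = ranked[:pool_size]                # stage (i): peer review cut
    return set(rng.sample(meritorious, n_awards))   # stage (ii): lottery

# 100 hypothetical applications with random priority scores
rng = random.Random(42)
scores = [rng.random() for _ in range(100)]
funded = two_stage_selection(scores, payline=0.10, pool_multiplier=2.5, seed=1)
print(len(funded))  # 10 awards, all drawn from the top-25 pool
```

Note the design point that matters for the arguments below: everything still hinges on where the peer-review cutoff for the pool falls.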

They envision eliminating the face to face discussion to arrive at the qualified pool of applications:

Critiques would be issued only for grants that are considered nonmeritorious, eliminating the need for face-to-face study section meetings to argue over rankings,

Whoa, back up. Under current NIH review, critiques are not a result of the face-to-face meeting. This is not the "need" for meeting to discuss the applications. They are misguided in a very severe and fundamental way about this. Discussion serves, ideally, to calibrate individual review, to catch errors, to harmonize disparate opinions, to refine the scoring....but in the majority of cases the written critiques are not changed a whole lot by the process, and the resume of the discussion is a minor outcome.

Still, this is a minor point of my concern with their argument.

Let us turn to the juxtaposition of

New investigators could compete in a separate lottery with a higher payline to ensure that a specific portion of funding is dedicated to this group or could be given increased representation in the regular lottery to improve their chances of funding.

with

we emphasize that the primary advantage of a modified lottery would be to make the system fairer by eliminating sources of bias. The proposed system should improve research workforce diversity, as any female or underrepresented minority applicant who submits a meritorious application will have an equal chance of being awarded funding.

Huh? If this lottery is going to magically eliminate bias against female or URM applicants, why is it going to fail to eliminate bias against new investigators? I smell a disingenuous appeal to fairness for the traditionally disadvantaged as a cynical ploy to get people on board with their lottery plan. The comment about new investigators shows that they know full well it will not actually address review bias.

Their plan uses a cutoff. 20%, 30%...something. No matter where that cutoff line is, reviewers will know something about where it lies. And they will review/score grants accordingly. Just as Zerhouni noted, when news of special ESI paylines got around, study sections immediately started giving ESI applications even worse scores. If there is a bias today that pushes new investigator, woman or URM PIs' applications outside of the funding range, there will be a bias tomorrow that keeps them disproportionately outside of the Fang/Casadevall lottery pool.

There is a part of their plan that I am really unclear on and it is critical to the intended outcome.

Applications that are not chosen would become eligible for the next drawing in 4 months, but individual researchers would be permitted to enter only one application per drawing, which would reduce the need to revise currently meritorious applications that are not funded and free scientists to do more research instead of rewriting grant applications.

This sounds suspiciously similar to a plan that I advanced some time ago. This post from 2008 was mostly responding to the revision-queuing behavior of study sections.

So this brings me back to my usual proposal of which I am increasingly fond. The ICs should set a "desired" funding target consistent with their historical performance, say 24% of applications, for each Council round. When they do not have enough budget to cover this many applications in a given round, they should roll the applications that missed the cut into the next round. Then starting the next Council round they should apportion some fraction of their grant pickups to the applications from the prior rounds that were sufficiently meritorious from a historical perspective. Perhaps half roll-over and half from the current round of submissions. That way, there would still be some room for really outstanding -01 apps to shoulder their way into funding.

The great part is that essentially nothing would change. The A2 app that is funded is not going to result in scientific conduct that differs in any substantial way from the science that would have resulted from the A1/15%ile app being funded. New apps will not be any more disadvantaged by sharing the funding pie with prior rounds than they currently are facing revision-status-bias at the point of study section review.
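The roll-over scheme quoted above can be sketched in a few lines. This is a toy under stated assumptions: the helper name `council_round`, the budget, and the percentile numbers are all illustrative; only the ~24% desired cut and the roughly half-queued/half-current split come from the proposal.

```python
from collections import deque

def council_round(percentiles, queue, budget, target=0.24):
    """Sketch of the roll-over idea: each Council round, apps within the
    ~24% 'desired' cut that miss funding roll into a queue; later rounds
    split pickups roughly half queued / half current, leaving room for
    outstanding new apps. All numbers are illustrative."""
    n_desired = int(len(percentiles) * target)
    current = sorted(percentiles)[:n_desired]  # best percentile ranks first
    n_queued = min(budget // 2, len(queue))    # up to half from prior rounds
    funded = [queue.popleft() for _ in range(n_queued)]
    funded += current[:budget - n_queued]      # rest from the current round
    queue.extend(current[budget - n_queued:])  # meritorious-but-unfunded roll over
    return funded

queue = deque()
round1 = council_round([3, 8, 12, 15, 20, 35, 40, 50, 55, 65, 80, 90, 95],
                       queue, budget=2)  # funds 3%ile and 8%ile; 12%ile rolls over
round2 = council_round([5, 9, 14, 22, 30, 38, 45, 52, 60, 68, 75, 85, 92],
                       queue, budget=2)  # funds the rolled-over 12%ile plus the new 5%ile
```

The half-and-half split is the knob that lets a really outstanding new app still shoulder its way into funding, per the original proposal.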

What I am unclear on in the Fang/Casadevall proposal is the limit to one application "per drawing". Is this per council round per IC? Per study section per Council round per IC? NIH-wide? Would the PI be able to stack up potentially-meritorious apps that go unfunded so that they get considered in series across many successive rounds of lotteries?

These questions address their underlying assumption that a lottery is "fair". It boils down to the question of whether everyone is equally able to buy the same number of lottery tickets.

The authors also have to let in quite reasonable exceptions:

Furthermore, we note that program officers could still use selective pay mechanisms to fund individuals who consistently make the lottery but fail to receive funding or in the unlikely instance that important fields become underfunded due to the vagaries of luck.

So how is this any different from what we have now? Program Officers are already trusted to right the wrongs of the tyranny of peer review. Arguing for this lottery system implies that you think that PO flexibility on exception funding is either insufficient or part of the problem. So why let it back into the scheme?

Next, the authors stumble with a naked assertion

The proposed system would treat new and competing renewal applications in the same manner. Historically, competing applications have enjoyed higher success rates than new applications, for reasons including that these applications are from established investigators with a track record of productivity. However, we find no compelling reason to justify supporting established programs over new programs.

that is highly personal. I find many compelling reasons to justify supporting established programs. And many compelling reasons not to do so preferentially. And many compelling reasons to demand a higher standard, or to ban them entirely. I suspect many of the participants in the NIH system also favor one or the other of the different viewpoints on this issue. What I find to be unconvincing is nakedly asserting this "we find no compelling reason" as if there is not any reasonable discussion space on the issue. There most assuredly is.

Finally, the authors appeal to a historical example which is laughably bad for their argument:

we note that lotteries are already used by society to make difficult decisions. Historically, a lottery was used in the draft for service in the armed forces...If lotteries could be used to select those who served in Vietnam, they can certainly be used to choose proposals for funding.

As anyone who pays even the slightest attention realizes, the Vietnam era selective service lottery in the US was hugely biased and subverted by the better-off and more-powerful to keep their offspring safe. A higher burden was borne by the children of the lower classes, the unconnected and, as if we need to say it, ethnic minorities. Referring to this example may not be the best argument for your case, guys.


Ridiculous shit I actually say

(by drugmonkey) Apr 15 2016

HAHAHHHA. I am so full of myself today.
I actually said this:

It's like cult rescue though. You don't try to rehab the head, you try to get the innocents out b4 the FlavorAde is poured

(Yes, it was a discussion of Glamour culture of science. As if you couldn't guess.)


Representative Images

(by drugmonkey) Apr 15 2016

New rule: Claims of a "representative" image should have to be supported by submission of 2 better ones that were not included.

It works like this.

Line up your 9 images that were quantified for the real analysis of the outcome, in the order by which they appear to follow your desired interpretation of the mean effect.

Your "representative" image is #5. So you should have to prove your claim to have presented a representative image in peer review by providing #8 and #9.

My prediction is that the population of published image data would get a lot uglier, less "clear" and would more accurately reflect reality.


Is the fact you reviewed this manuscript before confidential?

(by drugmonkey) Apr 15 2016

Interesting comment from AnonNeuro:

Reviews are confidential, so I don't think you can share that information. Saying "I'll review it again" is the same as saying "I have insider knowledge that this paper was rejected elsewhere". Better to decline the review due to conflict.

I don't think I've ever followed this as a rule. I have definitely told editors in the past when the manuscript has not been revised from a previously critiqued version (I don't say which journal had rejected the authors' work). But I can't say that I invariably mention it either. If the manuscript had been revised somewhat, why bother? If I like it and want to see it published, mentioning I've seen a prior version elsewhere seems counterproductive.

This comment had me pondering my lack of a clear policy.

Maybe we should tell the editor upon accepting the review assignment so that they can decide if they still want our input?


Revise After Rejection

(by drugmonkey) Apr 14 2016

This mantra, provided by all good science supervisor types including my mentors, cannot be repeated too often.

There are some caveats, of course. Sometimes, for example, the reviewer wants you to temper your justifiable interpretive claims or Discussion points that interest you.

It's the sort of thing you only need to do as a response to review when it has a chance of acceptance.

Outrageous claims that are going to be bait for any reviewer? Sure, back those down.

