Archive for the 'Staring in Disbelief' category

Agreement among NIH grant reviewers

Pier and colleagues recently published a study purporting to address the reliability of the NIH peer review process. From the summary:

We replicated the NIH peer-review process to examine the qualitative and quantitative judgments of different reviewers examining the same grant application. We found no agreement among reviewers in evaluating the same application. These findings highlight the subjectivity in reviewers’ evaluations of grant applications and underscore the difficulty in comparing the evaluations of different applications from different reviewers—which is how peer review actually unfolds.

Emphasis added.

This thing is a crock and yet it has been bandied about on the Twitts as if it is the most awesome thing ever. "Aha!" cry the disgruntled applicants, "This proves that NIH peer review is horrible, terrible, no good, very bad and needs to be torn down entirely. Oh, and it also proves that it is a super criminal crime that some of my applications have gone unfunded, wah."

A smaller set of voices expressed puzzlement. "Weird," we say, "because our strongest impression from serving on panels is that there is substantial agreement in review, when you consider the process as a whole."

So, why is the study irretrievably flawed? In broad strokes it is quite simple.
Restriction of the range. Take a look at the first figure. Does it show any correlation of scores? Any fair eye would say no. Aha! Whatever is being represented on the x-axis about these points does not predict anything about what is being represented on the y-axis.

This is the mistake being made by Pier and colleagues. They constructed four peer-review panels and had them review the same population of 25 grants. The trick is that, of these, 16 were already funded by the NCI and the remaining 9 were prior, unfunded versions of grants that were subsequently funded by the NCI.

In short, the study selects proposals from a very limited range of the applications being reviewed by the NIH. This figure shows the rest of the data from the above example. When you look at it like this, any fair eye concludes that whatever is being represented by the x value about these points predicts something about the y value. Anyone with the barest understanding of distributions and correlations gets this, and grasps that a distribution does not have to show perfect correspondence for there to be a predictive relationship between two variables.

So. The authors' claims are bogus. Ridiculously so. They did not "replicate" the peer review process because they did not include a full range of scores/outcomes; instead they picked the narrowest slice of the funded awards. I don't have time to dig up historical data, but the current funding plan for NCI calls for a 10%ile payline. You can amuse yourself with the NIH success rate data here; the very first spreadsheet I clicked on gave a success rate of 12.5% for NCI R01s.
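For anyone who wants to see the statistics in action, here is a minimal simulation sketch of restriction of range. Everything in it is my own illustrative assumption (the additive score model, the noise level, selecting the top slice on one panel's own scores, the ~12% cut), not anything taken from Pier and colleagues:

```python
# Restriction of range, in miniature. Hypothetical model: each panel's
# score = true grant quality + independent reviewer noise.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                               # simulated applications
quality = rng.normal(0, 1, n)            # latent "true merit"
panel_a = quality + rng.normal(0, 1, n)  # two independent panels, each
panel_b = quality + rng.normal(0, 1, n)  # seeing merit through noise

# Agreement across the FULL range of applications
r_full = np.corrcoef(panel_a, panel_b)[0, 1]

# Keep only the top ~12% by panel A's score (roughly the slice implied
# by a 12.5% success rate) and recompute agreement within that slice
cutoff = np.quantile(panel_a, 0.88)
top = panel_a >= cutoff
r_top = np.corrcoef(panel_a[top], panel_b[top])[0, 1]

print(f"full range:     r = {r_full:.2f}")  # approx 0.5 in this model
print(f"top slice only: r = {r_top:.2f}")   # far lower; "no agreement"
```

The particular numbers don't matter. What matters is that measuring agreement only within a pre-selected sliver at the top of the distribution guarantees that agreement will look poor, no matter how well the process discriminates across the full range of applications.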

No "agreement". "Subjectivity". Well of course not. We expect there to be variation in the subjective evaluation of grants. Oh yes, "subjective". Anyone that pretends this process is "objective" is an idiot. Underinformed. Willfully in denial. Review by human is a "subjective" process by its very definition. That is what it means.

The only debate here is how much variability we expect there to be. How much precision do we expect in the process?

The most fervent defenders of the general reliability of the NIH grant peer review process will almost invariably acknowledge that the precision of the system is not high; that the "top-[insert favored value of 2-3 times the current paylines]"-scoring grants are all worthy of funding and have very little objective space between them.

Yet we still seem to see this disgruntled applicant phenotype, responding with raucous applause to a crock-of-crap conclusion like that of Pier and colleagues, who seem to feel that it is somehow possible to have a grant evaluation system that is perfect. One that returns the exact same score for a given proposal each and every time*. I just don't understand these people.
__
Elizabeth L. Pier, Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, Mitchell J. Nathan, Cecilia E. Ford and Molly Carnes, Low agreement among reviewers evaluating the same NIH grant applications. 2018, PNAS: published ahead of print March 5, 2018, https://doi.org/10.1073/pnas.1714379115

*And we're not even getting into the fact that science moves forward and that what is cool today is not necessarily anywhere near as cool tomorrow.


Ruining scholarship, one bad mentor at a time

via comment from A Salty Scientist:

Gnosis:

When you search for papers on PubMed, it usually gives the results in chronological order so many new but irrelevant papers are on the top. When you search papers on Google Scholar, it usually gives results ranked by citations, so will miss the newest exciting finding. Students in my lab recently made a very simple but useful tool Gnosis. It ranks all the PubMed hits by (Impact Factor of the journal + Year), so you get the newest and most important papers first.

Emphasis added, as if I need to. You see, relevant and important papers are indexed by the journal impact factor. Of course.
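Just so we're clear about what that ranking does, here is a minimal sketch of the scoring rule as the announcement describes it, IF plus year. The field names and sample records are my own illustrative assumptions, not Gnosis internals:

```python
# The announced ranking rule: score = journal Impact Factor + publication
# year. Field names and sample records below are hypothetical.

def gnosis_score(paper: dict) -> float:
    return paper["journal_if"] + paper["year"]

papers = [
    {"title": "Solid new result, specialist journal", "journal_if": 3.2, "year": 2018},
    {"title": "Decades-old paper, glamour journal", "journal_if": 41.6, "year": 1981},
]

# "Newest and most important papers first," per the announcement
for p in sorted(papers, key=gnosis_score, reverse=True):
    print(f"{gnosis_score(p):6.1f}  {p['title']}")
```

Note the exchange rate this sets: one point of Impact Factor buys one year of recency, so a sufficiently glamorous journal outranks decades of "newest". That is the sense in which "important" is being operationalized here.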


How AAAS and Science magazine really feel about sexual harassment cases in science

Michael Balter wrote a piece, published in Science magazine, about sexual harassment accusations against paleoanthropologist Brian Richmond, the curator of human origins at the American Museum of Natural History.

This story has been part of what I hope is a critical mass of stories publicizing sexual harassment in academia. Critical, that is, to stimulating real improvement in workplaces and a decrease in tolerance for sexually harassing behavior on the part of established scientists toward their underlings.

There has been a very disturbing series of tweets from Balter today.

Holy....surely it isn't connected to....

Oh Christ, of course it is....

but they published it so...?

Well THAT should have a nicely suppressing effect on journalists who may think about writing up any future cases of sexual harassment in academia.

UPDATE: Blog entry from Balter.
__
ETA: I am particularly exercised about this after completing, just this week, a survey from AAAS about what the membership expects from them. The survey did not seem to have a check box item for "Fight against scientific and workplace misconduct".


Birds of a feather...

Sep 22 2015 Published under Staring in Disbelief

Some of you may have been following the news about venture capitalist Martin Shkreli, who decided to raise the price of the toxoplasmosis drug Daraprim from $13.50 a pill to $750.

Mr. Shkreli has gone on to enrage basically everybody by defending his moves on social media and traditional media with, shall we say, aplomb.

Then one of the scitweeps remembered something interesting:

What?

Yep.

Of the $2 million seed money, New York-based Retrophin and the Wilsey family foundation in San Francisco have combined to contribute about one-third. The rest has come from angel funders in increments of $10,000 to $400,000, Perlstein says.

Perlstein first caught Retrophin CEO Martin Shkreli’s attention on Twitter, and their exchange led to a meeting at the J.P. Morgan conference in San Francisco.

Sounds like the start of a beautiful relationship.


The saddest thing I have read on the internet in quite some time

Mar 12 2015 Published under Staring in Disbelief


Ignorance is dangerous when it comes to Journal Impact Factor

A Twitt by someone who appears to be a postdoc brought me up short.

@mbeisen @neuromusic @drisis @devinberg Does this mean I an screwed since I have NO FREAKING CLUE what the IF are of journals I publish in?!

HOLY CANOLI!

A followup from @mrhunsaker wasn't much better.

@drisis @mbeisen @neuromusic @devinberg I agree that high IF is demanded. I'm constantly asked to find a Higher Impact co-author & I refuse

What this even means I do not know*. A "Higher Impact co-author"? What? Maybe it means collaborating with someone doing something that will get your own work into a higher-IF journal? Anyway....

The main point here is that no matter your position on the Journal Impact Factor, no matter the subfield of biomedical science in which you reside, no matter the nature of your questions, models and data...it is absolutely not okay to not understand the implications of the IF. Particularly by the time you are a postdoc.

You absolutely need to understand the IF of the journals you publish in, the journals people in your subfield publish in, and the journals that the people who will be judging you publish in. You need to understand the range, what represents a bit of a stretch for your work, what is your bread-and-butter zone and what is a dump journal.

If your mentors and fellow (more senior) trainees are not bringing you up to speed on this stuff they are committing mentoring malpractice.

__
*UPDATE: apparently this person meant that, for textbook chapters and review articles, editors were suggesting a more senior person should be involved. A different issue....but the phrasing as "higher impact" co-author is disturbing.


A little reminder of why we have IRBs. Did I mention it is still Black History Month?

Reputable citizen-journalist Comradde PhysioProffe has been investigating the doings of a citizen science project, ubiome. Melissa of The Boundary Layer blog has nicely explicated the concerns about citizen science that uses human subjects.

And this brings me to what I believe to be the potentially dubious ethics of this citizen science project. One of the first questions I ask when I see any scientific project involving collecting data from humans is, “What institutional review board (IRB) is monitoring this project?” An IRB is a group that is specifically charged with protecting the rights of human research participants. The legal framework that dictates the necessary use of an IRB for any project receiving federal funding or affiliated with an investigational new drug application stems from the major abuses perpetrated by Nazi physicians during World War II and scientists and physicians affiliated with the Tuskegee experiments. The work that I have conducted while affiliated with universities and with pharmaceutical companies has all been overseen by an IRB. I will certainly concede to all of you that the IRB process is not perfect, but I do believe that it is a necessary and largely beneficial process.

My immediate thought was about those citizen scientist, crowd-funded projects that might happen to want to work with vertebrate animals.

I wonder how this would be received:

“We’ve given extensive thought to our use of stray cats for invasive electrophysiology experiments in our crowd funded garage startup neuroscience lab. We even thought really hard about IACUC approvals and look forward to an open dialog as we move forward with our recordings. Luckily, the cats supply consent when they enter the garage in search of the can of tuna we open every morning at 6am.”

Anyway, in citizen-journalist PhysioProffe's investigations he has linked up with an amazing citizen-IRB-enthusiast. A sample from the latter's recent guest post on the former's blogge.

Then in 1972, a scandal erupted over the Tuskegee syphilis experiment. This study, started in 1932 by the US Public Health Service, recruited 600 poor African-American tenant farmers in Macon County, Alabama: 201 of them were healthy and 399 had syphilis, which at the time was incurable. The purpose of the study was to try out treatments on what even the US government admitted to be a powerless, desperate demographic. Neither the men nor their partners were told that they had a terminal STD; instead, the sick men were told they had “bad blood” — a folk term with no basis in science — and that they would get free medical care for themselves and their families, plus burial insurance (i.e., a grave plot, casket and funeral), for helping to find a cure.

When penicillin was discovered, and found in 1947 to be a total cure for syphilis, the focus of the study changed from trying to find a cure to documenting the progress of the disease from its early stages through termination. The men and their partners were not given penicillin, as that would interfere with the new purpose: instead, the government watched them die a slow, horrific death as they developed tumors and the spirochete destroyed their brains and central nervous system. Those who wanted out of the study, or who had heard of this new miracle drug and wanted it, were told that dropping out meant paying back the cost of decades of medical care, a sum that was far beyond anything a sharecropper could come up with.

CDC: U.S. Public Health Service Syphilis Study at Tuskegee
NPR: Remembering Tuskegee
PubMed: Syphilitic Gumma


Sigh.

Aug 17 2012 Published under Staring in Disbelief

via Crommunist.


A lasting record of your achievement....just another value added by Elsevier!

Apr 16 2012 Published under Scientific Publication, Staring in Disbelief

I am still not entirely sure they are not kidding with this. Apparently for a mere $39.95 (plus tax and shipment) you can get a framed certificate which marks the publication of your article in one of the academic journals published by Elsevier.

Certificate of publication

A lasting record of scientific achievement, this Certificate of Publication is delivered ready to display in a high-quality frame, dark brown wood with gold trim.

You may also want to purchase a poster [$28.95, plus tax and shipment] or make a book of all your favorite articles [prices start at $50...plus tax and shipment].

[h/t: @FakeElsevier]


Thanks for dying!

Apr 13 2012 Published under Society for Neuroscience, Staring in Disbelief

The new SfN award, named for the legendary Patricia Goldman-Rakic, honors dead people.

That's right, the site emphasizes that it is a posthumous award for scientists who were fabulous, supported women in science, were active in SfN or other academic organizations....all that good stuff.

Plus, dead. Not living. A sort of ex-scientist.
This is nuts.

Honor people while they are still alive. If someone dies tragically early, sure, make the award posthumously. But let's put our focus on recognizing people while they can still receive the accolades.

