Archive for the 'Peer Review' category

That study of peer review perplexes me

Apr 24 2015 Published by under Grant Review, NIH, NIH Careerism, Peer Review

I just can't understand what is valuable about showing that a 1%ile difference in voted score leads to a 2% difference in total citations of papers attributed to that grant award. All discussions of whether NIH peer review is working or broken center on the supposed failure to fund meritorious grants and the alleged funding of non-meritorious grants.
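To make the quoted effect size concrete, here is a back-of-envelope sketch. The 2%-per-percentile figure is the study's claim as characterized above; the example score gaps, and the assumption that the effect compounds multiplicatively across percentiles, are mine:

```python
# If each 1-percentile improvement in voted score predicts ~2% more
# total citations (the study's claim), what does a given scoring
# disagreement imply? Assumes the effect compounds per percentile.
effect_per_percentile = 0.02  # from the study as characterized above

def predicted_citation_ratio(percentile_gap):
    """Citation ratio implied by a score gap of `percentile_gap` points."""
    return (1 + effect_per_percentile) ** percentile_gap

# The 4%ile-versus-2%ile disagreement nobody loses sleep over:
print(f"{predicted_citation_ratio(2):.2f}x")   # ~1.04x, i.e., ~4% more citations
# Even a 10-percentile gap in voted score:
print(f"{predicted_citation_ratio(10):.2f}x")  # ~1.22x
```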

Please show me one PI who is upset that her 4%ile funded grant really deserved a 2%ile, and who thinks that this shows peer review is horribly broken.

The real issue, how a grant overlooked by the system would fare *were it to be funded*, is actually addressed to some extent by the graph of citations to clearly outlying grants funded by exception.

This is cast as Program rescuing those rare, exceptionally brilliant proposals. But again, how do we know the ones that Program fails to rescue wouldn't have performed well?

12 responses so far

New plan for angry authors

Two years after your paper is published in the Journal of SocietyB, send the rejecting Editor a citation report showing that it would have quadrupled the JIF of JournalA, the journal that rejected it.
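For the record, the arithmetic behind the taunt is real enough: at a small journal, one heavily cited paper can move the JIF dramatically. A rough sketch, with entirely hypothetical numbers:

```python
# JIF for year Y = (citations in Y to items published in Y-1 and Y-2)
#                  / (citable items published in Y-1 and Y-2).
# All numbers below are made up for illustration.

def jif(citations, citable_items):
    return citations / citable_items

# JournalA as it stands: 20 citable items drawing 40 citations.
print(f"{jif(40, 20):.1f}")            # 2.0

# JournalA had they accepted your paper, which drew 130 citations:
print(f"{jif(40 + 130, 20 + 1):.1f}")  # 8.1 -- roughly quadrupled
```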

Let's make this a thing, people. 

25 responses so far

Review your own CV. Frequently.

Apr 14 2015 Published by under Careerism, NIH, Peer Review

Yesterday's review of the research publications of a person who had written about closing down his lab due to lack of funding in this "unfair" grant award environment touched a nerve with at least one Reader. I assume it was uncomfortable for many of you to read.

It was uncomfortable for me to write.

You can tell because I felt compelled to slip in the odd caveat about my own record. I could write one of those reviews of my own career that would be equally critical and uncomfortable, if not more so.

No doubt more than one of you got the feeling that if I wrote a similar review of your record you would come up wanting... or at least that PhysioProffe would jump in to tell you how shitasse you are*. Certainly at least one correspondent expressed this feeling.

But that tinge of anxiety, fear and possibly shame that you feel should tell you that it is a good idea to perform this little review of yourself now and again. Good to try to step outside of your usual excuses to yourself and see how your CV looks to the dispassionate observer who doesn't know anything about your career other than the publication and NIH-award (or other grants, as relevant) record.

Do you have obvious weaknesses? Too few publications? Too few first/last-author papers (as appropriate)? Too few collaborations? Insufficiently high Journal Impact Factor points? Etc.

What is all of this going to say to grant reviewers, hiring committees or promotions committees?

Then, this allows you to do something about it. You can't change the past, but you can alter the course of your future.

In some situations, like crafting the NIH Biosketch Personal Statement, you do actually have the opportunity to alter the past... not the reality, but certainly the perception of it. So that is another place where the review of your CV helps. That voice of excuse-making that arises? Leverage it. You DO have reasons for certain weaknesses, and perhaps other features of your CV help to overcome them, if they are just pointed out properly.

___
*he wouldn't, btw.

32 responses so far

McKnight posts an analysis of NIH peer review

Apr 08 2015 Published by under NIH, NIH Budgets and Economics, NIH funding, Peer Review

Sortof.

In his latest column at ASBMB Today, Steve McKnight attempts to further his assertion that peer review of NIH grants needs to be revamped so that more qualified reviewers are doing the deciding about what gets funded.

He starts off with a comment that further reveals his naivete and noobitude when it comes to these issues.

Reviewers judge the application using five criteria: significance, investigator, innovation, approach and environment. Although study sections may weigh the importance of these criteria to differing degrees, it seems to me that feasibility of success of the proposed research plan (approach) tends to dominate. I will endeavor to provide a quantitative assessment of this in next month’s essay.

The NIH, led by then-NIGMS Director Berg, already provided this assessment. Ages ago. Try to keep up. I mention this because it is becoming an obvious trend that McKnight (and, keep in mind, many of his co-travelers who don't reveal their ignorance quite so publicly) spouts off his ill-informed opinions without the benefit of the data that you, Dear Reader, have been grappling with for several years now.

As reported last month, 72 percent of reviewers serving the HHMI are members of the National Academy of Sciences. How do things compare at the NIH? Data kindly provided by the CSR indicate that there were 7,886 reviewers on its standing study sections in 2014. Evaluation of these data reveals the following:

48 out of 324 HHMI investigators (15 percent) participated in at least one study section meeting.
47 out of 488 NIH-funded NAS members (10 percent) participated in at least one study section meeting.
11 of these reviewers are both funded by HHMI and NAS members.

These 84 scientists constituted roughly 1.1 percent of the reviewer cadre utilized by the CSR.

This tells us nearly nothing of importance. How many investigators from other pertinent slices of the distribution serve? ASBMB members, for example? PIs from the top 20, 50, 100 funded Universities and Medical Schools? How many applications do NAS / HHMI investigators submit each year? In short, are they over- or under-represented in the NIH review system?
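(For what it is worth, the overlap arithmetic in the quoted figures is internally consistent; a quick check below, with the counts taken straight from McKnight's column. What it cannot answer is exactly the base-rate question above.)

```python
# Counts as reported in McKnight's ASBMB Today column.
hhmi_reviewers = 48     # of 324 HHMI investigators who served
nas_reviewers = 47      # of 488 NIH-funded NAS members who served
both = 11               # reviewers who are HHMI-funded AND NAS members
total_reviewers = 7886  # CSR standing study section reviewers, 2014

# Inclusion-exclusion for the unique "elite" reviewers.
elite = hhmi_reviewers + nas_reviewers - both
print(elite)                             # 84
print(f"{elite / total_reviewers:.1%}")  # 1.1%
```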

Anyway, why focus on these folks?

I have focused on the HHMI investigators and NAS members because it is straightforward to identify them and quantify their participation in the review process. It is my belief that HHMI investigators and NIH-funded members of the NAS are substantively accomplished. I readily admit that scientific accomplishment does not necessarily equate to effective capacity to review. I do, however, believe that a reasonable correlation exists between past scientific accomplishment and capacity to choose effectively between good and poor bets. This contention is open for debate and is — to me — of significant importance.

So confused. First, we get the supposed rationale: these elite scientists are readily discernible amongst a host of well-qualified folks, so that's why he used them for his example, aka the Street Lamp excuse. Next we get a ready admission that the entire thesis he's been pursuing since the riff-raff column is flawed, followed immediately by a restatement of his position based on... "belief". While admitting it is open to debate.

So how has he moved the discussion forward? All that we have at this point is his continued assertion of his position. The data on study section participation do exactly nothing to address his point.


Third, it is clear that HHMI investigators and NIH-funded members of the NAS participate in study sections charged with the review of basic research to a far greater extent than clinical research. It is my belief that study sections involving HHMI investigators and NAS members benefit from the involvement of highly accomplished scientists. If that is correct, the quality of certain basic science study sections may be high.

Without additional information this could be an entirely circular argument. If HHMI and NAS folks are selected disproportionately for their pursuit of basic science (I believe they are, Professor McKnight. Shall you accept my "belief" as we are expected to credit yours? Or perhaps should you have looked into this?) then of course they would be disproportionately represented on "basic" study sections. If only there were a clinically focused organization of elite good-old-backslappers-club folks to provide a suitable comparison of more clinically focused scientists.

McKnight closes with this:

I assume that it is a common desire of our biomedical community that all sources of funding, be they private or public, find their way to the support of our most qualified scientists — irrespective of age, gender, ethnicity, geographical location or any other variable. In subsequent essays, I will offer ideas as to how the NIH system of grant award distribution might be altered to meet this goal.

Nope. We want the funding to go to the most important science. Within those constraints we want the funding to go to highly qualified scientists, but we recognize that identifying "the most qualified" is a fool's errand. Other factors come into play. Such as "the most qualified who are not overloaded with other research projects at the moment". Or, "the most qualified who are not essentially carbon copies of the three other folks funded in similar research at the moment".

This is even before we get into the very thorny argument over qualifications and how we identify the "most" qualified for any particular purpose.

McKnight himself admits to this when he claims that there are lots of other qualified people but that he selected HHMI/NAS out of mere convenience. I wonder if it will eventually trickle into his understanding that this mere convenience pollutes his entire thinking on this matter.

h/t: philapodia

48 responses so far

The golden rule of peer review

Apr 04 2015 Published by under Conduct of Science, Peer Review

Review as you would wish to be reviewed.

14 responses so far

Critique writing: Manuscript review

Apr 02 2015 Published by under Peer Review

challdreams wrote on rejection.

 These things may or may not be part of your personal life, where rejection rears its head at times and you are left to deal with the fall out. And that type of rejection is seldom based on "your writing" but rather on "you as a person" or "things you did", which is a little more personal and a little harder to 'accept and get back on the horse'.

It made me think of how I try to write criticisms of manuscripts that focus on the document in front of me. The data provided and the interpretations advanced. The hypothesis framed.

I try to write criticisms about whether the data as presented do or do not support the claims. The claims as advanced by the authors. This keeps me away, I hope, from saying the authors are wrong, that their experimental skills are deficient or that they are stupid.

It can all be a matter of phrasing, because often the authors hear "you are stupid" when this is not at all what the reviewer thinks she is saying.

22 responses so far

Dear Authors, Don't do this. Ever.

This has been bopping around on the Twitts lately...

6 responses so far

Your Grant in Review: Credible

Jan 30 2015 Published by under NIH, NIH Careerism, NIH funding, Peer Review

I am motivated to once again point something out.

In ALL of my advice to submit grant applications to the NIH frequently and on a diversity of topic angles, there is one fundamental assumption.

That you always, always, always send in a credible application.

That is all.

17 responses so far

Occasionally, it doesn't go so well for Reviewer #3

Jan 29 2015 Published by under Peer Review


BikeMonkey Post

One inexorable rule of the jungle...err, savannah, is that the predatory carnivore that takes out the old, the slow and the weak members of the herd is actually doing a favor for Wildebeest kind.

Somewhere in there is a lesson for the study section reviewer.

Continue Reading »

6 responses so far

Are your journals permitting only one "major revision" round?

Skeptic noted the following on a prior post:

First time submitted to JN. Submitted revision with additional experiments. The editor sent the paper to a new reviewer and he/she asks additional experiments. In the editor's word, "he has to reject the paper because this was the revision."

This echoes something I have only recently heard about from a peer. Namely, that a journal editor said a manuscript was being rejected due to* a policy of not permitting multiple rounds of revision after a "major revisions" decision.

The implications are curious. I have never yet been told by a journal editor, when asked to review a manuscript, that this is their policy.

I will, now and again, give a second recommendation for Major Revisions if I feel like the authors are not really taking my points to heart after the first round. I may even switch from Minor Revisions to Major Revisions in such a case.

Obviously, since I didn't select the "Reject" option in these cases, I didn't write my review thinking that my recommendation was in fact a "Reject" rather than a "Major Revisions".

I am bothered by this. It seems that journals are probably adopting these policies because they can, i.e., they get far more submissions than they can print. So one way to go about triaging the avalanche is to assume that manuscripts requiring more than one round of fighting over revisions can be readily discarded. But this ignores the intent of the peer reviewer to a large extent.

Well, now that I know this about two journals for which I review, I will adjust my behavior accordingly. I will understand that a recommendation of "Major Revisions" on the revised version of the manuscript will be interpreted by the Editor as "Reject" and I will supply the recommendation that I intend.

Is anyone else hearing these policies from journals in their fields?
__
*having been around the block a time or two, I hypothesize that, whether stated or not, those priority ratings that peer reviewers are asked to supply have something to do with these decisions as well. The authors generally only see the comments and may have no idea that that "favorable" reviewer who didn't find much fault with the manuscript gave them a big old "booooooring" on the priority rating.

47 responses so far

Older posts »