Archive for the 'Peer Review' category
I just can't understand what is valuable about showing that a 1%ile difference in voted score leads to a 2% difference in total citations of papers attributed to that grant award. All discussions of whether NIH peer review is working or broken center on the supposed failure to fund meritorious grants and the alleged funding of non-meritorious grants.
Please show me one PI who is upset that her 4%ile funded grant really deserved a 2%ile, and who thinks this shows that peer review is horribly broken.
The real issue, how a grant overlooked by the system would fare *were it to be funded*, is actually addressed to some extent by the graph on citations to clearly outlying grants funded by exception.
This is cast as Program rescuing those rare, brilliant proposals. But again, how do we know the ones that Program fails to rescue wouldn't have performed well?
Two years after your paper is published in the Journal of SocietyB, send the rejecting Editor the citation report showing that it quadrupled the JIF of JournalA, the journal that rejected it.
Let's make this a thing, people.
Yesterday's review of the research publications of a person who had written about closing down his lab, due to lack of funding in this "unfair" grant award environment, touched a nerve with at least one Reader. I assume it was uncomfortable for many of you to read.
It was uncomfortable for me to write.
You can tell because I felt compelled to slip in the odd caveat about my own record. I can write one of those reviews about my own career that would be equally, if not more, critical and uncomfortable.
No doubt more than one of you got the feeling that if I wrote a similar review of your record you would come up wanting... or at least that PhysioProffe would jump in to tell you how shitasse you are*. Certainly at least one correspondent expressed this feeling.
But that tinge of anxiety, fear and possibly shame that you feel should tell you that it is a good idea to perform this little review of yourself now and again. It is good to try to step outside of your usual excuses to yourself and see how your CV looks to the dispassionate observer who doesn't know anything about your career other than the publication and NIH-award (or other grants, as relevant) record.
Do you have obvious weaknesses? Too few publications? Too few first/last-author papers (as appropriate)? Too few collaborations? Insufficiently high Journal Impact Factor points? Etc.
What is all of this going to say to grant reviewers, hiring committees or promotions committees?
Then, this allows you to do something about it. You can't change the past but you can alter the course of your future.
In some situations, like crafting the NIH Biosketch Personal Statement, you actually do have the opportunity to alter the past... not the reality, but certainly the perception of it. So that is another place where the review of your CV helps. That voice of excuse-making that arises? Leverage it. You DO have reasons for certain weaknesses, and perhaps other features of your CV help to overcome them if they are just pointed out properly.
*he wouldn't, btw.
In his latest column at ASBMB Today, Steve McKnight attempts to further his assertion that peer review of NIH grants needs to be revamped so that more qualified reviewers are doing the deciding about what gets funded.
He starts off with a comment that further reveals his naivete and noobitude when it comes to these issues.
Reviewers judge the application using five criteria: significance, investigator, innovation, approach and environment. Although study sections may weigh the importance of these criteria to differing degrees, it seems to me that feasibility of success of the proposed research plan (approach) tends to dominate. I will endeavor to provide a quantitative assessment of this in next month’s essay.
The NIH, led by then-NIGMS Director Berg, already provided this assessment. Ages ago. Try to keep up. I mention this because it is becoming an obvious trend that McKnight (and, keep in mind, many of his co-travelers who don't reveal their ignorance quite so publicly) spouts off his ill-informed opinions without the benefit of the data that you, Dear Reader, have been grappling with for several years now.
As reported last month, 72 percent of reviewers serving the HHMI are members of the National Academy of Sciences. How do things compare at the NIH? Data kindly provided by the CSR indicate that there were 7,886 reviewers on its standing study sections in 2014. Evaluation of these data reveals the following:
48 out of 324 HHMI investigators (15 percent) participated in at least one study section meeting.
47 out of 488 NIH-funded NAS members (10 percent) participated in at least one study section meeting.
11 of these reviewers are both funded by HHMI and NAS members.
These 84 scientists constituted roughly 1.1 percent of the reviewer cadre utilized by the CSR.
This tells us nearly nothing of importance. How many investigators from other pertinent slices of the distribution serve? ASBMB members, for example? PIs from the top 20, 50, 100 funded Universities and Medical Schools? How many applications do NAS / HHMI investigators submit each year? In short, are they over- or under-represented in the NIH review system?
Anyway, why focus on these folks?
I have focused on the HHMI investigators and NAS members because it is straightforward to identify them and quantify their participation in the review process. It is my belief that HHMI investigators and NIH-funded members of the NAS are substantively accomplished. I readily admit that scientific accomplishment does not necessarily equate to effective capacity to review. I do, however, believe that a reasonable correlation exists between past scientific accomplishment and capacity to choose effectively between good and poor bets. This contention is open for debate and is — to me — of significant importance.
So confused. First, there is the supposed rationale that these elite scientists are readily discernible amongst a host of well-qualified folks, and that's why he has used them for his example, aka the Street Lamp excuse. Next we get a ready admission that the entire thesis he's been pursuing since the riff-raff column is flawed, followed immediately by a restatement of his position based on... "belief". While admitting it is open to debate.
So how has he moved the discussion forward? All that we have at this point is his continued assertion of his position. The data on study section participation do exactly nothing to address his point.
Third, it is clear that HHMI investigators and NIH-funded members of the NAS participate in study sections charged with the review of basic research to a far greater extent than clinical research. It is my belief that study sections involving HHMI investigators and NAS members benefit from the involvement of highly accomplished scientists. If that is correct, the quality of certain basic science study sections may be high.
Without additional information this could be an entirely circular argument. If HHMI and NAS folks are selected disproportionately for their pursuit of basic science (I believe they are, Professor McKnight. Shall you accept my "belief" as we are expected to credit yours? Or perhaps should you have looked into this?) then of course they would be disproportionately on "basic" study sections. If only there were a clinically focused organization of elite good-old-backslappers-club folks to provide a suitable comparison of more clinically focused scientists.
McKnight closes with this:
I assume that it is a common desire of our biomedical community that all sources of funding, be they private or public, find their way to the support of our most qualified scientists — irrespective of age, gender, ethnicity, geographical location or any other variable. In subsequent essays, I will offer ideas as to how the NIH system of grant award distribution might be altered to meet this goal.
Nope. We want the funding to go to the most important science. Within those constraints we want the funding to go to highly qualified scientists, but we recognize that identifying "the most qualified" is a fool's errand. Other factors come into play. Such as "the most qualified who are not overloaded with other research projects at the moment". Or, "the most qualified who are not essentially carbon copies of the three other folks funded in similar research at the moment".
This is even before we get into the very thorny argument over qualifications and how we identify the "most" qualified for any particular purpose.
McKnight himself admits to this when he claims that there are lots of other qualified people but that he selected HHMI/NAS out of mere convenience. I wonder if it will eventually trickle into his understanding that this mere convenience pollutes his entire thinking on this matter?
challdreams wrote on rejection.
These things may or may not be part of your personal life, where rejection rears its head at times and you are left to deal with the fall out. And that type of rejection is seldom based on "your writing" but rather on "you as a person" or "things you did", which is a little more personal and a little harder to 'accept and get back on the horse'.
It made me think of how I try to write criticisms of manuscripts that focus on the document in front of me. The data provided and the interpretations advanced. The hypothesis framed.
I try to write criticisms about whether the data as presented do or do not support the claims. The claims as advanced by the authors. This keeps me away, I hope, from saying the authors are wrong, that their experimental skills are deficient or that they are stupid.
It can all be a matter of phrasing, because often the authors hear "you are stupid" when this is not at all what the reviewer thinks she is saying.
This has been bopping around on the Twitts lately...
Shit --- - F. Cesari / Nature pic.twitter.com/8vlpksZZHc
— Sarah (@Drosophilista) March 10, 2015
I am motivated to once again point something out.
In ALL of my advice to submit grant applications to the NIH frequently and on a diversity of topic angles, there is one fundamental assumption.
That you always, always, always send in a credible application.
That is all.
One inexorable rule of the jungle... err, savannah, is that the predatory carnivore that takes out the old, the slow and the weak members of the herd is actually doing a favor for Wildebeest-kind.
Somewhere in there is a lesson for the study section reviewer.