Who do you select when listing potential reviewers for your manuscripts?
I go for suggestions that I think will be favorably inclined toward acceptance. This may be primarily because they work on similar stuff (otherwise they aren't going to be engaged at all) but also because I think* they are favorable towards my laboratory.
(I have also taken to making sure I suggest at least 50% women but that is a different matter.)
I wouldn't suggest anyone that violates the clearest statement of automatic COI that pertains to me, i.e. the NIH grant review 3-year window of collaboration.
Where do you get your standards?
*I could always be wrong of course
The life of the academic scientist includes responding to criticism of their ideas, experimental techniques and results, interpretations and theoretical orientations*.
This comes up pointedly and formally in the submission of manuscripts for potential publication and in the submission of grant applications for potential funding.
There is an original submission, a return of detailed critical comments and an opportunity to respond to those critiques with revisions to the manuscript / grant application and/or argumentative rebuttal.
As I have said repeatedly in this forum, one of my most formative scientific mentors told me that you should take each and every comment seriously. Consider what is being said, why it is being said and try to respond accordingly. This mentor told me that I would usually find that by considering even the most idiotic seeming comments seriously, the manuscript (or grant application) is improved.
I have found this to be a universal truth of my professional work.
My understanding of what I was told by my mentor, versus what I have filled in additionally in my similar comments to my own trainees, is now very fuzzy. I cannot remember exactly how much of my current understanding this mentor instilled directly. For example, it is helpful to me to consider that Reviewer #3 represents about 33% of peers, instead of thinking of this person as the rare outlier. I think that one may be my own formulation. Regardless of the relative contributions of my mentor versus my lived experience, it is all REALLY valuable advice that I have internalized.
The paper and grant review process is not there, by any means, to prove to you beyond a shadow of a doubt** that the reviewer's position is correct and you are wrong. Reviewers who provide citations for their criticisms are by no means the majority in my experience... although you will see this occasionally. Even then, you could always engage the cited statements from an antagonistic default setting. This is unwise.
The upshot of this critique-not-proof system means that as a professional, you have to be able to argue against yourself in proxy for the reviewer. This is why I say you need to consider each comment thoughtfully and try to imagine where it is coming from and what the person is really saying to you. Assume that they are acting in good faith instead of reflexively jumping behind paranoid suspicions that they are just out to get you for nefarious purposes.
This helps you to critically evaluate your own product.
Ultimately, you are the one that knows your product best, so you are the one in position to most thoroughly locate the flaws. In a lot of ways, nobody else can do that for you.
Professionalism demands that you do so.
*Not an exhaustive list.
**colloquially, they are leading you to water, not forcing you to drink.
Every aspect of human endeavor that involves teaching newcomers how to do something involves both didactic and practical experiences.
That is just the way it works.
Grant review is one of those things. Formal instruction only gets the job partially done. More learning takes place in the doing.
I just can't understand what is valuable about showing that a 1%ile difference in voted score leads to a 2% difference in total citations of papers attributed to that grant award. All discussions of whether NIH peer review is working or broken center on the supposed failure to fund meritorious grants and the alleged funding of non-meritorious grants.
Please show me one PI that is upset that her 4%ile funded grant really deserved a 2%ile and that shows that peer review is horribly broken.
The real issue, how a grant overlooked by the system would fare *were it to be funded*, is actually addressed to some extent by the graph on citations to clearly outlying grants funded by exception.
This is cast as Program rescuing those rare, brilliant proposals by exception. But again, how do we know the ones that Program fails to rescue wouldn't have performed well?
Two years after your paper is published in the Journal of SocietyB, send the rejecting Editor of JournalA a citation report showing that your paper quadrupled the JIF of the journal that rejected it.
Let's make this a thing, people.
Yesterday's review of the research publications of a person who had written about closing down his lab due to lack of funding in this "unfair" grant award environment touched a nerve with at least one Reader. I assume it was uncomfortable for many of you to read.
It was uncomfortable for me to write.
You can tell because I felt compelled to slip in the odd caveat about my own record. I can write one of those reviews about my own career that would be equally, if not more, critical and uncomfortable.
No doubt more than one of you got the feeling that if I wrote a similar review of your record you would come up wanting ...or at least PhysioProffe would jump in to tell you how shitasse you are*. Certainly at least one correspondent expressed this feeling.
But that tinge of anxiety, fear and possibly shame that you feel should tell you that it is a good idea to perform this little review of yourself now and again. Good to try to step outside of your usual excuses to yourself and see how your CV looks to the dispassionate observer who doesn't know anything about your career other than the publication and NIH-award (or other grants, as relevant) record.
Do you have obvious weaknesses? Too few publications? Too few first/last-author papers (as appropriate)? Too few collaborations? Insufficiently high Journal Impact Factor points? Etc.
What is all of this going to say to grant reviewers, hiring committees or promotions committees?
Then, this allows you to do something about it. You can't change the past but you can alter the course of your future.
In some situations, like crafting the NIH Biosketch Personal Statement, you actually have the opportunity to alter the past... not the reality, but certainly the perception of it. So that is another place where the review of your CV helps. That voice of excuse-making that arises? Leverage it. You DO have reasons for certain weaknesses, and perhaps other features of your CV help to overcome them if they are just pointed out properly.
*he wouldn't, btw.
In his latest column at ASBMB Today, Steve McKnight attempts to further his assertion that peer review of NIH grants needs to be revamped so that more qualified reviewers are doing the deciding about what gets funded.
He starts off with a comment that further reveals his naivete and noobitude when it comes to these issues.
Reviewers judge the application using five criteria: significance, investigator, innovation, approach and environment. Although study sections may weigh the importance of these criteria to differing degrees, it seems to me that feasibility of success of the proposed research plan (approach) tends to dominate. I will endeavor to provide a quantitative assessment of this in next month’s essay.
The NIH, led by then-NIGMS Director Berg, already provided this assessment. Ages ago. Try to keep up. I mention this because it is becoming an obvious trend that McKnight (and, keep in mind, many of his fellow travelers who don't reveal their ignorance quite so publicly) spouts off ill-informed opinions without the benefit of the data that you, Dear Reader, have been grappling with for several years now.
As reported last month, 72 percent of reviewers serving the HHMI are members of the National Academy of Sciences. How do things compare at the NIH? Data kindly provided by the CSR indicate that there were 7,886 reviewers on its standing study sections in 2014. Evaluation of these data reveals the following:
48 out of 324 HHMI investigators (15 percent) participated in at least one study section meeting.
47 out of 488 NIH-funded NAS members (10 percent) participated in at least one study section meeting.
11 of these reviewers are both funded by HHMI and NAS members.
These 84 scientists constituted roughly 1.1 percent of the reviewer cadre utilized by the CSR.
This tells us nearly nothing of importance. How many investigators from other pertinent slices of the distribution serve? ASBMB members, for example? PIs from the top 20, 50, 100 funded Universities and Medical Schools? How many applications do NAS / HHMI investigators submit each year? In short, are they over- or under-represented in the NIH review system?
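For what it's worth, the quoted figures are at least internally consistent. A quick check of the arithmetic (all numbers taken from the excerpt above; the 84 is the union 48 + 47 − 11, which avoids double-counting the overlap):

```python
# Figures quoted from McKnight's column (via CSR data, 2014).
hhmi_reviewers, hhmi_total = 48, 324   # HHMI investigators serving / total HHMI investigators
nas_reviewers, nas_total = 47, 488     # NIH-funded NAS members serving / total
overlap = 11                           # reviewers who are both HHMI-funded and NAS members
csr_pool = 7886                        # standing study section reviewers in 2014

# Union of the two groups, subtracting the double-counted overlap.
elite_reviewers = hhmi_reviewers + nas_reviewers - overlap

print(f"HHMI participation: {hhmi_reviewers / hhmi_total:.0%}")   # -> 15%
print(f"NAS participation:  {nas_reviewers / nas_total:.0%}")     # -> 10%
print(f"Share of CSR pool:  {elite_reviewers / csr_pool:.1%}")    # -> 1.1%
```

Note that the 1.1% figure, standing alone, answers nothing about over- or under-representation: for that you would also need the share these groups make up of the applicant pool, which the column does not provide.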
Anyway, why focus on these folks?
I have focused on the HHMI investigators and NAS members because it is straightforward to identify them and quantify their participation in the review process. It is my belief that HHMI investigators and NIH-funded members of the NAS are substantively accomplished. I readily admit that scientific accomplishment does not necessarily equate to effective capacity to review. I do, however, believe that a reasonable correlation exists between past scientific accomplishment and capacity to choose effectively between good and poor bets. This contention is open for debate and is — to me — of significant importance.
So confused. First, the supposed rationale is that these elite scientists are readily discernible among a host of the well qualified, so that's why he has used them for his example: aka the Street Lamp excuse. Next we get a ready admission that the entire thesis he's been pursuing since the riff-raff column is flawed, followed immediately by a restatement of his position based on... "belief." While admitting it is open to debate.
So how has he moved the discussion forward? All that we have at this point is his continued assertion of his position. The data on study section participation do exactly nothing to address his point.
Third, it is clear that HHMI investigators and NIH-funded members of the NAS participate in study sections charged with the review of basic research to a far greater extent than clinical research. It is my belief that study sections involving HHMI investigators and NAS members benefit from the involvement of highly accomplished scientists. If that is correct, the quality of certain basic science study sections may be high.
Without additional information this could be an entirely circular argument. If HHMI and NAS folks are selected disproportionately for their pursuit of basic science (I believe they are, Professor McKnight. Shall you accept my "belief" as we are expected to credit yours? Or perhaps should you have looked into this?), then of course they would be disproportionately represented on "basic" study sections. If only there were a clinically focused organization of elite good-old-backslappers-club folks to provide a suitable comparison of more clinically focused scientists.
McKnight closes with this:
I assume that it is a common desire of our biomedical community that all sources of funding, be they private or public, find their way to the support of our most qualified scientists — irrespective of age, gender, ethnicity, geographical location or any other variable. In subsequent essays, I will offer ideas as to how the NIH system of grant award distribution might be altered to meet this goal.
Nope. We want the funding to go to the most important science. Within those constraints we want the funding to go to highly qualified scientists, but we recognize that identifying "the most qualified" is a fool's errand. Other factors come into play. Such as "the most qualified who are not overloaded with other research projects at the moment." Or "the most qualified who are not essentially carbon copies of the three other folks funded in similar research at the moment."
This is even before we get into the very thorny argument over qualifications and how we identify the "most" qualified for any particular purpose.
McKnight himself admits to this when he claims that there are lots of other qualified people but he selected HHMI/NAS out of mere convenience. I wonder if it will eventually trickle into his understanding that this mere convenience pollutes his entire thinking on this matter?