Your Grant in Review: Follow the Reviewers' Style Guide

Oct 27 2014 · Published under Grant Review, Grantsmanship, NIH Careerism, NIH funding

The NIH grant application has a tremendous amount of room for stylistic choice. No, I'm not talking about Georgia font again, nor your point-losing choice to cite your references with numbers instead of author-date.

Within the dictated structure of Aims, Significance, Innovation, etc, there is a lot of freedom.

Where do I put the Preliminary Data now that there is no defined section? What comes first in the Approach: Aim 1? The Timeline? A bunch of additional rationale/background? Do you start every Aim with a brief Rationale and then list a bunch of Experiments? Which methods are "general" enough to put at the end of Aim 3?

Do I include Future Directions?

What about discussion of Possible Pitfalls and Alternate Considerations and all that jazz?

Is the "Interpretation" for each Aim supposed to be an extensive treatise on results that you don't even have yet?

In all of this there is one certainty.

Ideally you are submitting multiple applications to a single study section over time. If not that, then you are likely submitting a revised version of an application that was not funded to the same study section that reviewed it in the first place. Study sections tend to have an evolved and transmissible culture that changes only slowly. There is a tendency for review to focus (overfocus, but there you have it) on certain structural expectations, in part as a way to be fair* to all the applications. There is a tendency for the study section to be the most comfortable with certain of these optional, stylistic features of a grant application included in juuuust the way that they expect.

So, and here is the certainty: if a summary statement suggests your application is deficient in one of these stylistic manners, just suck it up and change the applications you send to that particular study section accordingly.

Is a Timeline silly when you've laid out a very simple and time-estimated set of experiments in a linear organization throughout the Aims? Perhaps. Is it idiotic to talk about alternatives when you conduct rapid, vertically ascending eleventy science and everything you propose right now is obsolete by the time Year 2 funds? Likely. Why do you need to lead the reviewers by the hand when your Rationale and experimental descriptions make it clear how the hypothesis will be tested and what it would mean? Because.

So when your summary statement suggests a stylistic variant that you wouldn't otherwise prefer...just do it.
__
Additional Your Grant in Review posts.

*If the section has beaten up several apps because they did not appropriately discuss the Possible Pitfalls, or include Future Directions, well, they have to do it for all the apps. So the tendency goes anyway.

60 responses so far

  • One thing you didn't point out is that an important way in which these kinds of cultural expectations become embedded is that study section members tend to get chosen from the ranks of successful applicants whose grants were reviewed by that same panel.

  • Andrew Su says:

    Also a great reason to apply for the Early Career Reviewer program (http://public.csr.nih.gov/ReviewerResources/BecomeAReviewer/ECR/Pages/default.aspx). I served a couple times as a reviewer on the study section that now very often reviews my R01 proposals... Incredibly helpful...

  • drugmonkey says:

    an important way in which these kinds of cultural expectations become embedded is that study section members tend to get chosen from the ranks of successful applicants whose grants were reviewed by that same panel.

    Absolutely. And they have been beaten into the mold by hard experience and tough knocks if they don't already orient that way. Perhaps not ideal but again, there you have it. It is culture and it is hard to see where a human activity *wouldn't* generate such things.

    Also a great reason to apply for the Early Career Reviewer program

    Absofreakinglutely. Although we've heard comment now and again that it is not always easy to get selected. So definitely ask one or more SROs, try to get POs to specifically recommend you to SROs...but my sense is that people do fail to get invited so don't take that as a personal insult.

  • Andrew Su says:

    I got invited to serve based on a recommendation to the SRO from my undergrad research adviser (who was a member of the committee at the time). So yeah, do the networking thing!

  • Ola says:

    This, to the eleventyonth degree.

    Can't tell you the number of times I get PO'd at a grant because it's missing some part I expect to see next. Hell, one I reviewed last week didn't even have aims (but maybe that's a bit extreme). I'd say around 60% of all grants I review are missing some critical part - letters of collaboration, vertebrate animals, resource sharing, multiple PI plans, human subjects plans. Clearly someone in the grants office is not reading this shittio before it goes out, and even more troubling someone in CSR is not vetting it either.

    I don't mind reference formats being all over the place, but a big problem I see continually is just not enough frickin references at all! In a 12 page R01 document, if you're not rocking 50-100 refs then I'm going to assume you just don't know "the literature" (shoot me DM!) If I see a grant with 30 refs, I think "amateur", and that's hard to recover from. BSDs get extra points deducted for pulling this shit. Same goes for the "I've been around forever so this biosketch formatting BS doesn't apply to me" (limit on # of papers listed, showing PMCIDs, etc). Not cool.

  • Grumble says:

    "Why do you need to lead the reviewers by the hand when your Rationale and experimental descriptions make it clear how the hypothesis will be tested and what it would mean?"

    Because that is just how you write a grant, and if you don't do it that way, you are an idiot. A reviewer with 10 grants to read by next Monday wants to be led by the hand, and if you make her read a section twice to understand it fully, you will really piss her the fuck off. You are NEVER going to bore a reviewer by saying things like "If we observe X, we will conclude Y" rather than just leaving it implicit in the experimental design. You are going to make the reviewer think, "this is really clear and explicit and focused and the PI knows what she's talking about and I like it."

    @Ola: Really? You ding for not showing PMCIDs? Really? Thanks for providing reason #6,032 why the system is broken.

  • Neuro-conservative says:

    @Ola - It is not your responsibility to be evaluating technical compliance with formatting requirements. You are on study section to evaluate science. If you are truly altering scores based on inference of BSD narcissism, you're doing it wrong.

    IIRC, you were also dinged on this board a while back for some other weird reviewing peccadillo. I think if you are that seriously burdened by the various chips on your shoulders, you should probably decline review assignments for a while.

  • E rook says:

    I spend a lot of time ensuring every page and every letter conform to the guidelines. For grant apps and manuscripts. I think it's insulting to the reviewer and the system to not follow them. If you don't like the PMCID rule, or the biosketch format rules, then you should advocate to change them (provide, you know, evidence and a rationale for your position). I think people should be dinged for this. What other corners are they cutting?

  • dmb says:

    Limit on # of papers? What about reading instructions? "Encourages to limit" != "forbids more than 15".
    "Selected Peer-reviewed Publication and Patent Citations. NIH encourages applicants to limit the list of selected peer-reviewed publications, manuscripts in press, and patent citations to no more than 15. Do not include manuscripts submitted or in preparation. The individual may choose to include selected publications based on recency, importance to the field, and/or relevance to the proposed research."

  • Ola says:

    Given the widespread truth that there's actually nothing to distinguish the quality of science in the top 20% of grants, damn right I'm gonna look for other reasons to ding a proposal! If the formatting is crap, that'll do.

  • Grumble says:

    I wonder if the CSR ever asks its reviewers in anonymous surveys something like, "What were the real criteria you used to review grants in this round?"

    With enough answers like Ola's and E's, even they would probably conclude that the system is utterly broken - that is, that scientific review isn't; it's review based on the random whims of reviewers.

    Or, more likely, they would bury the results and never think about it again.

  • Dr Becca says:

    E rook, are you seriously saying that you ding grants for scientific flaws you presume are being made based on whether or not someone gets the page margins right?

    That is not how this works.

  • Established PI says:

    Sadly, I am seeing comments on this page that reinforce McKnight's comments that many found so offensive. Talk about not seeing the forest for the trees. The study section is there to identify the best science, not to subject grant writers to hazing rituals. Do you really need a PMCID to find a paper for which the full reference has been given? And are you really pulling out your ruler to check the margins? Is anyone really dinging applicants who don't adhere to their favorite reference numbering/naming format? This is all quite depressing.

  • Joe says:

    The reason the formatting is important is that you are striving for clarity, for getting your message across and for no important information to be lost. Many times at study section I have heard a complaint that some essential step is not mentioned in the application, and then another reviewer says that they were sure that it was. Sometimes somebody can find it during the discussion, and other times not. I wonder how many times the info was there but the reviewers didn't find it. If you've buried how many mutants you are going to make in an experimental procedures section after aim 3, it is possible no one will read that.

    You will win points for clarity (including expected formatting) and you will sometimes lose points when it is difficult for the reviewer to find the info they need to evaluate the application.

  • DrugMonkey says:

    Dr Becca et alia:

Whether the bean-counting, font-size-obsessing margin measurers are jerks or not, realize this: they exist. Prepare your grant accordingly. For goodness sake, it is the one part of this process that is entirely under your control.

  • Neuro-conservative says:

    DM - Am I correct in remembering a previous discussion with Ola about review behavior? If so, can you point me back to that thread?

  • drugmonkey says:

    Sorry, doesn't ring a bell. Probably you are right but....we have a lot of discussions of reviewer behavior.

  • Dr Becca says:

Whether the bean-counting, font-size-obsessing margin measurers are jerks or not, realize this: they exist. Prepare your grant accordingly.

    I didn't say I didn't follow formatting guidelines, I said that docking grants for not doing so is a problematic and logically flawed approach to peer review.

    My grants are fucking beautiful and clear as a crystal mountain spring.

  • drugmonkey says:

    Why is fairness "problematic and logically flawed"?


  • iGrrrl says:

Dinging grants for things like not following the directions or cultural norms is an easy way for reviewers to judge between two ideas of otherwise indistinguishable merit. In fact, we use a direct quote from a review that says the number of typographical and other errors led the reviewers to conclude that the same lack of attention to detail might also be brought to the research proposed. If you can't be bothered to double-check your grant application, why would anyone assume you double-check your study design, your protocols, and how well these are carried out by lab personnel?

Also, some of the things DM includes as "do you...?" items are, in fact, called for; if you read all of the NIH instructions, these things are in there. "Discuss potential problems, alternative strategies, and benchmarks for success anticipated to achieve the aims." Potential problems aren't in the review criteria, but they are in the NIH SF424 instructions, and in the instructions to reviewers. (You all read the instructions to reviewers, right? And the Review Criteria? They give you the questions the reviewer is meant to answer. Hand them the answers!)

    And, dmb, whenever a federal agency 'encourages' something, I take it as a "Thou Shalt".

  • Established PI says:

    DM - Not sure what kind of fairness you are referring to. By all means dock the grants that are hard to follow because the PI does a lousy job of formatting, logical order, or whatever. And sure, if the grant is 13 pages and the limit is 12, they should be dinged. But picking through for somewhat sloppy errors that do not detract from the presentation is just mean, in the sense of smallness of mind. It suggests a frame of mind that is not focused on the science, but of a power-tripping reviewer intent on meting out punishment.

  • Dr Becca says:

    Why is fairness "problematic and logically flawed"?

    My issue is with E Rook's implication ("What other corners are they cutting?") that because the grant writer fails to follow formatting instructions, their science *must* be shoddy as well, even without evidence of that in, you know, the science part of the proposal.

  • dmb says:

iGrrrl: You should not. There are reasons why this is worded that way, and there are situations when one should go over the "limit". Federal agencies do not have problems with defining limits: limits for margins and fonts are defined; limits for # of publications are not.

  • E rook says:

I have not reviewed grants. I have not successfully obtained an R01, but I have applied the last 4 cycles. I feel that if I can get every bean-counting thing within the specifications, then those investigators who do get funded should also. It would seem unjust to me otherwise, frankly. Because I could imagine a reviewer subconsciously dinging me for not even having the ability to follow instructions, and therefore not having the ability to do the work. I also would HATE to see a "follow the guidelines, noob" critique... it'd be embarrassing to share with my peers and I'd probably die a little inside if the reason I can't do science is because I formatted a biosketch wrong. Very little is within our control, so I put intense effort into, and place high value on, the things I can control. So I (personally) have not dinged for these things, but I think reviewers should. It seems sloppy, and if sloppiness is in front of the reviewer, then the applicant does sloppy work (if you think of the grant as a piece of work). For manuscripts, I get exasperated, because they are typically the product of trainees and it pisses me off to get something that didn't follow the GD instructions. Maybe the system is beating assholery into me, I'll think on it. But I've said here before, standards should be high. (Yes, I have gotten the ruler out on my research plans before submitting. I know it's crazy, but it's at least within my ability to control.)

  • > My issue is with E Rook's implication ("What other corners are they cutting?") that because the grant writer fails to follow formatting instructions, their science *must* be shoddy as well, even without evidence of that in, you know, the science part of the proposal.

    I don't think sloppy grant applications (both in formatting and poor writing) *must* mean other aspects of the research are shoddy. But most scientific disciplines require a lot of careful attention to detail, and it's much, much easier to correctly format a grant application or a manuscript than to correctly carry out an experiment. When I see a manuscript littered with errors I really start to wonder. And you don't want your reviewers to even wonder.

  • I only care about this shittio if it makes it difficult for me to understand the proposed science.

  • Grumble says:

    "Littered with errors" (that make the grant hard to read, hard to understand, and/or just plain hard to look at) is fundamentally different from "didn't put PMCID numbers in the biosketch".

    The first affects your ability to understand what the applicant has written. By all means, it is fair to ding for that.

    The second affects nothing whatsoever. The PMCID requirement is a stupid bullshitty bureaucratic regulatory thing that has no bearing whatsoever on how good the proposed science is or whether the grant is worthy of funding at all. Therefore, failure to include PMCIDs is a completely meaningless oversight that, if the NIH decides it cares more than half a shit about it when it comes time to decide on funding, can be corrected simply by the PO asking the applicant for another biosketch.

    Guess what? If you fuck up your vertebrate animals section, (a) reviewers are not supposed to reduce your score for it, and (b) if the score is good enough and the grant is considered for funding, the PO will just ask for a corrected animals section. Missing PMCIDs and other minor failures of formatting and whatnot are far less serious than an incomplete animals section and can be dealt with by program staff in the same way.

    In most cases, these omissions don't reflect sloppiness so much as the fact that there are a million things to keep track of and applicants are only human. Every grant will have at least a few of these kinds of errors. So seriously, people, don't be petty. At least try to have a little honor and humanity when you hold someone's future in your hands.

  • Grumble says:

    Speaking of formatting errors...!

  • DrugMonkey says:

    You guys know the PMCID requirement is there to encourage funded investigators to fulfill their Open Access obligations, right? There IS a purpose here, even if not directly related to evaluating a Biosketch for content.

  • Jo says:

I care about font size, but only because I think it's an unfair advantage relative to the other applications (to use a smaller font and be able to explain yourself more fully).

    But docking any kind of point because someone's missed off the PMCIDs is insanity. If you really do that, I hope you're forthright about it in both the discussion and the summary statement. Just so the entire study section knows what a whackaloon you are and can discount your views accordingly.

  • Khat says:

    @Grumble -- The vertebrate animal section should be considered when determining overall score. See: NOT-OD-10-128.

    ...If one or more of the five required elements are not addressed, the application’s impact/priority score may be negatively affected. Because reviewers are asked to consider the VAS as an additional review criterion in the determination of scientific and technical merit for each application that proposes the use of vertebrate animals, the impact/priority score may be affected when scientific questions related to the proposed animal model(s) arise.

Agree about the ridiculousness of PMCIDs.

  • DrugMonkey says:

    Dr Becca- sensible or not there are people who are going to knock you for sloppiness. And they will frame it as "indicates carelessness about her science" in discussion if not the written critique.

    Reviewers vary.

Sloppiness in execution is the one thing that is totally under your control, so why not proof it?

    E-rook- I hear you. But when you start reviewing your head will explode about sloppy, StockCritiqueViolating proposals that get good scores. You can either fume or use this information to hone your own grant strategy and crafting.

    I'm a grant reviewer that doesn't sweat the small stuff when it is detrimental to the PI's case. I might get into the bean counting if it is an attempt to advantage the app unfairly but typos? Meh. There is so much to work with in evaluating an app I never see where typos help much.

  • jmz4 says:

    I get why NIH has the PMCIDs in there, but make sure that the papers actually have one before you start dinging them for not including it.
For instance, I applied for a K99, and referenced a paper (in my biosketch and other sections) from grad school. My boss from then has not complied with the open access policies he's supposed to (despite several authors on the paper asking him to), and so it doesn't have a PMCID (even though it should, and I think will, once the two-year mark rolls around).

  • E rook says:

I do think that failure to follow instructions, or other sloppiness (say, copy/paste from a different doc into the HS section, so procedures or N's don't match the research plan and such), reflects a lack of attention to detail, care, and precision (maybe infer arrogance, like the jerk who double-parks their Mercedes). I do think these things matter in how science is performed, so in aggregate, they should affect a grant's score. I don't think it's unreasonable. From my position of insecurity, these are things I can control and it baffles me why anyone else would let them slide.

  • E rook says:

    I'm pretty sure that any author can upload a paper into the PMC system. I've done it and cc'ed the senior author. They replied with a thanks. It literally takes 15 minutes.

  • drugmonkey says:

    but make sure that the papers actually have one before you start dinging them for not including it.
For instance, I applied for a K99, and referenced a paper (in my biosketch and other sections) from grad school. My boss from then has not complied with the open access policies he's supposed to (despite several authors on the paper asking him to), and so it doesn't have a PMCID (even though it should, and I think will, once the two-year mark rolls around).

    This is the whole frigging point dude! It is to provide sufficient motivation so as to get PIs to comply with the mandate to deposit papers in PMC.

    I do think these things matter in how science is performed, so in aggregate, should affect a grant's score. I don't think it's unreasonable.

    Right but you are just making this up. You are assuming that a few typos or cut/paste failures in a grant application mean that a PI will be sloppy with her lab work. You don't really have any basis for this, especially when it comes to making a decision about one particular PI on the basis of one particular grant application.

    and arrogance? sorry but that is highly correlated with what this world thinks of as the very best scientists. it's just a plain fact.

so yeah, you are being unreasonable. You are exhibiting a clear personal bias, letting matters of inferred personality bias your anticipated viewpoint on a grant application. ...don't worry, you have plenty of company in this. Still, it isn't reasonable just because it is your opinion.

  • iGrrrl says:

    Y'know, I think I violated my own rules and buried my biggest point, so I'll expand here. It's hyper competitive these days. Good ideas rarely sell themselves, and putting a good idea in a sloppy package makes the sale harder. If you can control externals that trigger intrinsic negative reactions, and avoid triggering those reactions, why not do it? It's easy to do well, and easy to do badly. Follow the directions and make it easy to read, and don't hand reviewers anything they can StockCritique ™ you down with. Make them focus on the science.

  • E rook says:

I can be moved on this. I will reflect on it. But my knee-jerk response is that if someone presents you with a piece of work that is sloppy, then you have evidence right in front of you that they do sloppy work (or don't care, aren't careful, aren't prepared, etc etc). I'm coming from my relative position of weakness. I can't control that a reviewer missed my Power analysis, or got other ridiculous things wrong, or thinks my past training environment was crap. I don't have a million dollars to produce a CNS paper between now and the next cycle, but I can control these other things. As far as others go, I can't help but judge it when I see it. Maybe that's why I get so righteously indignant when a reviewer makes a mistake (seems to be happening less frequently): because I've worked so hard to create a more perfect document for them to review.

  • anonymous postdoc (shrewshrew) says:

    For Ola and E rook:

    "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day. — 'Ah, so you shall be sure to be misunderstood.' — Is it so bad, then, to be misunderstood? Pythagoras was misunderstood, and Socrates, and Jesus, and Luther, and Copernicus, and Galileo, and Newton, and every pure and wise spirit that ever took flesh. To be great is to be misunderstood."

    Perhaps Emerson overstated the case a bit towards the end there. However, if I found out that someone had taken out a ruler to ensure their margins were precise on a grant I was reviewing, I would not be shocked to discover little to no true innovation within the proposal itself. This obviously reflects my "clear personal bias about matters of inferred personality" on the basis of a hypothetical grant, but it's worth pointing out that it cuts both ways, and not always to the benefit of the person at the high end of the conscientiousness spectrum.

Aren't grants a numbers game? If so, I would rather send out 10 innovative ideas with missing PMCIDs than 5 perfectly formatted beauties. Diminishing returns.

  • Grumble says:

    @Khat: I stand corrected. But note the wording of the reviewer instructions with regards to the vertebrate animals section: "the impact/priority score may be affected when scientific questions related to the proposed animal model(s) arise." In other words, a poorly formatted VAS that still gets the point across doesn't raise "scientific questions" so it shouldn't affect the score.

  • Curiosity says:

    I'm with those expressing disappointment with gleeful dinging on somewhat arbitrary minutia. I also acknowledge that finding *anything* to criticize is helpful for reviewers to move forward with evaluating their pile. My hope would be that rather than StockCritiques, reviewers were honest and transparent about such criteria: If you have a predetermined number of references required to indicate scholarship, you ought to put it in the comments, "Fewer than 50 references indicates lack of serious scholarship." If reviewers feel some cowardice in writing such specific critiques, I think it indicates that they too sense such criticisms are ludicrous.

  • damit says:

    How many times must this kind of absurd discussion about reference formats and other nonscorable issues come up?

But Curiosity makes a great point...

Ola and DM....how about next time you ad hoc someplace, you specifically outline in your critique how you consider number and format of references, and providing PMCID numbers, to be factors in your scoring.

    Hope you enjoy the (much deserved) public reprimand and lecture from the SRO....and better yet you will have lots of free time in the future, b/c you won't be invited back.

  • iGrrrl says:

    I hesitate somewhat to write this, because I'm arguing from authority in some ways. A lot of the people upthread have scorned the idea that you can infer something about the applicant (arrogance, carelessness) by how they deal with the non-science parts of their grant applications (following directions, typos, clear organization, etc.). I have probably read more grant applications to more different agencies than most people commenting on this thread, even long-time reviewers. I usually also meet with, or at least talk and correspond with, the applicants.

    IME, there is a correlation. I wouldn't say the correlation was perfect, and my perceptions might be attributable to confirmation bias, since I haven't started a spreadsheet to track it empirically. But an NIH biosketch with a personal statement that says, "Yeah, I'm that guy," in two sentences in Times New Roman tells me that this person doesn't think they have to take any requirements seriously. (I would not want to do hazardous work in that lab, for one thing. Plus, if they ignore my simple instructions completely, I won't work with them because I do know from hard experience that they will be unlikely to listen to my advice on the grantsmanship, anyway; I won't waste my time and the institution's money.) If there are formatting errors, but the science is clear and compelling, their arrogance isn't my problem, and they probably don't need me anyway. ;->

    That said, sometimes digging into an incoherently written Approach with a client reveals that the applicant really hasn't thought through how these experiments might actually work. ("Do you realize that when you put together all these random parts you describe that you'll generate 600-plus samples? Can you actually process that many for [expensive technique]?" "Oh. I hadn't thought about that." "You also haven't explained why you need all those conditions." "Oh. Maybe I don't…") And these kinds of people also often have formatting errors, and IME it often correlates to not paying attention to the relevant details, as well as the ones that many of you deem irrelevant.

    Also, Curiosity, I don't think the dinging on admin details is 'gleeful'. It's more of a relief for some reviewers that there is something they can use to discriminate among really good ideas when there are tiny pay lines.

And Anonymous Postdoc (shrewshrew), I sincerely doubt that applicants who pay attention to these details only submit mediocre ideas. I would never argue that people should put more effort into the beans than the meat! But between two great ideas, one that is easy to read and understand and where the applicant showed some consideration for the requirements, and another that is sloppily presented both in coherence and in format, and ignored all requirements, which do you think reviewers even want to read?

  • jmz4 says:

    "This is the whole frigging point dude! It is to provide sufficient motivation so as to get PIs to comply with the mandate to deposit papers in PMC. "
-I know, but if my former PI hasn't done that, the paper doesn't have a PMCID, through no fault of my own, so it might look like I was leaving it off. If what E Rook says is right, and I can submit it myself (the language on the site is a little vague; it says "PI or the author", not "an author"), then it's problem solved. I was told by a couple departmental admins (who may have been misinformed) that the PI had to submit it to PMC if it's at a journal where the publishers won't do it for you (e.g., options C and D on the PMC submission page).

Anyway, I think it's taken for granted when you're writing grants that sloppiness in any way is going to cost you. If you've ever been in a position where you have to score a hopeless number of applications of similar quality for a relatively tiny number of open slots, then you're familiar with minor flaws gaining outsized importance. This isn't necessarily driven by malice or neurotic compulsion, but just by the need to differentiate between excellent candidates. If you think the grant reviewers have it tough, talk to some college admissions officers.

  • drugmonkey says:

    Ola and DM....how about next time you ad hoc someplace, you specifically outline in your critique how you consider number and format of references, and providing PMID numbers to be factors in your scoring.

    When did I say I did this, pray tell?

  • drugmonkey says:

    gleeful dinging on somewhat arbitrary minutia

    You need to listen better. People really do think that they have a justification. In this thread someone laid out the "sloppy preparation means sloppy science" assertion. They believe this. It is not "arbitrary minutia", it is a judgment of a reviewer that you don't happen to agree with. It is really not much different than a reviewer that really believes that it is better to give yet another grant to Dr. BleuHare than it is to "risk" it on untried Assistant Professor Yun Gun.

    Just like another person said that a short reference list indicates in their opinion that the PI must have given short shrift to the scholarship in the whole thing.

    I get plenty ranty on this blog about reviewer tendencies that I don't happen to agree with. I get where you are coming from. But our first goal here is to try to help people be more successful. And understanding where different reviewers are coming from assists with that process. My thought is that ranting about how various reviewer hobby horses are WRONG and they SHOULDN'T DO THAT is inferior to understanding them and writing your applications with as many common(ish) reviewer variants in mind as possible.

  • drugmonkey says:

    Also, Curiosity, I don't think the dinging on admin details is 'gleeful'. It's more of a relief for some reviewers that there is something they can use to discriminate among really good ideas when there are tiny pay lines.

    I think it is also a huge leap in logic to think that an excess of typographical errors really makes all the difference in the world between different major categories of review outcome (i.e., triage/discussed, funded/not funded, etc).

    You get three reviewers, typically. What are the odds that all three are the same variety of bean counting minutia-mavens? Poor to nonexistent, I would say. Same thing for people who think the graphs are too small to read, that complain about the figure legend font size, that seize on the lack of a diagrammed timeline, that check grant attributions on the progress report papers, etc.

    I can't think of a single time when all three reviewers (of my proposals or ones that I am reviewing) have expressed the same hobby horse riding over minutia. [Except when it was such a Mac/Word figure-placement fail that you couldn't match figures to legends or something (that was back many, many Word versions ago, I think most people have worked their problems out by now). ]

    And let me tell you, if this comes up in the discussion of an application it...well, the hobby horse gets an airing. And the panel either rolls their collective eyes and ignores the reviewer..or they nod along because they agree. And if a large number of panel members agree.....well, maybe it is not such a ridiculous concern after all?

  • iGrrrl says:

    And that's the thing, DM. These issues provide reviewers with ways to split hairs.

    jmz4, you put at the end of the citation "PMCID - in progress"

  • Philapodia says:

    Doesn't it simply come down to the fact that reviewers are human? Humans make mistakes: they may be rushed when reviewing grants and not see your important data, or they may have biases against sloppy writing. Sloppy writing pisses me off, but I try to see the big picture when I review. On the whole, though, most of us try to do our best in a tough situation that we know sucks. Simply follow the rules (they're really not that hard), don't write crap, and let the chips fall where they may. Also, start working on your next grant as soon as the first is submitted.

    iGrrrl: I've also used PMCID in progress and have never had any issues with it either.

  • E rook says:

    In my group, there is a support person part of whose job is to ensure OA compliance. We can just forward the docs, ask that it be handled, and follow up to make sure it's done correctly (in my experience, it always is), along with a sincere note of appreciation. This person's support can be federally funded, since this part of the job is federally mandated. This person also handles other federally mandated reporting requirements, etc.

  • Curiosity says:

    Clearly any judgement of mine on reviewer criteria -- minutia or otherwise -- is pretty much impotent arm-chair quarterbacking. I do find the original post and the discussion to be quite helpful in illuminating to applicants how grants are received by reviewers trying their best. I would add, though, that the discussion veered in an interesting way toward reviewers reflecting on best practices, which is where I directed my original comment on reviewers 'owning' their idiosyncrasies. It's good to know that these are aired and averaged out in committee. I really appreciate the daylight on these topics!

  • Anon_noob says:

    DM mentions in his post that this "hazing" ritual is almost inevitable - all study section (SS; no pun intended) members have been pushed around by a previous SS of margin nazis (scratch that) fundamentalists [or some category of grantsmanship fundamentalists], so this culturism is bound to happen. Practically, perhaps this is true. But I feel this is absolutely unhealthy. This culturism is not just superficial shit like PMCIDs and margins, but really the SS members dictate what kind of science is done - if there is persistent bias for or against a particular kind of science, that is bad. The NIH SROs should be actively trying to constitute panels that avoid this bias. This means deliberately excluding the practice of having the same set of people ("standing members") review grants for ~5 years (which is what they do currently - why?). Does the NIH have it backwards? They should ideally be inviting a diverse panel of NIH grantees and non-grantees to deliberately make sure no such culturism sets in and ideas are reviewed openly, without a small group of members deciding for half a decade or more who to open the doors of the club to. I should clarify that I don't say this out of personal experience with a particular SS (my experience has been as positive as one can expect given the current circumstances, and I am a realist, so I am doing what I can to fit into the "culture"), and I recognize that most individuals do their best and try to be fair, and that the issue I'm talking about is with the "system" as opposed to individuals.

  • drugmonkey says:

    This culturism is not just superficial shit like PMCIDs and margins, but really the SS members dictate what kind of science is done - if there is persistent bias for or against a particular kind of science, that is bad.

    Sure. But study sections evolve over time and usually there are other options that have different sets of expectations and biases.

    The NIH SROs should be actively trying to constitute panels that avoid this bias. This means deliberately excluding the same set of people "standing members" reviewing grants for ~5 years

    huh? Are you objecting to a given reviewer serving more than once on a panel? Don't you think applicants would howl even harder than they do already about "moving review target"?

    They should ideally be inviting a diverse panel of NIH grantees and non-grantees to deliberately make sure no such culturism sets in

    So how should people be trained in how to review grants? Should they all just come to the table with their various lacks of experience and have at it? Could work. Note that I am on the record repeatedly suggesting there is no reason to limit grant review participation to those that have already won grants.

    ideas are reviewed openly without a small group of members deciding for half a decade or more who to open the doors of the club to.

    As a minor correction, reviewers are staggered in their service. So this "small group....half a decade" stuff obscures the fact that a given set of individuals does not serve together en masse for four-six years. Last I checked, reviewers could choose the old appointment (4 years, expected to participate every round) or the new way (6 years, only two rounds per year), which throws even more disruption into the concept of the same group of gatekeepers. So I can't say exactly what fraction rotates off every year--it used to be 25%, I suppose, and now it is somewhere between that and 17%.

    The point is that a study section is constantly evolving. Not changing over night into something radically different. Just....evolving. Individuals do, by dint of repeated soapboxing and haranguing, have detectable effects in shifting the discussion. Having one particular study section of interest that I've been submitting grants to for greater than a decade, and served on, I have seen evolution in what hobby horse issues come up as frequent comments (and might therefore reflect culture, not individual behavior). I would be surprised to my bones if this section was unique in this.

  • mlsphd says:

    I just came across this blog and it's great!

    I have a naive question... When you refer to a "points-leaving choice to cite your references with numbers instead of author-date," why is that strategy a "points-leaving choice?"

    Don't you lose out on so much opportunity to explain details of your grant by using author-date, rather than numbers throughout your application? Author-date also seems less readable.

    Thanks for your input!

  • drugmonkey says:

    Author-date also seems less readable.

    That is 100% incorrect. Numbered references require a lot of flipping back and forth to determine what data are being marshaled in support of a given point. With Author-date, there is the possibility that it sticks in the reviewers' mind a bit longer, requiring less flippage.

    Reviewers are notoriously pressed for time and attention when reading grants, thus anything you do that makes it harder for them to read and apprehend your proposal is leaving potential points on the table.

    So don't do that.

    additional commentary from the peanut gallery: http://drugmonkey.scientopia.org/2013/01/30/grantrant-x/

  • Marc says:

    Thanks for the quick reply and link to the peanut gallery, drugmonkey!

    Based on the link you sent, it seems like there isn't a consensus on this, but I appreciate the input and look forward to reading more!

  • E rook says:

    I don't know if this matters, but there's the added bonus of your or your collaborators' last names showing up in the research plan, reminding reviewers (without your having to write "as our group published," etc.) that you did a lot of the work supporting your proposal. Or if it's your trainees' names, the pub is listed in your biosketch with their last name first. Not sure if this strategy works, but it seems better at putting your work in context than a bunch of arbitrary numbers. Also, the dates could show a logical progression of the science better than numbers; this seems important for the first few paragraphs.

  • so much opportunity to explain details of your grant

    If your goal in writing a grant application is to stuff in as much detail as possible, you are already fucked.

