Your Grant in Review: Future Directions

Mar 17 2015 · Grant Review, Grantsmanship

One of the most perplexing things I have learned about the review of 5 year R01 NIH grant proposals is that there exists a species of reviewer obsessed with Future Directions.

It was a revelation to me in one of my first few study section meetings that some reviewers really want to see extensive comment on where the project might be heading after the completion of 5 years of work. As in, a whole subheaded paragraph at the end of the Research Plan. This is insane to me.

Right?

For the most part, we all recognize that ongoing results in your own lab and in the field at large are going to dictate what is important to pursue five years from now. So speculation about what is coming next is silly.

And especially when I was a relatively inexperienced grant writer who had been getting beat up for "over ambitious" plans contained in a single 5 year plan, well.... I was amazed that people wanted to see even more in a speculative, hand wavey paragraph.

Consequently, I struggle with this. But I have tried to include something about Future Directions in my proposals. Yes, even now that we have only 12 precious pages to describe the actual plans for the current proposal.

I have recently seen a summary statement that describes insufficient attention paid to the Future Directions as the "primary weakness" of the proposal. I cannot even imagine what this reviewer was thinking. How can this be the primary weakness? Unless there is literally nothing else to complain about. And we know that never happens.

70 responses so far

  • E rook says:

    Riff raff reviewer, amiright?

  • Namnezia says:

    I think if you structure your significance statement correctly, the future directions become obvious. So if you say "if we complete the experiments in this proposal, we will answer problem X. Solving problem X is important because it has so far impeded us in figuring out even bigger problem Y. So if we solve X, we can then progress towards Y."

    So in a sense by explaining WHY you want to do the experiments in your proposal, it puts it in a greater context. One way to think about this criticism about "insufficient attention to future directions" is that the reviewer may have thought "ok, these experiments are cool, but if they are successful, THEN what? Where does that leave us?" So it may have been a comment more about the significance, than the lack of advance planning by the PI.

    The other interpretation is that the reviewer was just crazy.

  • Jim Woodgett says:

    If you ask me, our funding agencies should be required to tell you what their future plans are. The way things have been trending, as well as various changes in policies, it's ridiculous to expect an applicant to explain what the heck they intend to do IF the current application makes the cut and IF the research environment hasn't substantially changed in 5 or more years' time.

    The place to put the proposed research into context is at the beginning, in setting the scene for why the research plan is important and timely. The future directions section may as well be science fiction. "Should Mars One turn out not to be a galactic Ponzi scheme, I expect the astronauts will appreciate the 25% reduction in expelled gases achieved through my improved strain of brussels sprouts - as might elevator dwellers".

    The insidious nature of expected subsections is that applicants are expected to write something intelligible, whether it is appropriate or not, and that this becomes another cross to die on.

  • drugmonkey says:

    That seems a useful rubric to reconsider one's writing, for sure Namnezia. Good point.

    (but from my experience with what reviewers are saying in study section, I am indeed talking about a formal "Future Directions" section in a very direct and specific sense)

  • Dr Becca says:

    I can see it for R21s, which are supposed to lead to an R01 and set the stage for bigger things, but for R01s that just seems crazy.

  • Philapodia says:

    Future Directions: We don't have a frickkin crystal ball that can read the future, and if we did we'd use it to play Powerball (which has about the same chance of winning as submitting grants, but less work). We'll use this money to do the stuff in the proposal, and we'll also use it to fund some other smaller, tangentially-related projects to get preliminary data so we can submit more proposals in the future to hedge our bets.

  • qaz says:

    This is a holdover from when an R01 was assumed to be renewable, when it was part of a long-term research plan. This assumption (though invalid at the time) was still present in study section when I first got there, and was a regular expectation, particularly of new investigators.

    Personally, I find the future directions a useful place to put the "experiments we'll get to if we have time" and the "experiments we wish we could do but don't have preliminary data for yet". I find that a good future directions section can undercut some nasty "why don't you do X?" comments.

  • sopscientist says:

    I'm with qaz about this being a good place to head off inevitable reviewer angst about me not having included their favorite next experiment. Yes! I did indeed think of that experiment! And yes, I might get to it someday........

  • Pinko Punko says:

    I think this is a much bigger deal in the basic study sections. They act like every grant is a potential 20 year project, and if not that, then it must clearly lack significance. I know this can't be the case for other panels because you see people that make their careers iteratively on one-and-done R01s. This is just a different culture from the basic sections. For Aims that are not 5 years' worth of work, you should have some idea of what is next, but something really has to change because that is not a realistic bar.

  • drugmonkey says:

    My experiences are not in exclusively basic science type sections.

  • Pinko Punko says:

    DM, I know that, but I am just suspecting that if there is a continuum, those would be on one side. The panels with which I am familiar seem to really expect what you are talking about, and it is usually brought up like the comment above. The idea is that a discrete set of goals is somehow less meaningful than establishing a goldmine project. I can see some value to that, meaning they might be asking if you are really maximizing impact. If you can explain exactly how something is important and where the field will go, then they might feel you are the right person to do the work because you will explain it to the world. At some level it can also be a knee-jerk stock critique, and that is a problem.

  • drugmonkey says:

    I'm thinking qaz is on to something key though. old school thinking.

  • yikes says:

    I agree with qaz & sopscientist. I try to use it to mention (briefly) some key semi-obvious experiments that for one reason another seem too ambitious for the proposal.

  • Cynric says:

    "For the most part, we all recognize that ongoing results in your own lab and in the field at large are going to dictate what is important to pursue five years from now. So speculation about what is coming next is silly."

    The UK research councils go one better than this, and require an (up to two-page) report on the potential long-term socioeconomic "impact" of the research project (not the field, the specific 3-5 year project), along with a plan for how the investigators intend to realise this potential.

    In my last application I embraced this as a creative writing exercise. The feedback from the panel was that my impact statement seemed "rather far-fetched".

  • physioprof says:

    In many years of submitting and reviewing R01s, I've never seen this criticism.

  • drugmonkey says:

    And to what do you attribute that?

  • Grumble says:

    I haven't seen this criticism yet, either. That, of course, is because my subfield attracts inherently more intelligent people than yours, hence the quality of reviewers is better. (Gosh, I wish that were true.)

    Annnnyway, as to this from qaz: "I find that a good future directions section can undercut some nasty 'why don't you do X?' comments." Really? If the critical experiment - the one that gets to the heart of the hypothesis - is not formally proposed and described, but just tossed in the Future Directions salad, then my reaction as a reviewer would hardly be positive. And if the experiment isn't critical, well, why is it a positive thing to say you might do some unimportant thing in the future?

    Future Directions sections are usually ignored, and that's a good thing. They are pointless and will get you into more trouble than they are worth.

  • qaz says:

    Grumble - In my field, we tend to do more exploring and less testing of specific test-with-a-single-experiment hypotheses. This means that what we are doing is exploring an aspect of a basic question and there are a LOT of experiments one can do to move the field forward. Study sections are more looking to see if you're on the right track than that you have found the right specific experiment to answer specific question Q.

    As Yikes says, future directions is a place to say 'yes, we've thought about this super ambitious experiment, but we don't have new sexy/snazzy technique T working in our laboratory yet, so we don't dare propose it as an aim, but maybe in five years it'll be doable, and I'll bet we find really cool results if we did it.'

    By the way, hypothesis testing is vastly over-rated. What most grants are really doing (if you look at them as a whole) is exploring a space of interesting possibilities. The idea that there is a "critical" experiment to test a specific hypothesis is not really a good description of how science moves forward in my experience. By the time you can do the "critical" experiment, enough evidence has arrived from lots of different sources to convince people one way or the other.

  • MoBio says:

    I would never make a negative comment about 'future directions' and it is quite surprising to hear this (although I've learned never to be completely astonished)...that being said what I find this section useful for is...

    those grants which have so much preliminary data that it looks like everything is done (more frequent now than in the past)--it gives me a good sense of what are the real experiments they will do with the $$ if they receive it.

  • drugmonkey says:

    Aaaah. That could be the problem in this particular case.

  • Pinko Punko says:

    In my field, where the critique can be legitimate is when the proposed experiments are a data generation exercise: there is a lot of work to generate the data, but there can be questions about what to do with the data. Reviewers might refer to that as "future directions" because it is in the future, but in reality it is a critical aspect of what is proposed. It also goes to perceptions of significance. If underpants gnomes propose to steal underpants, and step two is ??????, that is where future directions would go, because step three is profit for the field or society.

  • drugmonkey says:

    Question: if one needs to get a sense of where the PI would head after the first 5 years, what does this mean for cries for "person not project" review? If the person can't be trusted to propose some more cool stuff after this current cool stuff has been accomplished....

  • Grumble says:

    " hypothesis testing is vastly over-rated"
    "[the future directions section] gives me a good sense of what are the real experiments they will do with the $$ if they receive it."

    Those comments illustrate nicely how completely fucked up and broken the whole grant system is.

    " If the person can't be trusted to propose some more cool stuff after this current cool stuff has been accomplished...."

    In a person-not-project review, the proposed research would look a lot like a future directions section (a page at most, brief outline, few specifics) and it would NOT be the main basis of the review.

  • "And to what do you attribute that?"

    Fucke if I know!

  • newbie PI says:

    From qaz: "Personally, I find the future directions a useful place to put the "experiments we'll get to if we have time" and the "experiments we wish we could do but don't have preliminary data for yet". I find that a good future directions section can undercut some nasty "why don't you do X?" comments."

    I also use this section to mention the experiments that I really want to do and actually intend to do if I get the money (but that I can't demonstrate adequate enough feasibility to list as part of the actual aim). It is pretty effed up, that we write grants so as to avoid stock critiques rather than writing what we will really do with the money.

  • "It is pretty effed up, that we write grants so as to avoid stock critiques rather than writing what we will really do with the money."

    Dunno about the "rather than", but there is a very good reason we don't write "what we will really do with the money": because when it comes to basic science grant applications seeking five years of support, other than the first experiments, we don't know what we will really do with the money.

  • drugmonkey says:

    Writing *only* to avoid SC bait leaves one in a defensive crouch and is to be done.....judiciously.

  • drugmonkey says:

    But when you write a proposal that gets a score that was fundable two decades ago (i.e., it was a decent app) and all you see in the comments are Stock Criticisms... Well, it is hard not to focus on avoiding those. You have the other part handled.

  • E rook says:

    My most recent future directions section was something like: we designed these experiments with my lab's ultimate goal of therapeutic X, which we can test in our clinical research center, but this project is required to answer questions x, y, and z first. The Co-I's main interest is an off-shoot of the project described here, and data from this project will also support that direction. The experimental data gathered will serve as a platform for the PD Scholar to further develop and test her novel informatics platforms, which will be useful to the field long term and help to launch her independence. Really, all these points were spelled out before, but (I think) it has to be under this section heading to avoid a stock critique.

  • E rook says:

    My reasoning is to give the reviewer something they can copy/paste as bullet points in the Strengths on their review, and a sound bite they can circle and hopefully use if it comes up in discussion.

  • rxnm says:

    Trying to be SC-proof is a waste of time. First, it can't be done, because the best SCs are constructed so that they can cut either way (too much / not enough, etc). Second, SCs are not deployed because you left yourself open to them, they are deployed because someone wants to down your grant but is too lazy or stupid to think of real reasons. Grants that are nothing but big steaming piles of SC bait get funded all the time, for all the usual reasons.

  • drugmonkey says:

    My reasoning is to give the reviewer something they can copy/paste as bullet points in the Strengths on their review, and a sound bite they can circle and hopefully use if it comes up in discussion.

    mine too.

  • Pinko Punko says:

    E Rook has it. "Here I helpfully provide the bullet points for review, should you prefer to ctrl-c/v"

    I don't think the system is entirely broken like Grumble does. I do think that SROs exhorting their reviewers to avoid stock critiques or better justify their scores would be a huge benefit to the process. I sort of feel panels have a fraction of really smart, really excellent reviewers, and an increasing fraction of those that take the approach of only putting effort in on 1-2 proposals, while larding the rest with SCs™. That is a downer.

  • Dave says:

    But when you write a proposal that gets a score that was fundable two decades ago

    What do you mean? It's always been this way, remember? The NIH goes through good times and bad times 😉

  • jmz4 says:

    My first foray into the *real* grant writing world has been eye opening. I wrote a K99, which was well scored on the initial submission (a few points below the payline), and *all* I got were stock criticisms on career development plans and future directions. I had absolutely no comments on the science, which was disappointing, because I know there were some pretty decent sized holes, conveniently ignored papers, and some lightly tortured logic in the proposal.

    But yeah, I ran into the same problem there. I was circumspect in describing my future goals, because that's after the mentored phase, and then after 3 years of independent work, so it just seemed incredibly silly to put down specifics about how I see all this research playing out. Especially so after being warned about being overambitious in your proposal. So I went vague and got criticized for it. So in the resubmission I just used the section to summarize the stated aims and point out a couple of ways to extend them.

    It is also notable how different reviewers think the same section should contain different things.

  • Philapodia says:

    If you can write a proposal that grabs the reviewers' imagination and gets them excited, stock critiques are less likely to happen. Grants are about sales(man/woman)ship and getting your customer to care (i.e., emotionally invest) about your product. Just like in sales, you need to think about what your customer wants and how your product will give it to them. When I hear a reviewer at study section say "this is a really cool idea!", then I usually hear a lot fewer of the SC complaints about the PI not publishing enough, the right controls not being in place, etc. When the idea is presented in a boring way, the SC comes out and you know it's sunk.

    At least in the study section I review for, the SRO ranks each of the applications based on the score and we discuss from best to worst down to the triage line. This means that the "best" grants are discussed when we're fresh and chipper, and as we go through the day and our lack of coffee/muffins is making us grumpy, we're talking about the "worse" grants, which probably affects the discussion. Would randomizing the order of discussion make the process more fair?

  • Grumble says:

    "SCs are not deployed because you left yourself open to them, they are deployed because someone wants to down your grant but is too lazy or stupid to think of real reasons"

    You see, Pinko? Totally, irredeemably fucked up and broken beyond repair. And that's only reason #3 so far.

  • Pinko Punko says:

    I don't think a system is irredeemable if you can really just ask reviewers to bring up the average effort. Discussing more grants might do it. Seems like people don't want to embarrass themselves. Or just having the chair rescue more grants. None of this will happen.

    It is sad. People are tired. They don't want to do it.

  • Philapodia says:

    "if you can really just ask reviewers to bring up the average effort. Discussing more grants might do it."

    Holy fukk, are you kidding? I was just on a study section where we discussed 25 grants in about 10 hours. At the end I had a hard time focusing on what we were discussing, and everyone was exhausted. And this was a relatively short meeting. If you want to discuss more grants, you're going to need more study sections, which costs $$$.

    I agree it's sad and people are tired. The whole process is like Russian Roulette without the warm and fuzzy feelings. But if you want to do science here you have to play the game...

  • Eli Rabett says:

    For NIH: If we accomplish X the implications for human health are Y. Accomplishing X will drive research by others in more applied areas to . . .which have the following potential outcomes.

    NIH research by definition is applied research at some level.

  • E rook says:

    Well, I picked the strategy from reading this blog, so I'm just repeating what DM has written over the years.

  • Pinko Punko says:

    Philapodia- how soon we forget. When paylines were 35%, do you think only 40% of grants were discussed? Grants used to be 25 pages and they were read. And many of them were discussed. Study sections that now get 80 grants would get 120, as far as I am aware.

    Very few reviewers will read more than their pile these days, and I know some will specifically say they only give their attention to the 2-3 that they feel might be worth the effort. Some commenters on this blog say very much the same- that this is their strategy. I recognize why they might say that, but come on. I recognize that people get tired, but 10 hours of talking- that is nothing. It is emotionally and mentally much worse given that people's careers are in the balance, but the effort is sort of trivial compared to supposedly all the work going into the reviews. If that were not so, I don't think 10-20 hours of kicking around science would be considered exhausting. We routinely handle 5 day meetings that go from 8:30 am to 10 pm and then a couple of hours or more at the bar.

    I would be ashamed to give an empty stock critique of a proposal, and I know many people are very much the same way. This being said, I recognize that there are some that would not be ashamed. I just wish there was a little bit more oversight or peer pressure to generate some shaming. There is no point rescuing a grant that is way below the line, but it certainly is depressing to see critiques that are lazy or, worse, factually wrong or sort of incoherent. There should be a safety net for that, but nobody will rescue anymore because it isn't deemed worth it. That erosion of standards seeps into the system. Certainly I've read enough 3rd reviewer critiques that seem like they've been designed to blow with the wind based on which way the first two critiques go.

    ------

    Eli- that is hilarious, when I comment on colleagues' grants I am constantly giving algebraic rhetoric like so.

  • Comradde PhysioProffe says:

    "Certainly I've read enough 3rd reviewer critiques that seem like they've been designed to blow with the wind based on which way the first two critiques go."

    It is impossible that any reviewers are "designing" their critiques to "go along" with other reviewers, because during the READ phase you don't get to see the other reviews until you post yours.

  • drugmonkey says:

    empty stock critique

    You realize that people who use what you or I think of as a Stock Critique bullet point may not themselves think of it as empty, right? Like with this Future Directions business. Some reviewers think such items are really important. They are not deploying them because they are lazy.

  • drugmonkey says:

    Would randomizing the order of discussion make the process more fair?

    Absolutely. This change (circa 2008ish?) was a disaster for score movement and for rescuing apps with significantly disparate scores from triage. I believe it was all part of Scarpa's plan to prove ("prove") that in person discussions were not necessary. (I have no evidence for that, btw, just a healthy dislike of many of his moves.)

  • AcademicLurker says:

    Having served on both in person study sections and ad hoc conference call study sections, I'm honestly not sure how much value added there is from in person meetings. Especially these days.

  • drugmonkey says:

    You don't ever see issues being resolved via discussion, AL? Disparate scores being resolved, mistakes being caught?

  • AcademicLurker says:

    Sure, but I've seen (or rather, heard) the same on conference call discussions.

  • qaz says:

    Yeah, but, PinkoPunko, back in those days, with 35% of the grants funded, that meant there was a high probability there would be someone to defend any grant in the top 50% (so you had to have a real conversation about it, and be ready for that conversation). That also meant that if you didn't like a grant, you needed to be able to explain why. And back in those days, grants tended to get in line (in DM's infamous holding pattern, waiting for the A2), so you could give critiques that made it more likely to come back better. So figure you wanted to give another 25% (beyond that top 50%) of the grants a reasonable chance to come back better. That's 75% of the grants.

    When I started reviewing grants, we were at the tail end of this system, and when I got to study section I was explicitly told "in this study section, a score of X [about 1.5/5] means I think it should be funded, a score of Y [about 2.5/5] means I want to see it again, and a score of Z [about 3.5/5] means don't come back with this project". (BTW, that left one more score of 4.5/5 for a real "stop wasting our time" message, usually used for grants that had basic problems [like the NRSA where the mentor had clearly not even read the grant, but had signed the cover page].) Our chair would start off the grants with a couple of the best (as determined from initial rankings) and a couple of the worst (as determined from initial rankings) so that we could calibrate our scores.

    What was interesting coming in to that system is that the more I saw of it, the more sense it made to me and the better a grant-writer I became.

    In today's study sections, we are EXPLICITLY told that scores are not information, that our job is not to make their grant better, that our job is simply to determine the ranking of the grants, so a 2/9 one cycle could be an 8/9 the next because scores are about where the proposal fits in the set of submitted grants.

    Given that scores are now explicitly not information and that grants are no longer "in line" and that we're only going to fund the top 5 to 10% of the grants anyway, why waste your time on more than 1 or 2 of the 10 grants you are supposed to read? You pick the 2 grants you like and decide to defend them. The other grants don't have a chance because the only grants that have a chance have all three reviewers liking them. (*)

    * For all the complaining here, this isn't actually how I work, nor is it how I've observed my colleagues working on study section. In fact, what I've observed is that every person on the study section that I've been on for the last several years has been extremely careful and knowledgeable about the grants they were assigned, that each grant we do discuss has been discussed with all intention to identify the positives, flaws, and likelihood of success, that people work incredibly hard to try to figure out what the grant-writer means and to determine if that would make a good project, that the entire study section works incredibly hard to get a fair ranking of grants.

    When I've seen 'stock critiques' used they are usually justified and accurate, even if the grant-writer doesn't think so.

  • qaz says:

    DM - I don't think the shift to doing reviews best to worst had anything to do with breaking in-person meetings. Rather, the justification we were given was that they wanted us to "spend the most time on the grants that mattered most". I think the real goal was the shift from a community of reviewers trying to make each other's science better to a simple ranking of grants for funding. If I remember right, this was also the time that they started triaging grants. There was an explicit statement that "if a grant wasn't in the fundable range, then we shouldn't waste time discussing it".

  • qaz says:

    AL - I have not found skype/conference call meetings very successful. They have all the problems of the face-to-face meetings and none of the advantages.

    However, I have found the new chat-room-based meetings to be very successful.

    Because they provide more time for each grant (5 grants discussed in parallel over 2 hours), I find they provide more opportunity for discussion and re-evaluation of scores. This is because there is time to really go look at a grant, to do some internet surfing to check up on mistakes, etc.

    In addition, because the chat-room based meetings are online (via typing), I've found much less dominance by aggressive over-confident members and larger participation by shy people and by those who don't have the same command of English.

  • AcademicLurker says:

    In addition, because the chat-room based meetings are online (via typing), I've found much less dominance by aggressive over-confident members and larger participation by shy people and by those who don't have the same command of English.

    Doesn't this give the advantage to people with years of experience arguing on the internet, stretching all the way back to the Usenet days?

  • Drugmonkey says:

    No comment.

  • Grumble says:

    "Doesn't this give the advantage to people with years of experience arguing on the internet, stretching all the way back to the Usenet days?"

    HA. Who knew that all that time I wasted arguing on usenet back in grad school (on topics completely unrelated to science) was actually part of the training experience!

  • Pinko Punko says:

    qaz, my experience over the last four years is that information content of reviews is going down. Good and bad reviews alike are not being justified in writing. This does a disservice to the science. I don't like it. This is not to say that there aren't panel members that are completely and entirely incisive and are such in short critiques. I have also seen short reviews that are entirely on point.

    If there are going to be conscious attempts to minimize bias, it seems that justifying what one writes would be one way to try to do that. It is not clear that serving up a nothing burger actually gives NIH guidance (even if there is no desire to help the applicant). If NIH wants to do the tiniest thing about the abysmal morale in this process, they would explicitly ask reviewers to give constructive critiques. I would say my first time at a panel, the majority of critiques were constructive. In the last two years, this number has dropped.

    I will agree that aggressive/over-confident members are a problem that is made worse by reduced engagement. If the panel has not already read up on a topic, such people can swing the entire room with tossed-off or ill-considered statements, and again it is up to the assigned reviewers to really be prepared.

  • Pinko Punko says:

    CPP- if your review can be read either way, and you just score it in the middle, and you know you can just change it during read phase if you are on an island, it can go with the wind. Seems pretty simple, and not only not impossible, possible!

    DM- what I mean by empty is that if one doesn't bother referencing the grant in any way in bullet points, the critique appears as if grant was not even read- or that the critique might be used generically for any grant in the pile. Certainly I recognize that some stock critiques can be very much on point, but the very "efficient" (few words) reviewers I know still seem to be able to reference the grant and use some specific words.

  • drugmonkey says:

    As an aggressive/overconfident reviewer I will suggest that the impact of such individuals may be less than you assume.

  • Pinko Punko says:

    Depends. Recent personal experience suggests that on extreme end it can have huge, but of course situationally dependent, impact.

  • Skeptic says:

    It seems that reviewers don't give any thought to grants they don't like at first sight. I have seen comments mentioning things that are totally not in our grants. We work with model organism A; the reviewer says this grant is about model organism B. We propose to study protein X, and get comments on protein Y, etc. Totally careless....

  • Philapodia says:

    @Skeptic

    You do realize that overly broad generalizations are bad, right? There are a few reviewers who do a crap job, a few who do a truly exceptional job, and the rest follow a nice Gaussian curve. People make mistakes and fuck up, or they simply don't like you, but that's why there isn't just one person reviewing, and all members can vote outside the range of the reviewers' scores if they think it's necessary. And perhaps some of the critiques come from the applicant not making their case well enough or clearly enough. Not every grant is a special snowflake. Sometimes what you think smells like roses smells like durian to everyone else. We've all been there.

  • Pinko Punko says:

    Philapodia- this is why I fight for constructive critiques and more effort in reviews- from myself and from others. A lazy positive review will be exposed and corrected most of the time in discussion. Less-than-great reviews of grants below the line, though, can be incredibly damaging to a PI's psyche. Of course we say we've all been there, because we have all been there, but PIs coming up now won't be around for very long, because it is harder and harder to "have been there". Certainly not all grants are great, but there can also be good science beyond errors of poor grantspersonship. Reviewers who cultivate those things, even when the reviews are negative, help build for the future. Yes, yes, I know DM will come along and say that the job is to tell NIH what to fund, not to help the applicant.

  • Philapodia says:

    Pinko Punko: I agree with what you're saying. I try to be a good reviewer, but reviewing in a constructive manner is a skill that not everyone naturally has. I won't say I'm a great reviewer, but as I get more experience, hopefully I will become one. Perhaps if people viewed reviewing as a skill they need to develop, rather than a necessary evil they have to get through, then reviews would get better in general.

  • Comradde PhysioProffe says:

    The culture on my study section is very careful.

  • Skeptic says:

    "And perhaps some of the critiques come from the applicant not making their case well enough or clearly enough."

    That can't be it in my case. It is very plain: the whole grant deals with one single model organism, and the reviewer is criticizing a whole different organism. Basically, the reviewer does not like any model organism other than mammals.
    And this reviewer gave such a bad score that the grant got triaged.

  • Philapodia says:

    I don't doubt your experience. Sometimes reviewers are careless dicks who push their own agenda. But your over-generalization about reviewers being careless is off base, as we are all doing this voluntarily, on top of all of our other responsibilities, on a basically pro-bono basis. Most reviewers try to do a good job, but there are always a few bad cops who make the rest look bad. Hopefully you won't get that reviewer again. Perhaps you can request in your resubmission cover letter that the careless reviewer not re-review, as they do not seem to have an unbiased opinion and won't give you a fair review? No reason you can't ask.

    BTW, all three reviewers can (and should) look at the other reviewers' comments in the read phase before the meeting. Perhaps the other reviewers didn't feel it would be worthwhile to rescue your grant for other reasons? It could be due to flaws in your grant, but it could just as easily be due to the high quality of the other projects under review, making it not worth the effort to pull your grant out of triage.

  • Pinko Punko says:

    Philapodia,

    Definitely. I love science and I like reading proposals. Many times I learn something, and I try to see past grantsmanship. I really look up to the skilled people on the panel, but I get frustrated by reviews that just seem to be simulacra of reviews. Like maybe they don't know how to do it, so they are just saying what they think should be said?

  • drugmonkey says:

    Or they see it differently from you?

  • Pinko Punko says:

    DM, that goes without saying. That is always in play. I'm not talking about what I would perceive to be that kind of situation, because that has no value to our discussion. Regardless of how one sees something, I am proposing that there are both more and less compelling ways to make a case in review. I would prefer that cases were more compelling and constructive, and that, in a perfect world, effort in review would be equivalent across one's entire pile. This seems noncontroversial. Yes, my comments assume that I am accurately describing my experiences and impressions regarding recent trends in the process. I can report that some of these perceptions are shared by other panel members, who tend to attribute it to malaise and problems of morale. How could that not be the case in our current environment?

  • "I propose that I would prefer if cases were more compelling and constructive and in a perfect world effort in review would be equivalent across ones entire pile. This seems noncontroversial."

    That may seem noncontroversial to you, but it is far from noncontroversial to me. As I see things, it is absolutely appropriate to devote substantially more effort to the review of grants that I perceive as having potential than those that I perceive as DOA.

  • Pinko Punko says:

    OK, modifying: meeting a minimum threshold of constructive review. Is the grant DOA because the science is a dead end forever, or is it DOA in the current climate, as in not perfect? It is a continuum. There are grants that are 9s, and they are obvious 9s. In general, I am talking about the 4-7 range (if the panel spreads scores). For a non-spreading panel this would be the 3-5 range.

    There are other demands on attention as well. BSDs get more thorough review no matter what, because generally some fraction of reviews will be driven by the Investigator score, meaning bad grants can get floated- these grants might be equally DOA, but they get more attention and scrutiny based on who is proposing (this is what I perceive to be the case). Reviewers will go further to see what is really there for some applications. To be fair to all applications, there should be some threshold of engagement that can be met. This doesn't seem that controversial. If you are an efficient or perceptive reviewer, spending less time on a grant doesn't mean you are giving it short shrift, but the reviewers I look up to will revisit their first impressions on all of their grants, staying self-critical of their initial assessments. That seems fair. Given that there are subconscious biases for all reviewers, how does one really assess one's preconceptions based on topic or investigator?

    My perception of panels is that reviewers outside their comfort zone for a grant's topic or field can be less engaged with those grants. Recognizing this type of thing might allow it to be mitigated with extra effort in those situations.
