Rejections and "Rejections"

May 15 2008 · Filed under Peer Review

DrDrA at Blue Lab Coats just had a paper officially "rejected", with the following initial reaction:

It's ok, these things happen and it's just a paper. I'm not really upset about it that much and will turn it over somewhere else.

Hey! Not so fast with the resignation! One thing I have learned over the years is to never take a paper "rejection" as a rejection until an editor tells you personally--not using automated boilerplate language--that she absolutely refuses to reconsider the paper.


We have a paper in press right now at a moderately high-impact journal that was originally "rejected without opportunity for revision". I e-mailed the editor and said, "These reviews don't look so bad. What's the real deal, motherfucker?"
The editor responded that the reviewers felt it was not a complete story, but if we could do x, y, and z, we could appeal the rejection. We did x, y, and z, appealed the rejection, and were invited to resubmit. (The fucking journal did, however, nail us for a second submission fee, the bastards!)
After peer review of the resubmission, we were asked to do minor revisions subject only to editorial re-review, and now we are in press!
In DrDrA's case, the issue seems to be more one of the reviewers combining a partial misunderstanding of the importance of the work with a partial misunderstanding of what scope of work would be reasonable to expect in a single publication.
In a situation like this, it can't hurt to politely and graciously explain how, in one's humble opinion, the reviewers made mistakes in the review, and ask the editor if, in light of these arguments, she would reconsider at least re-reviewing a revised version. The worst case scenario is that the editor still says no, but even then you are likely to learn more about the basis for the rejection.
Another benefit is that, again even if the editor refuses to reconsider, you have the opportunity to demonstrate to a journal editor that you are a cogent, gracious, reasonable author, and establish a bit of a personal relationship with the editor. And needless to say, every single little opportunity to make a connection with a person who has some power over your career--no matter how evanescent--must absolutely be taken advantage of.
One last point: When you approach an editor via e-mail to feel out whether a rejected paper might somehow be reconsidered for revision, you need to balance brevity with substance. You don't want to just make a naked plea for mercy, with no substantive reason at all given for thinking that reconsideration is merited. (In our case, this was the fact that the reviews "didn't look so bad".)
But you also don't want to provide a three page point-by-point rebuttal until you are invited to do so. This just makes you look like a wackaloon crackpot, will send the editor running the other way, and will not garner you any "cogent, gracious, reasonable author" brownie points that could be cashed in on some future submission.

59 responses so far

  • Katie says:

    I've often wondered how rejections worked - if you were supposed to just move on when a paper was rejected even if you thought you could fix everything in the reviewer comments. So thanks for the very smart and helpful post.

  • Orac says:

    Another thing that young investigators sometimes forget: You don't always have to do what the reviewers demand when you resubmit. There has been more than one occasion when I've seen reviewer requests that were clearly unreasonable and, in my response, I politely said why we weren't going to do what the reviewers asked. If you can justify, with a reasonable explanation, why you didn't do what the reviewers asked, you can often get your paper published without doing a whole bunch of extra experiments.

  • juniorprof says:

    Amen!! This is perhaps the single most important lesson that my postdoc mentor imparted to me. And, as an editor, said postdoc mentor also noted the same thing about your letter to the editor. Make it persuasive, but make sure it's brief!
    The fucking journal did, however, nail us for a second submission fee, the bastards!
    That's nothing, they nailed me 3 times on my journey through final acceptance at a moderately high-impact journal (which I suspect might be the same one). Let's just say it was a long and arduous fight but success was sweet when it finally came (and thank FSM for free color!!)

  • whimple says:

    I agree. Always appeal rejections. We had a paper that was "declined to be reviewed", also at a moderately high impact journal. I wrote the editor to the effect that he was crazy not to review our paper, that it was important for reasons x, y, and z, and that I didn't think he was going to have anything better cross his desk for quite a while. I finished with something along the lines of, "thanks anyway, maybe we'll do business again sometime in the future." The editor wrote back right away asking if we'd like to get our paper reviewed. Say what? Uh, yes, that would be super. The reviews come back and after some minor text revisions, the paper is published (it really is a good paper).
    I think the editors don't have an easy job. If something really new hits them, it's hard for them to know what to make of it. In this case, I think the editor initially liked the work, but was unsure what it meant, and so got the opinion of someone who just pooh-poohed it, for whatever reason. Lots of people try to bamboozle the editors, trying to sexy up the work etc., and that makes the editors naturally cautious. But if you've really got the goods, you owe it to yourself and colleagues to really do everything you can to get it published as high up as you can.

  • PhysioProf says:

    and thank FSM for free color!

    AKA, strongarming tons of authors into joining the society? Yeah, same journal!

  • juniorprof says:

    Yeah, the schmucks. Like they don't get enough of my money anyway. I think I'll protest by not using their slides for my BAW presentations next year. That'll show 'em!

  • PhysioProf says:

    Another thing that young investigators sometimes forget: You don't always have to do what the reviewers demand when you resubmit.

    Great point, holmes!

  • ikemi says:

    moderately high-impact!! haha

  • S. Rivlin says:

    PP,
    My experience exactly.
    BTW, if the editor is a 'she,' how could she be a 'motherfucker'?

  • doh says:

    this explains why papers i've reviewed and rejected 3 times still end up being published in the same journal i was reviewing for.

  • drdrA says:

    PP and S.Rivlin-
    The editor is a 'he,' so PP's colorful description is gender appropriate.
    And just so y'all know I'm writing the letter.

  • BugDoc says:

    Does this discussion make any of you wonder if more editors are needed? These comments suggest that in general, journal editors are too busy and overworked to give careful consideration to submitted manuscripts, which is the whole point of the review process (I thought). To be fair, I have worked with some really fantastic editors, but with many I get the feeling that they have way too much to do.
    If it becomes standard procedure to appeal every rejection, then it seems like the editors are building more complicating factors into the process, thus tying up more of their time.

  • Craig says:

    In the era of Pubmed, Google Scholar, etc., is it better to battle it out with reviewers at a high-tier journal or go into a lower-tier journal with the knowledge that the editor has assured immediate publication? In other words, what's more important: impact factor or time?

  • BiophysicsMonkey says:

    I was pretty clueless on this issue until I became a PI. I had always treated reviewer decisions as the final word. When one of the early papers out of my lab was rejected for what I considered to be inane reasons, a senior colleague said "why don't you argue your case to the editor?". This was an eye opener for me: "What! you can argue with a rejection?".
    I guess I was naive.

  • PhysioProf says:

    In other words, what's more important: impact factor or time?

    Impact factor. And it's not even close. And the existence of Pubmed, Google Scholar, etc doesn't have jack shit to do with it.
    (There are very rare exceptions to this rule that relate to certain absolutely hard deadlines--tenure/promotion decisions and competing grant renewals--but they are truly rare.)

  • DrugMonkey says:

    In the era of Pubmed, Google Scholar, etc., is it better to battle it out with reviewers at a high-tier journal or go into a lower-tier journal with the knowledge that the editor has assured immediate publication? In other words, what's more important: impact factor or time?
    Yes. No. It depends.
    http://drugmonkey.wordpress.com/2007/10/19/thoughts-on-the-least-publishable-unit
    To summarize for the lazy, one should be actively educating oneself on what the expectations are for the type of science you are in and intend to continue with in the future.
    http://drugmonkey.wordpress.com/2007/07/31/researching-your-cv/
    There are no simple one-size-fits-all answers, and in fact relying too much on the perspective of just one individual or a currently dominant meme in the field(s) can be highly destructive. In two ways. First, by setting a mindset that the only way to succeed is to hit a standard which you have no shot at hitting. [hint: big startup offer, major rep R1 university med school, multiple R01, huge lab, CNS publishin' career is not even the median expected value if you consider all NIH funded scientists with perfectly viable career paths] Second, by setting the mindset that if you just do what everyone in your narrowly defined area has done in the past, that is sufficient.
    My most generic response to the question is that you must attend both to Impact Factor and output rate. The one thing you can most easily control and cannot replace at a later date is output rate. Others (hi PP!) might observe that too much attention paid to a steady output rate essentially prevents you from engaging in the type of high risk activity that will lead to Nature and Science pubs.
    All I can say is that from the perspective of my low profile area of grant reviewing, one gets major props for sustained output at the society journal level and there is no particular expectation that one has to have CNS pubs. One also gets major props for CNS level pubs and this can make up for multi year gaps and/or a sharply reduced number of pubs relative to the steady slogger. Although I have seen at least one critique about rate raised until some asst prof mentioned that "d00d, a Science paper over a several year interval more than makes up for five generic pubs!"
    The thing that I cannot see working at all is gambling on a CNS pub, not generating incremental pubs and not having that CNS pub when trying for a grant or promotion. Maybe, just maybe, your local department will peer into your ongoing work enough to appreciate what you are trying to do but the grant reviewers won't give a crap. Pubs matter.
    and of course, the importance of future performance shifts all around depending on your past performance for trainees (and beyond). stage of career changes the demands. etc.

  • DrugMonkey says:

    There are very rare exceptions to this rule that relate to certain absolutely hard deadlines--tenure/promotion decisions and competing grant renewals--but they are truly rare.
    See? this is what I mean. PP doesn't know wtf he's talking about...for career paths similar to many with which I am familiar in areas closest to my own scientific domain.
    OTOH, he's spot-on for a certain other type of career path with which I am overwhelmingly familiar due to personal relationships and institutional colleagues.

  • JSinger says:

    Does this discussion make any of you wonder if more editors are needed?
    Agreed, and the fact that their business model is evaporating underneath them probably isn't helping any.
    Fortunately Open Web 2.0 is going to replace them with dueling mobs of atheists, global warming skeptics and lolcat enthusiasts.

  • juniorprof says:

    Fortunately Open Web 2.0 is going to replace them with dueling mobs of atheists, global warming skeptics and lolcat enthusiasts
    Funniest thing I've read in quite some time. Thanks!!

  • PhysioProf says:

    Dude, are you implying that I am some kind of elitist who is out of touch with the common scientist!? HAHAH!
    Srsly, yes, you are correct. My entire modus operandi has always been, and continues to be, shooting for the high-risk, high-impact science.

  • DrugMonkey says:

    My entire modus operandi has always been, and continues to be, shooting for the high-risk, high-impact science.
    and my modus operandi is that I have some shit I want to figure out and some questions I want answered.
    unfortunately the deification of "high-risk, (allegedly) high-impact science" gets in the way of that now and again.
    so I rant. every now and again.

  • drdrA says:

    DM-
    I guess I work more like you- driven by the fascinating and unsolved biological problem- it's a nice end- but you need means to get there... high impact helps with that... as I'm certain you know.
    BugDoc-
    I think you are spot on about the shortage of editors. The three-sentence review (or lack thereof) highlights this almost exactly; it should never have slipped through into my hands. It makes everyone involved in the whole review look awful- but you can easily see how this happens with overworked basic scientists (for the most part) taking on editorial responsibilities in addition to their own real must-generate-my-own-papers-and-funding jobs.
    We expect peer review to be quality control in some part, for what gets published- but where is the quality control for the peer review when the editor is not paying attention???

  • drdrA says:

    And one more thing- I'm mad about the ability of people to make stupid and irresponsible comments and hide behind the anonymity of peer review. Anonymous peer reviewers can't be held accountable for anything they say, no matter how silly- and that just doesn't seem right....

  • BiophysicsMonkey says:

    My subfield is more like DM's. Steady output in the major society journals is what you're really judged by. CNS papers are just icing on the cake.
    It really does seem to be different in the more cell bio type areas. I'm still surprised when I see people on sites like YFS (and sometimes here) assuming that CNS papers are pretty much required in order to get a job or have a prayer for promotion.
    Makes me glad that I'm in the area that I'm in.

  • Lab Cat says:

    Sigh. That was another piece of bad advice I was given at my old U. They were completely clueless there.
    Where were you five years ago when I needed advice like this?

  • PhysioProf says:

    [M]y modus operandi is that I have some shit I want to figure out and some questions I want answered.

    This is completely consistent with also always shooting for high-risk, high-impact science. The implication that those scientists who do aim high are not driven by desire to understand how shit works is grossly misleading, as is the implication that the science published in high-impact journals is "crap" or "unsound" (which I know you didn't make, but I'm just sayin').
    There is plenty of unsound crap published in lower-impact journals as well. It's just that no one finds out or gives a shit.
    We all do what we gotta do to be successful--keep our jobs and get to do science in the way that we enjoy--according to the metrics we are judged by. I completely stumbled into my post-doc without giving any thought to anything other than, "Hey! It's a fucking hoot to do these kinds of experiments."
    My mentors happened to be C/N/S type dudes, and the next thing I knew, I was too. And now I'm a PI myself in a situation where that's the expectation. And I'd be lying if I said I don't get off majorly on the thrills and chills of competing in that arena.
    But I really resent the implication that aiming research programs for high risk and high impact science is somehow unsavory or shady or not in the best traditions of scientific excellence.

  • An aside, since the phrase 'high risk, high impact' is getting thrown around so much: How does one define high impact? Is it merely C/N/S-publishable, or a fundamental, long-lasting, novel contribution to a field? And what is the frequency of the latter anyway?
    I think it is much easier to identify high-risk a priori than high impact. True impact is almost always only revealed by time.
    Like I said, an aside.

  • neurolover says:

    "I guess I was naive."
    Indeed, I hope that enough people read this discussion and aren't so naive. It's nice to hear that I'm not the only one (and that boys do it too)!
    "But I really resent the implication that aiming research programs for high risk and high impact science is somehow unsavory or shady or not in the best traditions of scientific excellence."
    I think that's true -- but a lot depends on the science itself. I am currently frustrated with the "high impact" publications in my field, because the limits demand something "NEW" (which, in my field, means probably not true, or at least not provable in the space of 4 pages of text) and reward burying of details and exaggeration of meaning, as well as parsing of continuous difference into categorical difference. It's frustrating, and I think that people outside of the field are starting to notice it, too -- notice that the big-deal stuff lacks substance.
    I used to work in a slightly different field where the questions were tighter, and could be answered with a more limited set of experiments and the CNS papers weren't quite so problematic.

  • Mr. Gunn says:

    I don't think anyone disagrees, but just to make sure it's clear, you won't ever get anywhere flat-out telling the editor the reviewers are wrong, no matter what reason you give. Even if it's absolutely, incontestably, provably true that the reviewer is totally wrong, you'll get nowhere arguing. You have to at least act like you're taking them seriously.
    For example, if they say, "You need to do A, B, and C, and you should delete D because it's not really that good." Take a stab at doing A, B, and C (or their equivalents)*, and consider whether or not D really is that good. If it is, add some supporting verbiage and leave the damn thing in. Maybe back off from the least-supported of your claims regarding it and see if it'll fly then. The point is to do something related to each thing mentioned so that you can then send a letter back describing how you've addressed each of the points in turn and it will look like you took them seriously even if you think they're fucking nuts.
    * You don't have to do A, B, and C exactly, either. You can do E, and show how it addresses the concerns underlying his comments.

  • Craig says:

    As mentioned above, maybe the variable is tenure. I spent over a year going back and forth with an editor at Cell on a review/revise cycle. I don't have tenure yet, so I got nervous (I stay nervous), split the paper into 2, and went to JBC and Biochemistry... done in a month. Most of the time during that year of revisions for Cell was spent providing more and more data supporting conclusions that were already sufficiently proven by the original data set. The time was NOT spent testing new ideas. In short, I don't think I am going back to CNS unless I have some muscle backing me up, i.e. a big name to twist the editor's arm.

  • freedomfries says:

    that reminds me... someone i know from france told me that the new requirement for a tenured professor position in france is one first author Nature paper. i thought that was an interesting requirement.

  • Pinko Punko says:

    PP, I don't think DM was implying exactly that. Some people are CNS without compromise, others are CNS and totally evil, amoral bastards. Still others blindly go that route not even understanding the compromises their training and research programs suffer through while they bounce around chasing elusive hot shit, and completely bypassing or even deep-sixing any incremental progress, along with post-docs' careers.
    The other sad part is that the trend now, especially for CNS journals is to relegate almost entire papers worth of data, especially controls, into supplemental material. The science suffers because important experiments that should truly be vetted by readership are marginalized in the name of larger amounts of red meat and increasingly unrealistic expectations for what a "complete story" is.

  • uhhuh says:

    i agree with pinko punko. what is up with shoving all the important details into the supplement? how about making a paper make total sense in one pdf?

  • In other words, what's more important: impact factor or time?
    Impact factor. And it's not even close. And the existence of Pubmed, Google Scholar, etc doesn't have jack shit to do with it.
    (There are very rare exceptions to this rule that relate to certain absolutely hard deadlines--tenure/promotion decisions and competing grant renewals--but they are truly rare.)
    ***********
    There is also factoring in speed to publication. If you can go slightly down in the impact factor (still high but not as high) and publish before your competition, then go for publishing first. The trade off is worth it so long as you don't go too far down the impact scale. This of course means you need to have a good ear to know what your competition is up to.

  • PhysioProf says:

    If you can go slightly down in the impact factor (still high but not as high) and publish before your competition, then go for publishing first.

    And do we want to discuss attempting to strongarm editors by leading them to believe that one of their competitor journals might have a similar paper in the pipeline (which might be true)?

  • BugDoc says:

    "And do we want to discuss attempting to strongarm editors by leading them to believe that one of their competitor journals might have a similar paper in the pipeline (which might be true)?" PP-
    Yes, let's!!!
    "Most of the time during that year of revisions for Cell was spent providing more and more data supporting conclusions that were already sufficiently proven by the original data set." Craig-
    This has been my experience also. I have had reviewers suggest some great experiments that made our papers better. However, the bulk of the suggestions seem to be of the "just do more experiments" variety without the reviewers defining why these experiments will improve the conclusions of the paper (and in many cases they don't). In one case, we were asked to redo experiments that someone else had already "published" in a Nature paper (as data not shown, haha), even though it was irrelevant to our conclusions! Fortunately, some editors are reasonable about this, but as Mr. Gunn says, one still has to make a good faith effort.

  • DrugMonkey says:

    But I really resent the implication that aiming research programs for high risk and high impact science is somehow unsavory or shady or not in the best traditions of scientific excellence.
    I can see how the whole thrust of the conversation might tend to imply that this is what I was saying. I wasn't and apologies for the inadvertent implication. In this particular comment I was trying to get at the fact that while in theory I wouldn't give a rat's patootie about the impact factor chase / game thing...I do care because it has real impact on my ability to do science the way I wish and the way I think it should be done.
    As to the larger point, PinkoPunko expresses nicely the some/all distinction.
    There is plenty of unsound crap published in lower-impact journals as well. It's just that no one finds out or gives a shit.
    We've touched on this before and clearly there are insufficient data to make strong arguments. There is little doubt that the data that are available suggest a higher fraud rate with higher profile journals. You attribute this to greater interest in "catching" it. I attribute this to motivational factors of risk/reward for those that focus on publication in those journals. Then there is the personal anecdote side, in the sense of chronic discussion of peers' data-fakery amongst the postdocs in one type of CNS publishing lab and the utter absence of it in another nonCNS-but-HighlyCited labgroup of similar size.

  • PhysioProf says:

    You attribute this to greater interest in "catching" it. I attribute this to motivational factors of risk/reward for those that focus on publication in those journals.

    I don't buy this at all. I believe that there is proportionally as much, if not more, fraudulent shit published in low-profile journals. Shady characters know that they can eke out an existence in science under some circumstances by just publishing boring fake shit that no one will ever read, cite, or attempt to replicate. I am convinced this kind of under-the-radar thing occurs as much as the "huge splash" kind of fraud that we read about in the science press.
    BTW, thanks for the apology! People are gonna think we're getting fucking soft, with all this kumbaya shit!

  • neurolover says:

    I disagree with you physioprof -- I think there's a clear relation (in all endeavors) of a reward/fraud trade off. I've now gotten old enough that I've noticed a certain trend towards enormous financial gains (savings & loan, arbitrage, energy trading, stock options . . . .) followed by a scandal. Then, everyone tries to act like a few bad apples did something beyond the pale, rather than that when the rewards become significant enough, people end up shifting their thresholds on their ethical decision making.
    You may be right that the reward/risk ratio doesn't correlate directly with high impact journals (in scientific endeavors). Say, the 50th CNS paper for a PI is probably a lot less important than a junior prof just getting a paper out at a lower-ranked journal. So, reward needs to be translated into expected value, or whatever the current parlance is for translating the absolute value of a reward into relative value.
    But, I can see a lot more people, oh, say, throwing their post-doc to the wolves, or their data away, if the reward is 100 million dollars than if it's a paltry 5000. True, a particular person may need the 5K more than someone else needs the 100M. But, other things being roughly equal, the reward itself will manipulate the way people behave.

  • whimple says:

    PP: Shady characters know that they can eke out an existence in science under some circumstances by just publishing boring fake shit that no one will ever read, cite, or attempt to replicate.
    Really? Who's funding the boring fake shit that no one will ever read, cite or attempt to replicate? Doesn't happen today. The current tight funding environment has to be pushing the bogus stuff into higher and higher IF journals.

  • bayman says:

    I disagree with you physioprof -- I think there's a clear relation (in all endeavors) of a reward/fraud trade off....
    I think DM's high-reward scenario definitely drives fraud in high impact journals. However, PP's eking out an existence on boring fake shit is also tremendously prevalent, the motivating factor being that it is so damn easy. A favorite strategy of EasyRiders - it is quicker and takes less work to get sketchy shit published in a low-tier, obscure journal than it does to get it published in CNS.
    I'd be willing to bet that whether the struggling new PI pushes to get shit published in a low- or high-tier journal depends more on where their institutional colleagues are publishing, as it sets the bar for what the PI needs to do to keep his/her job. Post-docs may be a different story, their more desperate situation and lack of job security driving them toward higher-risk submissions.

  • Pinko Punko says:

    I don't know anyone getting by just crapping in lower tier journals. Obviously these people exist. They don't in the circles I currently tread water in. What does exist in my ecosystem are groups that constantly publish sloppy work that everyone else coming later has to attempt to reproduce because the bar is higher if shit is already in the lit. This is such an arbitrary bar it is insane. With new technologies first rounds are higher profile and generally sloppier, considering the competition to publish. This takes a lot of the incentive out of doing something right in terms of wanting to use such results as the foundation for further research. It is a tough system, with plenty of flaws.
    I see myself as recognizing the benefit of higher profile publishing, but also recognizing the benefit of thoroughly producing high quality work, and given some particular circumstances the time it might take to do an experiment right with limiting resources is mutually exclusive with being competitive with a lab that has more benchpower and resources. Very few of us have the luxury of working in so isolated a niche that we can exist in a compromise free zone.
    A friend just had a wonderful study published in a CNS venue, and they fought for two years to get it published, the work not having changed much in the two years.

  • Noah Gray says:

    And one more thing- I'm mad about the ability of people to make stupid and irresponsible comments and hide behind the anonymity of peer review. Anonymous peer reviewers can't be held accountable for anything they say, no matter how silly- and that just doesn't seem right....
    I found this quite amusing...

  • JSinger says:

    I've now gotten old enough that I've noticed a certain trend towards enormous financial gains (savings & loan, arbitrage, energy trading, stock options . . . .) followed by a scandal.
    This is the same ascertainment bias as in the original claim about high-profile journals. My local pizza store was shut down for tax improprieties but you never heard about it.

  • Dr. J says:

    I found this quite amusing...
    Why's that? Because we're pseud-blogging and you find it hypocritical? That's funny, because last I checked, nobody here has the power to affect someone else's career simply by posting anonymously, no matter how dopey/useless/antagonistic the post. Reviewers, on the other hand....

  • juniorprof says:

    Dr. J, I imagine Noah finds it amusing because he is an editor -- an editor who knows the names of reviewers that do a bad job. My guess would be that Noah is as put off by bad reviews as authors are (perhaps in a slightly different way) and that he would be inclined to avoid reviewers that cannot help him do his job effectively (get quality papers into the journal and make sure the peer review improves the quality of said papers).

  • drdrA says:

    'I found this quite amusing..'
    Well, I second Dr. J. I don't know if Noah is referring to the fact that I blog without using my actual name- but I don't have the power to influence the acceptance of any papers or the funding of any grants by pseud-blogging. I can, and have, privately revealed my identity from time to time with other bloggers... but I will not do this publicly.
    There is a world of difference between protecting one's identity on the internet, where any maniac with a computer can see you, and protecting it from the very small group of authors whose paper you are entrusted to review fairly and accurately in a very controlled process.
    Secondly- I get that everyone is busy, and that there are responsible reviewers and editors out there. But being busy is not an excuse for doing a job half-ass-backwards, as my dad would say. When a reviewer turns in a three sentence review that could apply to ANY paper that crossed the desk and takes over two months to do that- and the editor passes those remarks on to the authors - this is just EMBARRASSING.

  • bill says:

    I, too, would like to hear from Noah Gray on just what's so funny.

  • Noah Gray says:

    Calm down everyone; I detect a bit of paranoia and oversensitive behavior here...
    Anyway, short, terse reviews that recommend rejection are the most useless documents in the world of publication. The only time a review should be 3 sentences or less is when you are praising the authors and recommending publication. As my voice of reason, juniorprof, stated, it is important to realize that editors (and I can only speak for the journals with professional editors, not those run by an editorial board) make more work for themselves if three reviews come back with nothing to say other than the paper sucks and the study is subsequently unceremoniously dumped. Of course the author is going to appeal!! Thus, the editor has just doubled the amount of time that now has to be spent on that submission. Going back to an earlier part of the conversation, although submission volumes are quite high, I don't really think that overworked editors are the reason for the breakdown of the review process; rather, poor reviews make the entire situation awkward and overly complicated for everyone.
    Bad reviewers are avoided. However, as you can imagine, we must "test" and "calibrate" new reviewers all the time in order to keep the pool fresh and diverse. Therefore, sometimes the newbie turns out to be a dud. drdrA, I'm not going to lie to you and tell you that you only had 2 reviews when you really had 3, but one just sucked. I'm going to send you all the comments. HOWEVER, I am going to explicitly tell you the reason for rejection in an attempt to make up for the crappy report. This report should be and will be ignored.
    I personally strive to customize as many of my rejection letters as I can. I keep track, and I'm at a little under 75%. I also receive the fewest appeals on my team, to my knowledge. Therefore, hearing explicit reasons for the rejection (without review or after review) must help the authors decide how to proceed. As PhysioProf said, if you believe that the decision was heavy-handed, or that you can actually address the crux of the concerns, or if it seemed that the editor favored a review containing inaccuracies, why wouldn't you plead your case to the editor? However, if an author has a clear explanation in front of them, they may decide to move on and save the extra 3 weeks to ?? months in the appeal process.
    The one piece of advice I give to anyone who asks (usually younger investigators) is to make the peer review/publication process as transparent as possible. It is not and doesn't have to be a black box. Talk to editors at meetings. Accept reviewing responsibilities when asked. If you lack an explanation for a decision, get one. All of these things are part of a learning process that the scientists deemed in some circles to be "successful" mastered years ago. "Big names" don't necessarily get special treatment, but their years of experience reviewing for, submitting to, and interacting with a particular journal have taught them how to produce a paper that has a good shot at being reviewed and accepted. They have learned to step back from their work and see what the editor and reviewer will see. If you can't get a perspective on your own work, find an opinion from someone you trust to be honest. Sometimes, 3-6 months of additional work will allow for a better case to be made with your study. Sometimes not.
    In particular, I also want to stress the point about reviewing. Most journals will send the referee all of the other reports after a decision on a particular paper has been made. A young investigator should study these reviews, come to grips with the decision that was made, and learn from the discrepancies between his/her report and those of the other refs. Remember, your next paper is likely to go to some of those same reviewers. Therefore, learn to understand what the field wants.
    In any case, I have strong opinions about these kinds of things and love to talk about them. So next time you see me at a meeting (sorry non-neuroscientists...), come introduce yourself and pepper away with the questions.

  • PhysioProf says:

    Calm down everyone; I detect a bit of paranoia and oversensitive behavior here...

    This kind of passive-aggressive bullshit is not necessary. You made a very oblique comment clearly intended to elicit a response, and you got that response. There is no need to be a dick about it.

  • Noah--Sorry to jump down your throat, as it appears I misinterpreted you completely.
    it is important to realize that editors (and I can only speak for the journals with professional editors, not those run by an editorial board) make more work for themselves if three reviews come back with nothing to say other than the paper sucks and the study is subsequently unceremoniously dumped
    What makes you think that editors at journals run by scientists are any less taxed by poor reviewers? I think an author would be just as likely to challenge a bad review at a scientist-run journal as she would at a non-scientist-run journal.
    Of course it's possible that at scientist-run journals, the working scientist-editors know the field a bit better and are therefore less likely to send a manuscript to a moron to begin with.

  • Jim Thomerson says:

    A true story for your amusement: the best review I ever had. "I am sorry, but I have lost the MS you sent for review. I recall reading it. I found it extremely well done, about important phenomena, and appropriate for the journal. I recommend it for immediate publication without revision."
    So there!

  • Pinko Punko says:

    Pops Punko has gotten his own papers to review before. Priceless.

  • drdrA says:

    Pinko Punko- That is hysterical. I hope you wrote YOURSELF a nice review! just joking!

  • Pinko Punko says:

    I'm Punko Junior, but the Pops I believe said "the elegance, clear thinking, and erudition of this work are only surpassed by the handsomeness of the corresponding author. This journal should be honored to publish the work."

  • drdrA,
    And one more thing- I'm mad about the ability of people to make stupid and irresponsible comments and hide behind the anonymity of peer review. Anonymous peer reviewers can't be held accountable for anything they say, no matter how silly, and that just doesn't seem right....
    I am so in agreement with you there.

  • whimple says:

    Although, the "anonymous peer reviewers" are not anonymous to the editor of the journal.
    We could consider an experiment along the lines of: the identities of the reviewers are unknown to the authors of the paper, but ARE known to each other.
    This form of "semi-anonymous" peer review might give reviewers a little more accountability, without damaging the ability of reviewers to speak their minds openly. NIH study section currently works along these lines (if "works" is the correct term).

  • drdrA says:

    Pinko Punko-
    Yes, I picked up that you are Jr. upon re-reading. Pops Punko's review is cracking me up, as is Jim Thomerson's story up there...
    Whimple- Is NIH peer review really semi-anonymous?? It seems pretty easy to tell in the study section I send to who reviews my grants, just based on the publicly available list of who is on section in that round, based on reading the reviews, and based on who has to excuse themselves because they have a conflict with someone on the grant. It's anonymous only in the sense that they don't advertise who they are- but my field is a pretty small group.

  • drdrA says:

    Yes, the spelling part of my brain isn't working this evening. Sorry...

Leave a Reply