We always seem to go through a run of manuscript rejections and then a run of accepts. I wish we could figure out how to spread these out a little bit more. Would alternation be too much trouble, world?
This is going to be another one of those posts where we mix up what should be so, what is so and what is best for the individual scientist's career.
Our good blog friend iBAM was musing about reading a self-empowerment book.
@BabyAttachMode book would say by expanding your circle of influence (helping to select reviewers, appealing rejections, etc)
— Kay Tye (@kaymtye) February 4, 2016
The career prescription here...I agree with.
It took me quite a while to realize that you can appeal when a journal editor rejects your manuscript. I thought a reject was a reject when I first started in this business. It certainly always read like one.
It turns out that you can appeal that decision.
And you should. The process does not end with the initial decision to reject your manuscript.
So what you do is, you email the Editor and you outline why the decision was a mistake, what you can do to revise the manuscript to address the major concerns, why your paper would be a fit for the journal, etc.
You may be ignored. You may be told "Nope, it is still a rejection".
OR. You may be given the opportunity to resubmit a revised version of the manuscript as if it were a new one. Which of course you could do anyway but it would be likely to get hammered down if you have not already solicited the invitation to do so from the Editor.
So you will see here that with a simple email, you have given yourself another chance to get the paper accepted. So do it.
After this we get into the fun part.
Should you appeal each and every decision? I think my natural stance is no, you don't want to get a rep as a chronic whiner. But you know what? There are probably people in science who rebut and complain about every decision. Does it work for them? I dunno. And by extension to sports, you know that phenomenon of working the ref about bad calls, hoping to get a makeup call later? That logic maybe applies here.*
I do know there are plenty of testimonials like those of Kay Tye that complaining about a rejection ended up with a published manuscript in the initially-rejecting journal.
Do appeals work the same for everyone? That is, given the same approximate merits of the case is the Editor going to respond similarly to appeals from anyone? I can't say. This is a human decision making business and I have always believed that when an editor knows you personally, they can't help but treat your submissions a little better. Similarly when an editor knows your work, your pedigree, your department, etc it probably helps your appeal gain traction.
But who knows? Maybe what helps is having a good argument on the merits. Maybe what helps is that the Editor liked the paper and slightly disagrees with the review outcome themselves.
HAHAHA, I crack myself up.
Okay so where do we end up?
First, you need to add the post-rejection appeal into your repertoire of strategies if you don't already include it. I would use it judiciously, personally, but this is something to ask your subfield colleagues about. People who publish in the journals you are targeting. Ask them how often they appeal.
Second, realize that the appeal game is going to add up over time in a person's career. If there are little personal biases on the part of Editors (which there are) then this factor is going to further amplify the insider club advantage. And when you sit and wonder why that series of papers that are no better than your own just happen to end up in that higher rank of journal well.....there may be reasons other than merit. What you choose to do with that information is up to you. I see a few options.
- Up your game so that you don't need to use those little bennies to get over the hurdle.
- Schmooze journal editors as hard as you can.
- Ignore the whole thing and keep doing your science the way you want and publishing in lower journals than your work deserves.
*not a fan of this myself but some people (including one of my kids' soccer coaches) appear to believe very strongly in this theory.
Dear Editor of Journal,
I find it interesting to review the manuscripts of ours that you have rejected on impact and quality grounds* over the past several years. We quite naturally found publication homes elsewhere for these manuscripts; this is how the system works. No harm, no foul. In none of these cases, I will note, was the manuscript radically changed in a way that would fundamentally alter the review of quality or impact as reflected in your reviewers' comments. Yet I note that these papers have been cited in excess, sometimes far in excess, of your Journal's Impact Factor. Given what we know about the skew in the citation distributions that contribute to a JIF, well, this positions our papers quite favorably within the distribution of manuscripts you chose to accept.
This suggests to me there is something very wrong with your review process insofar as it attempts to evaluate quality and predict impact.
*journal fit is another matter entirely. I am not talking about those complaints.
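The letter's point about skew can be made concrete. Here is a toy sketch with entirely made-up citation counts (none of these numbers come from any real journal): because the JIF is a mean and citation distributions are heavily right-skewed, the typical (median) paper sits well below the journal's own JIF, so a rejected paper that out-cites the JIF has outperformed most of what the journal actually accepted.

```python
import statistics

# Hypothetical citation counts for one journal's papers in a JIF window.
# A few highly cited papers drag the mean (the JIF) well above the
# citation count of the typical paper.
citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 45, 90]

jif = statistics.mean(citations)       # mean-based, like the JIF
median = statistics.median(citations)  # what the typical paper gets
below = sum(c < jif for c in citations)

print(f"mean (JIF-like): {jif:.1f}")   # 11.4
print(f"median: {median}")             # 3.0
print(f"{below} of {len(citations)} papers sit below the mean")
```

In this made-up distribution, 13 of 16 accepted papers are cited below the journal's own "JIF", which is the letter's complaint in miniature: beating the JIF means beating most of the journal's accepted output.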
For whatever reasons I was thinking of at the time I was motivated to twttr this:
Periodic reminder to publish. If you don't have manuscripts under first review by April... 2016 pub year is already slipping away from you.
— Drug Monkey (@drugmonkeyblog) February 2, 2016
What I mean by this is that somewhere on the pile of motivations you have to finish that manuscript off and get it submitted, you should have "keeping your calendar years populated".
It may not always be a big deal and it certainly pales in comparison to JIF considerations, but all else equal you want to maintain a consistent rate of output of papers. Because eventually someone will look at that in either an approving or disapproving way, depending on how your publication record looks to them.
Like it or not, one way that people will consider your consistency over the long haul is by looking at how many papers you have published in each calendar year. Published. Meaning assigned to a print issue that is dated in a particular calendar year. You cannot go back and fill these in when you notice you have let a gap develop.
If you can avoid gaps*, do so. This means that you have to have a little bit of knowledge about the typical timeline from submission of your manuscript for the first time through until the hard publication date is determined. This will vary tremendously from journal to journal and from case to case because you don't know specifically how many times you are going to have to revise and resubmit.
But you should develop some rough notion of the timeline for your typical journals. Some have long pre-print queues. Some have short ones. Some move rapidly from acceptance to print issue. Some take 18 mo or more. Some journals have priority systems for their pre-print queue and some just go in strict chronological order.
And in this context, you need to realize something very simple and clear. Published is published.
— Chris Cole (@drchriscole) February 2, 2016
Yes, mmhmm, very nice. Pre-print archives are going to save us all. Well, this nonsense does nothing for the retrospective review of your CV for publication consistency. At present the culture of scientific career evaluation in the biomedical disciplines does not pay attention to pre-print archives. It doesn't really even respect the date of first appearance online in a pre-publication journal queue. If your work goes up in 2016 but never makes it to a print article until 2017, history will cite it as 2017.
*Obviously it happens sometimes. We can't always dictate the pace of everything in terms of results, funding, trainee life-cycles, personal circumstances and whatnot. I'm just saying you should try to keep as consistent as possible. Keep the gaps as short as possible and try to look like you are compensating. An unusually high number of pubs following a gap year goes a long way, for example.
January is a great time to look at yourself in the mirror and ask what your plan is for improving your record of publication.
What are your usual hurdles that get in the way? What are the current hurdles?
What works to get you moving?
My biggest problem is me.
We're at the point in my lab where available data are not really the issue, we have many dishes cooking along in parallel at most times. Something is always ready or close to being ready to serve up.
The problem is almost always the wandering of my attention and my energy to kick something over the final step to submission.
The game I have taken to playing with myself is to see how long I can go with at least one manuscript under review. I made it something like 14 mo a few years ago. Of course I then promptly fell into another extended dry spell but....
The other game I play with myself is to see how many manuscripts we can have under review simultaneously. That is, of course, much more subject to the ebb and flow of project maturation and the review process. But if we happen to have a few stacking up, sure I'll use the extra motivation to keep my attention pegged to finishing a draft.
When all else fails there is always "We need this published in order to help get this next grant funded, aiieeeee!"
At the end of December when everyone was out of the lab on vacation the Journal of Neuroscience twitterers ran an episode of Ask Me Anything, Neuroscience. I had responded to an earlier teaser on this and asked the acting Editor in Chief of the Journal of Neuroscience the question which titles this post, figuring she should know. Obviously, I shaded the question....a little.
— Drug Monkey (@drugmonkeyblog) December 24, 2015
— Marina Picciotto (@MarinaP63) December 24, 2015
...which is fascinatingly imprecise. Particularly for an EIC who has to decide categorically what is and is not appropriate material for the Journal she Edits. If we were talking about the range of investigation covered by the presentations at the annual meeting of the Society for Neuroscience, this would be a great answer. The breadth of science at that meeting is tremendous and I can buy that it covers almost everything "to do with neurons". This is not the case for the Journal of Neuroscience. Which should probably be re-named the "Journal of Some Neuroscience but not other Neuroscience".
As you will recall, Dear Reader, I have observed on more than one occasion that as a wee graduate student trainee I realized this fact with some dismay. I was outraged! How can this type of science be okay and this other type of science not, when the only difference is the techniques involved?!??, I wondered. How can these people not see that the Emperor's New Clothes are not better, more precise or more mechanistically insightful results, they are just different levels of analysis?
Over the *cough*cough*decades this attitude has turned to bemusement, particularly as the Journal of Neuroscience's JIF has slid inexorably* down (currently 6.3) into just-barely-above-the-herd levels (25th in the Neuroscience category). Just ahead of such titles as Glia and Brain Behavior and Immunity. It is behind the Journal of Pineal Research, ffs! Yes, yes, JNeuro still punches above its JIF in reputational terms with the cognoscenti but there are many JIF-equivalent-or-better journal options. And after all, we all realize that the JIF still rules where it counts- when people aren't assessing the science from an informed perspective. So the cost to those who do that other type of science involving neurons that is not acceptable for JNeuro has lessened considerably. The gains of sneaking one into the JNeuro have likewise lessened. Better to try at a less technique-limited venue that has a higher JIF.
There was followup from the JNeuro twitter intern:
— Rogue JNeuroscience (@JNeuroscience) December 29, 2015
and a related reply from the acting EIC.
— Marina Picciotto (@MarinaP63) December 29, 2015
Also particularly amusing given the place that "shows mechanism" holds in the mind of the average bio-scientist type, most certainly including neuroscientists, these days. I'd like to see an accounting of how many J Neuro articles in a given year reasonably qualify as "New observation without mechanism". I'm betting the number is so low as to falsify this claim in any reasonable mind.
Then later there was this claim during an unrelated exchange:
@drugmonkeyblog EiCs follow the editors who follow the reviews. Very very closely. Reviewers are all of us.
— Marina Picciotto (@MarinaP63) December 24, 2015
Which I think is bizarre buck-passing for an Editor or Associate Editor of a Journal to engage in. At the least, it illustrates how and why it is bogus to claim "New observation without mechanism" is welcome-- if one only selects reviewers who will not buy this for a second then where are we? Also, I am curious if AEs use the presumption of what reviewers might say to desk-reject said manuscripts. See also, the above comments about what qualifies as "neuroscience" and whether or not certain approaches and techniques are ruled in/out at this particular journal. Speaking as a reviewer, when "appropriate for this journal" has to be recommended, I try to follow the Editorial lead and rely on what they have actually been publishing**.
In closing, I'll point out that I write this for the current version of younger-me. Those of you who aspire some day to publish in J Neuro, because you are a proud neuroscientist and proud member of the Society for Neuroscience. You who bring your posters to the Annual Meeting and then notice, chillingly, that science like yours never seems*** to get published in J Neuro. Take heart. Leave your Imposter Syndrome behind. There are many so-called "more specialized" (that's meant to be an insult when reviewers or AEs say that, btw) journals which have better JIFs. Get your work published there. Keep coming to the SfN meeting and chatting with the folks who appreciate what you do.
Keep on with the science that satisfies you.
And feel free to snicker about those people who do cell biology accidentally in neurons and call themselves neuroscientists.
*all snark aside, I do lament this. J Neuroscience is a great journal and resisted Glamming it up and JIF chasing in response to the invention of Neuron and Nature Neuroscience. It is unfortunate it is being punished for this. And of course, before the aforementioned baby Glams, it really did shine as a pinnacle for a Society published journal.
**Unless, of course, I am engaging in a rather intentional pushback along the lines of what the JNeuro EIC is suggesting, i.e., putting my marker down that I think the journal in question should be publishing a certain kind of paper.
***Yes, there will be the occasional paper that gets into a given journal. And you will think "aha, we have something very similar so let's submit!". Give it a try for sure. But don't be too bummed when you get desk-rejected. Often enough you will find out that the relationship between the editorial staff and the authors is slightly closer than you enjoy. Shrug and move on. Or, if a PI and you DGAF about your reputation at that particular journal, write a pointed inquiry to the AE to see what they say. I had one of these at a journal that rhymes with Serebral Kortex a while ago. The new editorial staff tried to slam the old editorial staff and basically said, well that would never get in anymore. I was amused. And we published that paper somewhere else and moved on. As one does.
Reference to this https://t.co/hc9YYH8Myr popped up on the Twitter recently.
So what constitutes an "observation" to you?
To me, I think I'd need the usual minimum group size, say N=8, and at least two conditions or treatments to compare to each other. This could be either a between-groups or within-subject design.
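For what it's worth, that rule of thumb (at least two conditions, N=8 per group) can be sketched as a minimal between-groups comparison. This is a toy illustration with simulated data, not a statistical recommendation; the group means, the SD, and the choice of a Welch's t statistic are my assumptions, not anything the post specifies.

```python
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

random.seed(1)
N = 8  # the rule-of-thumb minimum group size from the post
# Two hypothetical conditions in a between-groups design:
control = [random.gauss(10.0, 2.0) for _ in range(N)]
treated = [random.gauss(13.0, 2.0) for _ in range(N)]  # assumed treatment effect

t = welch_t(treated, control)
print(f"t = {t:.2f} with n = {N} per group")
```

A within-subject version of the same minimal design would instead compare paired measurements from the same N=8 subjects across the two conditions.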
I can't believe I have never blogged this issue.
— Zenbrain (@zenbrainest) December 24, 2015
Obeying the alleged word or character limits for initial submission is for suckers. It puts you at a disadvantage if you shrink down your methods or figure count and the other group isn't doing that.
Remember when it was possible to publish a four-part series on one "complete study"? https://t.co/f4wNgpl4lK
— Drug Monkey (@drugmonkeyblog) December 8, 2015
— Drug Monkey (@drugmonkeyblog) December 9, 2015
Per usual, I throw out some observation or random remembrance and then it nags at me.
I come to the realization that perhaps the kids these days actually genuinely have no idea that there is/was/can be a better way.
Like when I remind you that Science and Nature "papers" were once barely more than abstracts. With a single figure, maybe two. And that the real followup paper was in another journal. Seriously, look back in the early 70s, maybe into the 1980s. The issues are available for your perusal.
This is another example. Two cases in which the same group published (or at least prepared to publish) at least four different papers from a single project. In the first case, it looks like the same three authors were on all four, they put it in the same journal and the first authors swapped on one. In the second case, the author list was more diverse and there were three different journals. (Interestingly, report III seems to be missing. I wonder what happened there? But still, the group published several other papers around the same time and on the same rough idea- perhaps one of those was supposed to be the III article?)
This sort of thing reinforces my criticism of the way Glamour Humping has done bad things to science and careers while not really providing anything more than a sham of the "complete story" in exchange.
If you want to publish several manuscripts on a topic, with different unshared unique first-author and last-author slots, it is possible. You get to throw up far more than a single published manuscript's limited number of figures. You can elaborate on side themes. Nothing gets hidden from view in the Supplemental Materials. And presumably the speed by which some of the story emerges in published form is enhanced. Which permits other people to see and use the information earlier.
It was possible once. It is possible again.
Someone forwarded me what appears to be credible evidence that Wiley is considering taking Addiction Biology Open Access.
To the tune of $2,500 per article.
At present this title has no page charges for articles within the standard size limit.
This is interesting because Wiley purchased this title quite a while ago at a JIF that was at or below my perception of my field's dump-journal level.
They managed to march the JIF up the ranks and get it into the top position in the ISI Substance Abuse category. This, IMO, then stoked a virtuous cycle in which people submit better and better work there.
At some point in the past few years the journal went from publishing four issues per year to six. And the JIF remains atop the category.
As a business, what would you do? You build up a service until it is in high demand and then you try to cash in, that's what.
Personally I think this will kill the golden goose. It will be a slow process, however, and Wiley will make some money in the mean time.
The question is, do most competitors choose to follow suit? If so, Wiley wins big because authors will eventually have no other option. If the timing is good, Addiction Biology makes money early and then keeps on going as the leader of the pack.
All y'all Open Access wackaloons believe this is inevitable and are solidly behind Wiley's move, no doubt.
I will be fascinated to see how this one plays out.