Did you read this fascinating bit of....whatever...from Nature? I picked this up from writedit's thread. The title and subtitles are, I kid you not, this.
Nature's choices: Exploding the myths surrounding how and why we select our research papers.
You might almost think they had read my recent post on the business of spin. Apparently they think they are taking some unfair knocks about their process for publishing papers and want to spin the story back around to their liking. Fair enough.
I don't buy their argument, so I'll try my hand at spinning it back my way.
Now, what are these evil myths you might ask? They have a list.
One myth that never seems to die is that Nature's editors seek to inflate the journal's impact factor by sifting through submitted papers (some 16,000 last year) in search of those that promise a high citation rate. We don't.
Right. Sure you don't. You don't have to* "sift through submitted papers" to satisfy your shared interest with other GlamourPubs. Every bloody thing you do is directed at this goal! From defining the acceptable article as that which is the latest and greatest, to prioritizing first over best, to schmoozing PIs, fields and techniques which have a recent track record of astronomical citations, to engaging in co-publication shenanigans.....yeah. GlamourMags, of which Nature is unmistakably one, all seek to inflate their impact factor. Of course nobody can lay a glove on them with an accusation of a specific practice like sitting around the editorial table writing down an expected citation value. It doesn't work like that. This doesn't mean that the substance of the critique isn't valid though. Doesn't mean that editors aren't steeped through and through in policies and decision making that are designed to select papers for publication which will be cited frequently. So no, sorry, your protestations aren't going to do much to dispel the 'myth' that your journal is motivated to publish papers which you expect will result in high citation numbers.
But wait, they have data which will blow this idea clean out of the water...
Indeed, the papers we publish with citations in the tens greatly outnumber those in the 100s, although it is the latter that dominate our impact factor. We are proud of our full spectrum.
HAHAHA! So because you can't predict perfectly, this is evidence that you aren't making the predictions in the first place? Or since you have a broad mission across fields with different citation practices / rates / timelines, and since "tens" might be the very top of some fields, we're supposed to believe you don't know this? And oh, just btw, how hard do you work to make public the entire distribution of citations to show us just how proud you are of your "full spectrum"?
On to the next howler.
Another long-standing myth is that we allow one negative referee to determine the rejection of a paper. On the contrary, there were several occasions last year when all the referees were underwhelmed by a paper, yet we published it on the basis of our own estimation of its worth. ... But we make the final call on the basis of criteria such as the paper's depth of mechanistic insight, or its value as a data resource or in enabling applications of an innovative technique.
I think this is a straw argument and a straw defense. Of course any disgruntled rejected author is going to view one negative and two seemingly positive reviews as evidence that the editors should have accepted their paper! This happens all up and down the taxonomy of scientific journals. The only knock would be whether Nature does this at a different (higher) rate than other journals. This defense does not address the real issue, instead rebutting the easy scenario of "we did overturn a negative review that once, so there". Pfagh. [As always, I'll leave it as an exercise for the reader to determine if arguing for definitive proof from single examples, instead of plural data, is discordant with what Nature seems to consider good science *cough*representative gel*cough*error bars*cough*cough*]. How often do you do it? How often do real journals do it?
Considering this issue, we come to the next obvious question, which is whether some negative reviews are more equal than others. This may be getting closer to the truth of this particular myth. After all, the Nature editors "spend several weeks a year in scientific meetings and labs", so when they assert
our decisions are not influenced by the identity or location of any author... we commonly reject papers whose authors happen to include distinguished or 'hot' scientists...another myth is that we rely on a small number of privileged referees in any given discipline. In fact, we used nearly 5,400 referees last year,
it does not really tell us anything about what happens with close calls or highly contested reviews. Is the cadre of hot scientists listened to more regularly in those cases? What happens when a paper is initially rejected (or gets a devastating request for unending additional data)? Do they take the phone calls of the distinguished scientists more frequently? Are you more likely to accept their revisions or resubmissions? And in any case, what a tired defense in this continued mode of throwing up irrelevant chaff. Who cares that you used 5,400 referees. Who cares that you occasionally reject the papers of a Nobel Laureate or your top ten most-cited PIs ever? The question is whether on the whole, statistically, considering your entire publication approach...is there an edge? 10%? 20%? A 50% improvement in acceptance rate? Personally, I doubt they even bother to look at their data, and they sure haven't presented even a taste of a real analysis here...
Myths about journals will continue to proliferate.
Yes they will. Including the myths about the "best" science perpetuated in your own interest. As with all myths, there is often a lot of truth behind them. It can take more than a trite denial of the "nuh-uh" variety to deflate the ones that are untrue and actively harmful. I appreciate that they feel a low-cost, low-effort martyred denial helps with the spin game. It sure doesn't get down into the truth of the matter, though.
Since I'm all about productive suggestions, let me end by reacting to the final statement:
We can only attempt to ensure that the processes characterized above remain as robust and objective as possible, in our perpetual quest to deliver to our readers the best science that we can muster.
Objective? Okay, I can help out there.
First, Numero Uno- stop with the interpersonal interactions about manuscripts. No informal email or phone inquiries about editorial interest from the authors and no soliciting papers on the part of editors. No more conference schmoozing or spending time in labs. No arguing on the phone or in email for reconsideration after rejection. Everything done by the book of the submission process which is open to all.
Second- look at the reviewer data. How many reviews are coming from the same lab group or reviewer? In mixed-review conflict situations, whose opinion coincides with the editorial decision? How often is it the junior, unknown, third world institution scientist out of these fabled 5,400 referees and how many times is it the "distinguished or hot scientist" that sways the editor?
Third- get real about subtle misconduct timelines and stop pretending you don't know anything about it. You *know* when you've sent something to Group A to review, they sit on it, ask for more experiments and, surprise, surprise, have a hurried draft to you juuuuuuust in time for you to decide to co-publish. Or even if this stuff goes on between GlamourMagz...would it really kill you to pick up the phone and call your counterpart once a raging author has leveled all sorts of accusations? Wouldn't you all want to tamp down that whole rush-to-first competitive stuff? Since, you know, you are after the best possible science and not just trying to beat your GlamourMag competitors to the get.
Fourth- you know, maybe it IS time to get real about blinded review. I tend to be skeptical because I think the radical restructuring of the typical scientific paper that would be required for blinded review, i.e., totally de-citing it, would break something important: the link to other scientific work. But I thought to myself, "Self, you know the manuscript submitted to a GlamourMag already doesn't look very close to the published article (in startling contrast to real journals), does it? So heck, just go ahead and de-cite that puppy for the review and we can put that back in at some later stage, after the heavy lifting of review is complete".
Let me end on one final counter-spin.
The supposed "myths" of the process of getting published in a GlamourMag journal could be just sour grapes from those authors who have been rejected or from those subdisciplines that will never publish in such journals. Could be. I think that is what this Nature editorial would like you to believe and to infer that these "myths" are totally unfounded. But what the spin meisters at GlamourMagz fail to remember is that science can be a very small and very well connected business at times. Laboratories that publish frequently in their pages tend to be very large, meaning a whole lot of people are right there at the primary point of interaction. People who have friends, neighbors, relatives and the like who are also in the sciences. Lots of trainees who go on to other labs. Or even jobs as Glamour Mag editors. Not all of these latter drink the KoolAde. Some of them tell tales out of school.
Most of these people don't have any particular reason to lie, or even embellish all that much, to their friends. OTOH, whoever is writing an official Editorial offering from one of the GlamourMagz does have a clear interest in.....well, let us just leave it at spin. So I'm going to need something a little better than your tone of self-righteous woundedness to get on your side, 'k?
* Because you use negotiation with ISI over what counts as citable matter and what doesn't instead!