There should be only three categories of review outcome: Accept, Reject, and Minor Revisions.
Part of the Editorial decision-making will have to be whether the experiments demanded by the reviewers are reasonable as "minor" or not. I suggest leaning towards accepting only the most minimal demands for additional experimentation as "minor revisions," and otherwise choosing to reject.
And no more of this back-and-forth with Editors about what additional work might make it acceptable for the journal as a new submission, either.
We are handing over too much power to direct and control the science to other people. It rightfully belongs within your lab and within your circle of key peers.
If J Neuro could take a stand against Supplemental Materials, they and other journals can take a stand on this.
I estimate that the greatest advantage will be the sharp decline in reviewers demanding extra work just because they can.
The second advantage will be Editors themselves having to select from what is submitted to them, instead of trying to create new papers by holding acceptances at bay until the authors throw down another year of person-work.
I think I've touched on this before but I'm still seeking clarity.
How do you review?
Let's imagine, this time, a particular journal from which you sometimes get manuscripts rejected and sometimes get acceptances.
Do you review manuscripts for that journal as you would like to be reviewed?
Or as you have perceived yourself to have been reviewed?
Do you review according to your own evolved wisdom or with an eye to what you perceive the Editorial staff of the journal desire?
Interesting comment from AnonNeuro:
Reviews are confidential, so I don't think you can share that information. Saying "I'll review it again" is the same as saying "I have insider knowledge that this paper was rejected elsewhere". Better to decline the review due to conflict.
I don't think I've ever followed this as a rule. I have definitely told editors in the past when the manuscript has not been revised from a previously critiqued version (I don't say which journal rejected the authors' work). But I can't say that I invariably mention it either. If the manuscript has been revised somewhat, why bother? If I like it and want to see it published, mentioning that I've seen a prior version elsewhere seems counterproductive.
This comment had me pondering my lack of a clear policy.
Maybe we should tell the editor upon accepting the review assignment so that they can decide if they still want our input?
This mantra, provided by all good science supervisor types including my mentors, cannot be repeated too often.
There are some caveats, of course. Sometimes, for example, the reviewer wants you to temper your justifiable interpretive claims or the Discussion points that interest you. That's the sort of concession you only need to make in response to review when the manuscript actually has a chance of acceptance.
Outrageous claims that are going to be bait for any reviewer? Sure, back those down.
A piece in Vox summarizes a study from Nextions showing that lawyers evaluate an identical legal brief more critically when they believe it was written by an African-American.
I immediately thought of scientific manuscript review and the not-unusual request to have a revision "thoroughly edited by a native English speaker". My confirmation bias suggests that this is way more common when the first author has an apparently Asian surname.
It would be interesting to see a similar balanced test for scientific writing and review, wouldn't it?
My second thought was.... Ginther. Is this not another one of the thousand cuts contributing to African-American PIs' lower success rates and need to revise the proposal extra times? Seems as though it might be.
It's not ideal for your summary statement to show up whilst at a meeting attended by many of the people on the review panel.
I was just observing that I'd far rather my grants were reviewed by someone who had just received a new grant (or fundable score) than someone who had been denied a few times recently.
It strikes me that this may not be universal logic.
Is the disgruntled-applicant reviewer going to be sympathetic? Or will he do unto you as he has been done to?
Will the recently-awarded reviewer be in a generous mood? Or will she pull up the ladder?
Reminder for when you are submitting your manuscript to a dump journal.
Many of the people involved with what you consider to be a dump journal* for your work may not see it as quite so lowly a venue as you do.
This includes the AEs and reviewers, possibly the Editor in Chief as well.
Don't patronize them.
*again, this is descriptive and not pejorative in my use. A semi-respectable place where you can get a less-than-perfect manuscript published without too much hassle**.
Probably one of the most hilarious comments I've ever received in review of one of my grants boiled down to this.
"Your colleagues have boatloads of grant money to work on Topic X. Why have you not produced more publications on Topic X with their resources? .....anyway, so therefore your new application on Topic Y sucks. "
[My recollection is that my productivity on Topic Y was mentioned by other reviewers as a strength, if anything. If not on that particular proposal, then on other ones around the same time.]
PS Brookes has posted a spirited critique of an Op-Ed offered by Michael R. Blatt, EIC of Plant Physiology.
[Blatt] then adds this beauty…
“So, whatever the shortfalls of the peer-review process, I do not accept the argument that it is failing, that it is a threat to progress, or that, as scientists, we need to retake control of our profession. Indeed, if there is a threat to the scientific process, I would argue that, unchecked, the most serious is the brand of vigilante science currently facilitated by PubPeer.”
So let’s get this straight – the problems facing science today are not: (i) a lack of funding, (ii) rampant fakery, (iii) politicians seeking to defund things they don’t like, (iv) inadequate teaching of the scientific method in schools, (v) proliferation of the blood-sucking profiteering publishing industry, (vi) an obsession with impact factor and other outdated metrics, (vii) a broken training to job pipeline in academia, (viii) insert your favorite #scipocalypse cause here.
Go read the Editorial and then the takedown.