As we discussed in two prior posts, the Society for Neuroscience has driven forward an experiment in the peer review of neuroscience manuscripts. The Neuroscience Peer Review Consortium seeks to streamline the peer review process by re-using the initial reviews of a manuscript rejected at one journal when it is re-submitted to another journal in the Consortium.
Sounds great, doesn't it?
Our initial foray into this topic was motivated by Nature Neuroscience's deliberations over whether to join the NPRC. Noah Gray launched a discussion among their editorial board and other key scientists over at their Action Potential blog on the topic of "confidential comments to the editor", an obvious consideration in the "transferred reviews" idea that motivated the formation of the Consortium. Well, apparently Nature Neuroscience has decided to join up [print editorial]:
The NPRC reduces the overall reviewing workload of the community by allowing authors to continue the initial review process when their paper moves from one consortium journal to another, once the paper has been rejected or withdrawn from the first journal. This arrangement is similar to the manuscript transfer system that has been available within the Nature family of journals for almost a decade.
I do find myself congratulating NN for deciding to discourage confidential comments to the editor from reviewers:
Only comments to the authors are transferred to the receiving journal. Confidential comments to the editors are not passed along. Thus, to ensure transparency in the review process, both at Nature Neuroscience and at other journals after the paper has been transferred, we encourage referees to include all their concerns about the paper in comments to the authors. The small amount of extra time required to word the comments diplomatically for the authors should be more than counterbalanced by the resulting improvement in the peer review process.
This is a good thing. I've never understood why confidential comments to the editor are widespread, myself.
As I pointed out in comments during prior discussion [here, here, here], I do anticipate a few unintended consequences. My analysis rests on the premise that the Consortium contains a hierarchical collection of journals with (now) NN at the pinnacle, J. Neuroscience and to some degree Biological Psychiatry at the next rank, and then a bunch of specialty or society journals at the foot. (I have not re-scrutinized the journal list closely, so there may be a few more relevant ranks or a few more journals filling out the mid ranks. My points aren't really changed by such details.) My concerns have mostly to do with the fact that I think there will be a lot of one-directional travel of rejected manuscripts, i.e., downward in Impact Factor.
First pitfall I see is the trickle-down of manuscripts conceived and formulated for the more GlamorMag style of NN, something J. Neurosci has been moving toward already. For those of us who think this style of scientific communication is not a GoodThing, and who are happy that serious journals still exist in which data-heavy, rigorously controlled studies are prioritized, well, trickle-down of GlamorMag-style manuscripts would be a BadThing.
Second pitfall lies in the monopolistic character of the Consortium. This concern depends on the empirical outcome of author decision making. Once a manuscript is rejected at, say, J. Neurosci, will authors throw up their hands and dump it to a lower-IF Consortium journal, when once they might have submitted to a journal closely competitive with J. Neurosci which is now, unfortunately, not in the Consortium? If the dump option is selected more frequently, this is going to distort the status quo of the neuroscience journal rankings. Love or hate the current IF-based ranking system for determining the "quality" of your science, throwing some extra chaos into this system seems detrimental to me, especially if it is not evaluated in this little "experiment".
Third pitfall is the behavior of the editors of the bottom tier journals. Will they start getting manuscripts submitted that they never before would have seen at their humble journal? Will the chance at a seemingly higher impact paper erode their prior adherence to the strictest standards of data "completeness" and appropriately designed and controlled studies?
As with my opinions on grant review, though it is true that I think my hypotheses are likely to be validated, there is a greater problem: namely, that factors in the process which I think are likely to be highly relevant are simply not discussed. Responses which brush aside concerns such as the above suggest that the relevant data will not be examined during the course of the experiment. This is the crux of the matter.
To go through this experiment focused entirely on the local (by-manuscript) analysis, without considering all of the possible broader (journal-level) ramifications, seems an error to me.