Nature is not at all serious about double-blind peer review

Feb 25 2015 | Published under Science Publication, Scientific Publication

The announcement for the policy is here.

Before I get into this, let me say up front that it would be a good thing if the review of scientific manuscripts could be entirely blind, meaning that the authors do not know who is editing or reviewing their paper (the latter is almost always true already) and that the editors and reviewers do not know who the authors are.

The reason is simple. Acceptance of a manuscript for publication should rest entirely on what is contained in that manuscript. It should rely in no way on the identity of the people submitting it. This is not true at present. The reputation and/or perceived power of the authors hugely influences what gets published in which journals, particularly in what are perceived as the best or most elite journals. This is a fact.

The risk is that inferior science gets accepted for publication because of who the authors are and therefore that more meritorious science does not get accepted. Even more worrisome, science that is significantly flawed or wrong may get published because of author reputation when it would have otherwise been sent back for fixing of the flaws.

We should all be most interested in making science publication as excellent as possible.

Blinding of the peer review process is a decent way to minimize biases based on author identity, so it is a good thing.

My problem is that it cannot work absent significant changes in the way academic publishing operates. Consequently, any attempt to conduct double-blinded review that does not address these significant issues is doomed to fail. And since anyone with half a brain can see the following concerns, I submit that anyone who argues this Nature initiative is a good idea is engaged in a highly cynical effort to direct attention away from certain things. Things that we might describe as the real problem.

Here are the issues I see with the proposed Nature experiment.
1) It doesn't blind their editors. Nature uses a professional editorial staff who decide whether to send a manuscript out for peer review or just to summarily reject it. They select reviewers, make interim decisions, decide whether to send subsequent revised versions to review, select new or old reviewers and decide, again, whether to accept the manuscript. These editors, being human, are subject to tremendous biases based on author identity. Their role in the process is so tremendously powerful that blinding the reviewers but not the editors to the author identity is likely to have only minimal effect.

2) This policy is opt-in. HA! This is absurd. The people who are powerful, and thus expected to benefit from their identity, will not opt in. They'd be insane to do so. The people who are not powerful (who are, as it happens, exactly the people calling for blinded review so that their work has a fair chance on its own merits) will opt in but will gain no relative advantage by doing so.

3) The scientific manuscript as we currently know it is chock-full of clues to author identity. Even if you rigorously excluded "we previously reported..." statements and managed to even out the self-citations to a nonsuspicious level (no easy task on either count), there is still the issue of scientific interest. No matter what the topic, there is going to be a betting gradient for how likely different labs are to have produced the manuscript.

4) The Nature policy mentions no back-checking on whether their blinding actually works. This is key; see the above comment about the betting gradient. It is not sufficient to put formal de-identification in place. It is necessary to check with reviewers on the real practice of the policy to determine the extent to which blinding succeeds or fails. And you cannot simply brandish a less-than-100% identification rate either. If the reviewer merely believes the paper was written by Professor Smith, then the system has already failed, because that reviewer is being affected by the aforementioned issues of reputation and power even if she is wrong about the authors. That's on the tactical, paper-by-paper front. In the longer haul, the more renowned labs are generally going to be submitting more actively to a given journal, and thus the erroneous assumption will be more likely to accrue to them anyway. (A sketch of what such back-checking might tabulate follows below.)
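
Here is a hypothetical sketch of that tabulation (the survey fields and numbers are invented for illustration). The figure that matters is how often reviewers form *any* confident guess about the authors, not just how often they guess correctly:

    # Hypothetical post-review survey: after each blinded review, ask the
    # reviewer whether they formed a confident guess about the authors,
    # and check that guess against the truth. All records are invented.
    reviews = [
        {"guessed": True,  "correct": True},
        {"guessed": True,  "correct": False},  # a wrong guess still biases
        {"guessed": False, "correct": False},
        {"guessed": True,  "correct": True},
        {"guessed": False, "correct": False},
    ]

    n = len(reviews)
    guess_rate = sum(r["guessed"] for r in reviews) / n
    correct_rate = sum(r["correct"] for r in reviews) / n

    # By the argument above, guess_rate is the real blinding-failure rate,
    # since a confident-but-wrong guess injects reputation bias too.
    print(f"formed a guess: {guess_rate:.0%}; guessed correctly: {correct_rate:.0%}")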

So. We're left with a policy that can be put in place in a formal sense. Nature can claim that they have conducted "double blind" review of manuscripts.

They will not be able to show that review is truly blinded. More critically, they will not be able to show that author reputational bias has been significantly decoupled from the entire process, given the huge input from their editorial staff.

So anything that they conclude from this will be baseless. And therefore highly counterproductive to the overall mission.

49 responses so far

  • Ryan says:

    It would be amusing/curious if people started actively posturing to be mis-identified as a big name lab ("self" citing those papers, intentionally defaulting to protocols from those labs to cite them in the methods section). There'd be fun parallels to signaling in ecology.

    Of course, with opt-in (or effective opt-in, if you allow people to say "as we previously reported" to signal "yes, we really are big-name lab") it seems unlikely, but you could imagine a few people sneaking in that way.

    For the same reason, it also seems like double-blind review might preferentially help junior faculty coming from the big-name labs.

  • chemicalbilology says:

    This was my thought too. It seems harder to even get sent out by the editors for review at these journals than it is to get through the review itself. And an "opt-in" policy is never going to be used by those who would benefit from the status quo. So nothing will change, so what's the point?

  • K99er says:

    So it's not perfect, but isn't it slightly better than what is in place now? This policy would actually be fantastic for me if I had a Nature level paper to submit right now. The reviewers would almost certainly assume it's from my mentor's lab, and give me the benefit of her prestige.

  • jmz4 says:

    It seems like all these half-hearted attempts to fix the academic journal system are just rearranging deck chairs on the Titanic. Maybe instead of ploys like this, they should put some thought into why the paper is an outmoded form of scientific communication and what they can do to help transition to a form that eliminates its worst features while preserving the best.

  • "The risk is that inferior science gets accepted for publication because of who the authors are and therefore that more meritorious science does not get accepted."

    Another problem is that less famous people have to spend more time rewriting/reworking manuscripts than the Bigfoots.

  • drugmonkey says:

    And get scooped in the meantime, MtM?

  • Spike Lee says:

    A question for you frequent reviewers out there: When asked to review a paper that is squarely in your area of expertise, how often is it work that you are already familiar with from posters, talks, chatting at the bar, etc?

    Another obstacle to double-blind review might be that the potential reviewers would -- should -- be in touch with their field enough to guess the identity of the author, even without major clues like those mentioned in point #3 above.

  • Established PI says:

    While I don't doubt some of the good intentions behind this new system, it is more a public relations move than anything else and an attempt by Nature to deflect criticism about their editorial process. I would like to think that junior investigators could potentially benefit from a process that removes attention from who is submitting the paper, but I am exceedingly pessimistic.

  • "And get scooped in the mean time MtM?"

    I would never imply such a cynical state of affairs.

  • drugmonkey says:

    an attempt by Nature to deflect criticism about their editorial process

    Of course that is what is really going on here. Of course.

  • ESI says:

    This could be very useful if instead Nature were to randomly assign one reviewer per manuscript to be blinded, and then retrospectively analyze the results to determine how author identity actually influences reviewer comments. To my knowledge this data does not exist...
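
    A minimal sketch of what ESI's proposed analysis could look like, assuming (hypothetically) that one of each manuscript's two reviewers is chosen at random to be blinded and that reviewer reports can be coded into a numeric favorability score. Everything below is invented stand-in data:

        import random
        import statistics

        random.seed(1)

        # Toy stand-in data: one (blinded, unblinded) pair of review scores
        # per manuscript on a 1-10 favorability scale. The generator bakes
        # in a +1 reputation bump for the unblinded reviewer; the empirical
        # question is whether real reports show such a gap.
        pairs = [(random.gauss(5.0, 1.5), random.gauss(6.0, 1.5))
                 for _ in range(200)]

        # Paired comparison: mean within-manuscript difference and its
        # standard error. A gap several SEs above zero would suggest that
        # knowing the authors inflates reviewer favorability.
        diffs = [unblinded - blinded for blinded, unblinded in pairs]
        mean_diff = statistics.mean(diffs)
        sem = statistics.stdev(diffs) / len(diffs) ** 0.5
        print(f"mean(unblinded - blinded) = {mean_diff:.2f} +/- {sem:.2f}")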

  • Busy says:

    Like K99er says, DM is guilty of a common fallacy when alternatives are proposed: judging them against the perfect (possibly even non-existent) solution instead of against the present state of affairs.

    I'm aware of at least one venue that went blind with all the defects above except the voluntary opt-in, which is truly ridiculous. The end result was a significant change in who the "big labs" were. Before the introduction of double-blind reviewing, the dominant places were storied Ivy League types. After the change, many young and up-and-coming labs shot right to the top, while old labs saw a significant reduction in their publication rate.

  • becca says:

    Bottom line: I'm glad they are doing this. If they see an effect under these imperfect conditions, we have to scrap the current system en masse and figure out how to get people to take off their shoes and step behind the screen (a reference to orchestra auditions). If they don't see an effect, they need a bigger intervention before they look for a new target.

  • dsks says:

    "The people who are powerful and thus expected to benefit from their identity will not opt in. They'd be insane to do so. The people who are not powerful and are, as it happens, just exactly those people who are calling for blinded review so their work will have a fair chance on its own merits will opt-in but will gain no relative advantage by doing so."

    Yeah, not good.

    OTOH, if Nature is serious in its goal here (but just being cautious about it) then it could at least keep empirical track of which submitting PIs tend to opt in versus opt out, and report any correlations with laboratory/institutional prestige (no names need be named). That, in and of itself, would make for interesting data, imho, and potentially skewer the BSD argument that their work gets published purely on the basis of its awesomeness alone (then again, it might not, who knows).

  • Catherine says:

    Commenting as someone who works as an editor for a Nature journal, but not speaking on behalf of NPG, I'd like to say:
    1. It is not our practice to allow big shots to publish with us just because of their names/reputations/past awesomeness. That is a fact. Obviously I can't speak for all editors at all journals, most of whom I don't know at all, but surely professional editors who do this are no more common than academic editors who do (actually, I would suspect they are less common, since we're decoupled from most of the negative consequences of rejecting said big shot's paper).
    2. I agree this isn't the answer to all possible failings of the peer-review/publication process. But isn't trying something better than doing nothing?
    3. Having heard in person from many people around the world how much they would like to have their paper reviewed without the referees knowing who they are, I have been waiting for several years to be able to try this. From my perspective, this is fully editorially driven and not a PR move at all.
    4. Thanks for the ideas about ways we could try to measure success here - definitely something we've already been pondering in my office.

    Catherine Goodman

  • drugmonkey says:

    It is not our practice to allow big shots to publish with us just because of their names/reputations/past awesomeness. That is a fact.

    No, it isn't. It is a lie. At the very best a convenient delusion.

    Of *course* if you are emphasizing "just because" then you are trivially correct. Nobody is suggesting it isn't all pretty damn impressive science, not at all. So yes, the work isn't being accepted in Nature ONLY because it has a big name attached to it.

    The charge that is being leveled is that being a big frigging deal scientist greases the skids for *equivalent* science. Those of us who have been in direct contact with the process in various ways see it all the damn time. I get this evidence from so many sources, from the inside (e.g., people who worked in GlamourMag offices) and from the outside (e.g., labs that publish continually in GlamourMags, including yours), over such a long period of time that I cannot credit your denials. At all.

    The entire system you have is biased. You select your reviewers from the ranks of the already-published. Your editors hobnob with BigDeal labs at meetings and even on local visits. It alters who your editors take calls from and which complaints about reviewer mistreatment they listen to. It alters who your editors listen to about which Emperor's new clothes are really special. Etc. There are many thumbs on the scale. And they all benefit the BigDeal researcher at the expense of the unknown person.

    If you are so confident of your assertion in this context, then why not get rid of the opt-in part of the blinded-review plan? Just make it all anonymous-author and be done with it.

    But isn't trying something better than doing nothing?
    Nope. It is worse than doing nothing if this ridiculous experiment gets to the end and the answer is "look, no difference in outcome". Which is inevitable, given your opt-in and refusal to blind the editors even if you manage to deidentify manuscripts.

    definitely something we've already been pondering in my office.
    There are people in experimental psychology disciplines that specialize in studying blinded human decision making and evaluating the success of such attempts. I recommend you go find a few of them and do this thing right.

  • Catherine says:

    I solicit reviews from people I know and people I don't; people who've published with us and people who haven't. Basically, I read the literature and find people working in the field/using the necessary techniques/model animals/etc. It's not complex. I take calls from anyone who wants to talk to me. I currently have a visit scheduled with someone I met last month at a conference who is a first year assistant professor. You can say these are lies if you want, but it does not make them so.

    I am not in charge of how NPG runs this experiment; I am just glad we are trying it.

  • drugmonkey says:

    You can say you are being honest and have rigorously excluded any possible source of bias and that the acceptances you hand out are entirely uncontaminated by author reputation...

    ....but it does not make it so.

  • drugmonkey says:

    A blast from the past. "A Former Nature Editor" wrote on a prior post.

    We have so much more GOOD influence over the way that science comes out in the end than you will ever know. We pick the referees who we know will request specific experiments or steer the paper/authors in a good direction. Then we get feedback that the paper and field were so much more improved as a result. In some cases we work with authors for YEARS before papers are submitted to suggest experiments and lines of thought. The difference between editors at this level and PI's essentially comes down to the fact that we do the same work behind the scenes and don't need to jump up and down taking credit for it.

    This still makes me chuckle.

  • rxnm says:

    First, if there is bias in favor of BSDs at glam journals, it certainly comes from the editorial level. The editors cull far more papers than reviewers do. There were stats somewhere that the majority (60%) of papers that go out for review at Nature are eventually accepted, while elsewhere it says only 8% of submissions are eventually published; together, those numbers would imply that editors send only about 13% of submissions out for review at all.

    With STAP, we saw this firsthand. Reviews ranged from skeptical to incredulous, but these concerns were ignored...by editors. We all do this mental adjustment almost automatically: when a young PI gets a paper in, say, Nature Neuro, it's probably a total home run; when the neighborhood BSD does, it's probably a mess. (Similar to how a high GPA from a private prep school basically means dad paid the bills on time, but is notable from a big public school honors program.)

    Second, any trainee who has moved between BSD/ILAF/prestige labs or institutions can speak directly to the enormous difference in how initial paper submissions are handled.

    Third, I have seen--with mine own eyes--emails from glam editors (not at Nature) soliciting work from BSDs and all but promising a "smooth" review process and using chummy "let's work together on this" language.

    While that is blatantly unethical and editors and PIs who collude in this way are shitbags, I generally don't think professional editors do this or that they are bad, venal people. I don't think they are swayed by the business or advertising wings of their publishers. I think they are, like everyone else, influenced on conscious and subconscious levels by fame and prestige. It would be fucking bizarre if they weren't.

  • rxnm says:

    meant to say "moved between BSD/ILAF/prestige labs or institutions and less influential places"

  • drugmonkey says:

    I generally don't think professional editors do this ..

    did you mean to add "consciously" or "on purpose"?

  • dsks says:

    "But isn't trying something better than doing nothing?"

    If it's seriously worth trying, then why on earth make it opt-in? It's not as if Nature would be the first journal to have mandatory blind* review. And why not blind the editors also? Surely you can see that once you accept that there might be bias in academic peer review then, short of imagining that journal editors are of a purer spirit than the rest of us, you must also accept that you and your colleagues are likely just as vulnerable to bias?

    * well, as blind as one can get, at least, noting the caveats illustrated above

  • dsks says:

    "We have so much more GOOD influence over the way that science comes out in the end than you will ever know. We pick the referees who we know will request specific experiments or steer the paper/authors in a good direction. Then we get feedback that the paper and field were so much more improved as a result. In some cases we work with authors for YEARS before papers are submitted to suggest experiments and lines of thought. The difference between editors at this level and PI's essentially comes down to the fact that we do the same work behind the scenes and don't need to jump up and down taking credit for it."

    Oh.

    Man.

  • Catherine says:

    If it's seriously worth trying, then why on earth make it opt-in? ... And why not blind the editors also?

    Great questions. First, as I said, I am not at all in charge of how we're implementing this, but here are a few thoughts:
    1) Are we appropriately staffed to take on the extra work of blinding papers to ourselves, and do we have appropriate technical resources to prevent editors from seeing authors' names while still allowing them to do their work? Not at this point.
    2) Will authors still submit to us if they have to write their papers in a totally different way? Not clear.
    3) Will referees have any hesitation in reviewing papers that are double-blinded? Not clear.
    4) How can we evaluate whether the system is working, when individual referees come from different scientific perspectives and/or are naturally more or less positive, so comparisons across reports are almost meaningless?
    5) Are there downsides we might not be aware of, even though two of the Nature journals have been trialing this model for some time? Sure, probably.
    I'm sure others can and have thought up more potential variables in this experiment.

    I don't claim that people who become scientific editors are magical robot unicorns who love everyone equally and have never had any cultural experiences that would leave them with unconscious (or conscious) bias. But I imagine you would also be surprised how much time we spend trying to eliminate bias in our assessments/discussions, frequently raising questions as to whether overly positive or negative comments (from other editors or referees, and vice versa - when authors' responses to referees make bad assumptions about referee expertise) might reflect bias (in all ways - big shot vs. new prof, gender, geography, etc etc.). People on the outside also don't know how many papers we reject from so-called big shots, and might tend to overlook papers we publish from 'non-big shots', so just saying 'we published ## papers from big shots' is kind of meaningless.

  • DrugMonkey says:

    2) Imma go waaaaay out on a limb here and suggest authors will put up with whatever is required to get their papers considered for Nature.

  • DJMH says:

    I guess for me the easiest way to assess this experiment is, who would complain if blinding were made mandatory, not opt-in? To me, it doesn't take any special insight to respond "Big shots."

    Therefore, the opt-in nature of this set-up favors big shots. I'm curious to hear from anyone who answers the first question differently.

  • Busy says:

    I used to be indifferent to double blind reviewing, since I didn't think the effect was that big.

    Then one sunny day the topic came up in front of a gaggle of BSDs and they were all vehemently opposed. This convinced me that they were greatly benefiting from the status quo and I've been a strong advocate of DBR since then.

  • E rook says:

    Having read job ads and been through the screening process for a professional editor gig at a glam publisher (not the flagship), I got the impression that seeking out submissions for the best work was part of the job.

  • Established PI says:

    Catherine, thanks for your willingness to respond to some of the comments on this blog. I think that the question of bias towards established investigators is not aimed solely at editors; the reviewers are the primary issue, which is what Nature is trying to get at with blinded reviews. The problem is that, by making the system optional, you create a two-tiered system whose relative success rates are uncertain. My guess is that relatively unknown investigators will use the blinded track while established PIs won't want to gamble and will hope that name recognition will give them that extra boost. The blinded track will quickly become identified internally with the untested or less successful investigator. As hard as the editors may strive to ignore that, it is likely to play into decisions.

  • rxnm says:

    "I got the impression that seeking out submissions for the best work was part of the job."

    That is very different from greasing the wheels of peer review for prestige-based in-group networks of publishers, scientists, and institutions.

  • Venkat says:

    One of the other seemingly valid criticisms of the double-blind system I have heard so far is that it would preclude reviewers from using their prior knowledge about the authors in judging their competence with regard to the methods used in the paper.

  • Alfred Wallace says:

    I completely agree that having the editors blinded would probably affect the advantage of the big guys most.

    However, in the comments section of the Nature piece announcing the new double-blind policy, an established scientist and reviewer (presumably a climate scientist) chimed in and made an argument against blinding the reviewers:

    " I write here from the perspective of a very frequent reviewer of papers in my field - reviewing is a huge burden on the community, and should be made as easy as possible...

    ...But there is a more fundamental question about whether knowledge of the research groups involved is a necessary part of the assessment of a paper. The commenters on here obviously think not. However, I know from 30 years of experience that some labs have mastered a particular analysis, while others struggle with it. I am therefore going to ask different questions of a new lab purporting to carry out an analysis than I would of the established ones. Similarly I know that some labs understand the foibles and caveats attached to particular climate models, while new entrants may not be aware of them, and may unwittingly make simple errors. I therefore need to ask questions of the latter that would simply be patronising to the former..."

    Of course, he is actually making a great case for why blinding reviewers is a good thing.

  • qaz says:

    If the reviewers are the problem, then it should be similarly easier for BSDs to publish weak work in society journals than for unknowns to do so. That does not seem to be the case.

    In my experience, the reason that it is easier for BSDs to publish in GlamourMags is that (1) they are more likely to realize they need to call the editors to answer "rejections", and (2) the editors are more likely to take their calls. It is certainly only anecdata, but my experience is that the more established a relationship a PI has with an editor, the more likely the editor is to give the PI a chance to respond to criticism. I do not think reviewers are less likely to criticize a BSD than an unknown.

    Reviewers of journal papers (as in study section) are more likely to give an established lab a pass on a technique that they trust the lab on. ("I know they didn't give all the details of how they did [will do] technique Y, but they know how to do that technique. I'm sure they did it [will do it] right.") But in my experience, this is more about what techniques are known to be established in a lab than about the BSDness of the PI.

  • dsks says:

    "One of the other seemingly valid criticisms of the double blind system I have heard so far is that it would preclude reviewers from using their prior knowledge about the authors in judging their competence with regards to the methods used in the paper."

    That's not a criticism, Venkat. It is absolutely not a good thing to encourage reviewers to review manuscripts based on something as soft and subjective as their a priori opinion of this or that lab's competence or trustworthiness. It's exactly this sort of judgment that is extremely vulnerable to biases, conscious and otherwise, that are not necessarily related to competence at all, and it is a judgment against which the submitting PIs cannot defend themselves. The scientist Alfred Wallace quotes above should be ashamed of themselves if they have been holding junior scientists to a higher standard simply based on their "feeling" that these whippersnappers can't possibly be as well versed in the necessary techniques as the oldies (although, God knows where the idiot thinks these "new entrants" learned those techniques if not from working in some established lab where, incidentally, they probably did the vast majority of the hands-on bench work in the first place!).

    The task of the reviewer is to evaluate the research as presented in the manuscript. If the charge of incompetence or inappropriate experimental design is to be leveled, there should be evidence of it within the manuscript or accompanying material. If fraud is suspected, again evidence must be presented, to the editor if not the authors themselves.

    It is outrageous for any scientist to believe they have a right to affect the likelihood of a colleague's publication success simply on the basis of feelings and vague anecdotes. One suspects we all know this, because I've never yet seen a review in which a reviewer includes in their comments to the authors something like, "Well, their methods as described are certainly appropriate, but I just can't bring myself to believe that such a junior laboratory could have conducted them with all the necessary pitfalls and caveats in mind. I don't think their data can be trusted."

    If such a statement is generally not considered appropriate in a reviewer's comments to the authors (as opposed to a shifty secret email to the editor), then the sentiment it expresses has no business determining the success or failure of publication, imho.

  • Busy says:

    >> I do not think reviewers are less likely to criticize a BSD than an unknown.

    Oh, I've seen this many times. One is less likely to question a person who gets things right time and time again than some unknown researcher. From a Bayesian statistical perspective this is the right thing to do. However, from a fairness, judge-the-ideas-not-the-person perspective, it is the wrong thing to do.
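
    To put numbers on that Bayesian point, a minimal worked example (all probabilities invented for illustration). The same manuscript, passing the same checks, ends up trusted very differently depending only on the reviewer's prior about the lab:

        # Bayes' rule: P(result is sound | manuscript passes the reviewer's
        # checks). The prior encodes the reviewer's reputation-based belief
        # that the lab's work is sound; all numbers here are invented.
        def posterior_sound(prior, p_pass_if_sound=0.9, p_pass_if_flawed=0.4):
            evidence = p_pass_if_sound * prior + p_pass_if_flawed * (1 - prior)
            return p_pass_if_sound * prior / evidence

        print(posterior_sound(prior=0.9))  # famous lab:  ~0.95
        print(posterior_sound(prior=0.5))  # unknown lab: ~0.69

    Statistically coherent, which is Busy's point; the fairness objection is that the prior, not the manuscript, is doing much of the work.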

  • Catherine says:

    @Established PI – you’re right, that’s certainly a danger of the opt-in system, and if there’s enough (not sure how the decision makers will quantify that) uptake by authors and referees, then hopefully we’ll switch over at some point to a fully mandatory system. I don’t know whether that’s already in discussion as an obvious next step or we’re really waiting to see what happens. In the meantime, the community can have a say in this – the more people talk about ‘we must do this, it’s the best way, etc.’ in department meetings and conferences, the more peer pressure there is for everyone to take part. Referees can comment to the authors who have not blinded their paper that they should consider it for next time. Big shots who are already in favor of exploring other peer review options/bucking established methods (or less established people who don’t feel nervous about this) should challenge other big shots to go double-blind and make it clear to people that they’re not just skating by on their name. The more it becomes part of the culture, the easier it will be to make the switch (and then perhaps not just at NPG but also elsewhere).

  • drugmonkey says:

    Given the position Nature enjoys in the ecosphere of science publishing, I am having difficulty imagining what stops it from doing whatever the heck it wants wrt peer review. So clearly the hesitation is from internal motivations, not external ones.

  • Busy says:

    Yes, they had the right idea but chickened out.

  • DJMH says:

    Big shots ...should challenge other big shots to go double-blind and make it clear to people that they’re not just skating by on their name.

    I am 99% sure that big shots believe their papers get in because QUALITY, not because they're skating by on their name, so I doubt this is going to rally the generals.

  • Busy says:

    @DJMH

    Of course, just like every one of my accepted papers was because the reviewers were incredibly wise and every one of my rejected papers was because the reviewer was an idiot. This is just human nature.

    This is why we need to build checks and balances into the system, such as DBR. Time and time again it has been shown that we let conscious and subconscious biases creep into our judgments, so the system must account for them.

    Human foibles are, after all, the reason behind having 2 or 3 reviewers per paper. The error rate is just too high to leave the decision to one person alone.

  • jmz4 says:

    "...that [soliciting and easing BSD manuscripts] is blatantly unethical and editors and PIs who collude in this way are shitbags.."
    -It really isn't. Nature is one journal of many, and they can print whatever papers they want. They don't owe you anything. We're the idiots that have elevated them to their ludicrous importance in terms of career advancement and funding because we have no good metrics for quality or impact of a given piece of science.
    If they're trying to improve their in-house mechanism for reducing biases, that's fine and noble even if, as DM points out, it isn't likely to work. If they're doing it to deflect criticism, well hey, that's just another dirtbag publisher doing dirtbag things. The problem is that we feel we need to criticize them instead of, you know, just not using them.

  • rxnm says:

    jmz4, I totally agree with you about both the "we're idiots" and the "NPG can publish whatever they want" parts, but that's not inconsistent with my having a low opinion of glam editors and PIs who collude. And it is unethical because they--obviously--pretend otherwise.

  • becca says:

    Ok guys. I love you all and everything but this high and mighty "IF there is bias, it comes from *that guy over there*" is malarkey. Pernicious malarkey.

    Editors, professional and otherwise, are biased. Referees are biased. PIs evaluating their own lab's work are biased, which is why we have the system in the first place.

    Obviously, if it's true that editors refuse more articles than reviewers, reducing editorial biases will play a bigger role than reducing referee biases (unless editors tend to be less malleable than reviewers for some other reason).

    Catherine-
    "1) Are we appropriately staffed to take on the extra work of blinding papers to ourselves and do we have appropriate technical resources to prevent editors from seeing authors' names while still allowing them to do their work?"
    Can you guess what a lawyer would say about "are we appropriately staffed to blind this information" if I were a hospital and tried to wriggle around HIPAA? If your technical system is so primitive, how can people really be sure that their unpublished work is kept confidential and that other competitor labs can't hack in?

    "2) Will authors still submit to us if they have to write their papers in a totally different way?"
    LOL. As long as job offers hang on publishing with you? You're kidding, right? I'll bet you a doughnut you could insist everyone put an Arbitrary Llama photo in every article (and pay for color charges!) and you'd still have ample quality science to fill the pages. The *interesting* experiment would be to see if doing so would have any negative effect on your Impact Factor. Given the internet should increase the meme-ability of Arbitrary Llama papers, I'd actually speculate it'd go up.

    "3) Will referees have any hesitation in reviewing papers that are double blinded?"
    Maybe the referees that don't like it should be noted for other reasons.

    "4) How can we evaluate if the system is working, when individual referees come from different scientific perspectives and/or are naturally more or less positive, so comparisons across reports are almost meaningless?"
    Uhm, give each editor some papers blinded and some papers not blinded? Or just compare their historical record with their record once you blind the system?

    "5) Are there downsides we might not be aware of, even though two of the Nature journals have been trialing this model for some time? Sure, probably. I'm sure others can and have thought up more potential variables in this experiment."
    Experiments have control groups. Opt-in is not a good experimental design.

    Heck, I don't even think you need to go so far as to blind referees or editors. I bet you could see an impact simply by pasting this website on top of every article before editors or referees read anything: http://en.wikipedia.org/wiki/Familiarity_heuristic

  • Catherine says:

    @becca,

    Please see above re my comment that we are not magical robot unicorns. Of course all humans have bias.

    Can you guess what a lawyer would say about "are we appropriately staffed to blind this information" if I were a hospital and tried to wriggle around HIPAA?

    This is a false comparison. Our journals were set up, and editorial teams/assistants/production teams, etc., hired, on the basis of the needs of single-blind peer review. If we were to switch to fully mandatory double-blind, and editorially blind, we would have extra work to do: checking that manuscripts are appropriately blinded (which could not be done by the main editor, because if the manuscript was not blinded, they would know who the authors are); creating lists of potential referees who could review the paper if they aren't authors or collaborators/past students/advisors/close friends of the authors (we obviously check all those things now, but don't have to create a formal document to pass off to a second party); and having that second party enlist referees (you can't pass the edited list back to the original editor because that could unblind the authors), which requires someone who knows not only who the authors are but all the other categories above, some of which are not easy to track down. So when I say we are not appropriately staffed, what I mean is that we do not have enough manpower to keep a more complicated system running at the same speed as the current system [insert inevitable complaints about the system being outrageously slow already].

    I'll bet you a doughnut you could insist everyone put an Arbitrary Llama photo in every article (and pay for color charges!) and you'd still have ample quality science to fill the pages.

    I do like llamas.

    Maybe the referees that don't like it should be noted for other reasons.

    Certainly some of the people who refuse to review in a double-blind system will be doing it because they are lazy referees who only want to give the go-ahead to work from their other big-shot friends and don’t care about providing a careful, thoughtful analysis of the paper in front of them (and those people should be noted; indeed, many of them already probably have been noted because they provide obviously inferior reports in general). I can also imagine some people just not knowing what to expect, and feeling like learning a new system (even though only minimally different in practice) is too much, and not wanting to be bothered. Who knows?

    Uhm, give each editor some papers blinded and some papers not blinded? Or just compare their historical record with record once you blind the system?

    Sure, maybe. In practice, I presume we will look at the data analyzed 25 different ways. Identifying which of those is meaningful and not stochastic/reflecting other trends in the community is what’s tough.

    Opt-in is not a good experimental design.

    I guess this goes back to the original question of whether it’s better to do something than nothing, on which we may not agree. As an aside, it would be interesting to add a question to our submission page, right next to the ‘do you want to participate in double-blind peer review’ question, which would be ‘do you consider yourself a big shot?’ Then we could separately poll all scientists to create a list of all ‘big shots’ (I’m sure there would be magical llamas involved as pollsters) and see how the lists compare.

    I bet you could see an impact simply by pasting this website on top of every article before editors or referees read anything: http://en.wikipedia.org/wiki/Familiarity_heuristic

    I’ve been wondering if just taking the author names off the first page would also go a long way in helping people form an opinion about that specific work in isolation. Lots of things that could be tested!

  • Mike says:

    Nature and its sister journals are corrupting scientific research, alas. I have decided not to submit to Nature, nor to cite any paper published there.
