The Journal of Neurophysiology is reporting an analysis of peer review outcomes for a sample of manuscripts submitted in the first half of 2007. Major kudos to them for being concerned enough to conduct such a self-analysis.
The data set comprised 713 submissions. Of these, 7 were rejected by the Associate Editors without review and 12 were withdrawn by the authors following one or more rounds of review. At JN, Associate Editors make the final decisions in the review process; there is no mandatory consultation with the Chief Editor.
The data set consisted of the following entries for each manuscript: first author gender, last author gender, Associate Editor gender, referee gender (for each referee), first decision score, and final decision (accept or reject). Gender determination for authors and referees was confirmed by photographic web search. The small group of transgendered scientists known to us was scored according to their self-identified gender rather than their chromosomal sex. Of the 713 submissions, 13 were single-author papers, and we scored those by entering the single author as both first and last.
Editor Linden was kind enough to send me a note about the editorial so I'm assuming he won't mind too much if I give away the punchline (which is behind a paywall, I think).
Of the 713 submissions in the data set, there were 191 submitted with women as first authors. These received a first decision score of 3.84 ± 0.05 (mean ± SE) and a final accept rate of 43.4%. The 522 submissions with men as first authors received similar evaluations: a first decision score of 3.83 ± 0.05 and a final accept rate of 42.5%. Submissions with women as last authors numbered 120 and they received a first decision score of 3.87 ± 0.10 and a final accept rate of 42.5%, whereas the 593 submissions with men as last authors had a first decision score of 3.82 ± 0.04 and a final accept rate of 42.8%. Not much difference to speak of.
Lane and Linden conclude with an invitation:
There are many possible ways to parse and analyze this data set. You may download it in either Excel spreadsheet or tab-delimited text form to perform your own analysis using the "Supplementary Data" link on-line. Furthermore, you are encouraged to comment on this editorial or post your own analysis of the data (statistical analyses, which we have avoided, are welcome) in a moderated forum using the "Submit a Response" feature on the on-line article page.
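Taking them up on that, here's a quick sketch of the sort of test one might run. Note the caveat: since the raw spreadsheet isn't reproduced here, the accept counts are back-calculated from the percentages reported above, so this is an approximation against summary statistics, not an analysis of the actual data.

```python
import math

# Reported in the editorial: 191 submissions with women as first authors
# (43.4% accepted) vs. 522 with men as first authors (42.5% accepted).
# Accept counts below are approximations derived from these percentages.
n_f, p_f = 191, 0.434
n_m, p_m = 522, 0.425

# Two-proportion z-test: pooled accept rate under the null hypothesis
# that first-author gender makes no difference to acceptance.
p_pool = (n_f * p_f + n_m * p_m) / (n_f + n_m)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_f + 1 / n_m))
z = (p_f - p_m) / se

# First decision scores: 3.84 +/- 0.05 (SE) vs. 3.83 +/- 0.05 (SE).
# A rough two-sample z from the reported means and standard errors.
z_score = (3.84 - 3.83) / math.sqrt(0.05 ** 2 + 0.05 ** 2)

print(f"accept-rate z = {z:.2f}")   # far inside +/-1.96
print(f"decision-score z = {z_score:.2f}")
```

Both statistics land well inside the conventional +/-1.96 cutoff, which is just the formal version of the eyeball conclusion: no detectable first-author gender difference in either first decisions or final acceptance.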
Nice. I'm happy to see an editorial team that cares to know their own performance on such measures. Presumably, had differences been found, they would have sought to improve for the future. One hopes that they are also able to put an easy tracking system in place, although I suppose the only way to do that is self-identification of gender during submission. And authors might be leery of that, even if the journal pointed to this editorial as its reason.
What think you? Would a check box for first and last author gender be a bit off-putting? How about if it had a little help link explaining that they were keeping oversight of possible bias in review and acceptance? Perhaps if there were a way to keep it under cover until decisions had been made? (Hmm, that makes me wonder about the outcomes when gender was readily detectable from the names versus when it wasn't.)