Archive for the 'Scientific Publication' category

Careful with your manuscript edits, folks

Via Twitter and Retraction Watch, some hilarity that ended up in the published version of a paper*.

Although association preferences documented in our study theoretically could be a consequence of either mating or shoaling preferences in the different female groups investigated (should we cite the crappy Gabor paper here?), shoaling preferences are unlikely drivers of the documented patterns both because of evidence from previous research and inconsistencies with a priori predictions.

Careful what sorts of editorial manuscript comments you let slip through, people.

*Apparently the authors are trying to correct the record, so it may not last in the official version at the journal.

9 responses so far

Small happy moments

I love it when the reviewers really GET the paper, what we are trying to do and why it is important!

10 responses so far

Once again, it is not pre-publication peer review that is your problem

The perennial discussion arose on the Twitts yesterday, probably sparked by renewed ranting from dear old @mbeisen.

For the uninitiated, a brief review of the components that go into the pre-publication approval of a scientific manuscript. In general, authors select a journal and submit a manuscript that they feel is ready for publication*. At this point an Editor (usually there is one Editor in Chief and a handful or so of sub-Editors usually referred to as Associate Editors; AEs) gives it a look and decides whether to 1) accept it immediately, 2) reject it immediately or 3) send it to peer scientists for review. The first option is exceptionally rare. The second option depends on both the obvious "fit" of the manuscript for the journal in terms of topic (e.g., "Mathematical Proof of Newton's Fifth and Sixth Laws" submitted to "Journal of Transgender Sociology") and some more rarefied considerations not immediately obvious to the home viewer.
Continue Reading »

22 responses so far

Datahound on productivity

This final figure from Datahound's post on K99/R00 recipients who have managed to win R01 funding is fascinating to me. This is a plot of individual investigators, matching their number of published papers against a weighted sum of publication. The weighting is for the number of authors on each paper as follows: "One way to correct for the influence of an increased number of authors on the number of publications is to weight each publication by 1/(number of authors) (as was suggested by a comment on Twitter). In this scenario, a paper with two authors would be worth 1/2 while a paper with 10 authors would be worth 1/10."

Applying this adjustment tightens the authors/papers relationship from a correlation coefficient of 0.47 to 0.83.
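The weighting itself is simple to compute; here is a minimal sketch using hypothetical author counts for illustration, not Datahound's actual data:

```python
def weighted_pubs(author_counts):
    """Sum of 1/(number of authors) over an investigator's papers,
    per the weighting scheme quoted above."""
    return sum(1 / n for n in author_counts)

# Hypothetical investigator with papers having 2, 10, and 4 authors:
papers = [2, 10, 4]
raw_count = len(papers)            # unweighted publication count: 3
weighted = weighted_pubs(papers)   # 1/2 + 1/10 + 1/4 = 0.85
```

So a ten-author paper contributes only a tenth of a "paper credit," which is exactly the dilution the post argues that high-author-count labs overcome anyway.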

Ultimately this post shows pretty emphatically that when you operate in a subfield or niche or laboratory that tends to publish papers with a lot of authors, you get more author credits. This even survives the diluting effect of dividing each paper by the number of authors on it. There are undoubtedly many implications.

I think the relationship tends to argue that increasing the author number is not a reflection of the so-called courtesy or guest authorships that seem to bother a lot of people in science. If you get more papers produced, even when you divide by the number of authors on each paper, then this tends to suggest that authors are contributing additional science. The scatter plots even seem to show a fairly linear relationship, so we can't argue that it tails off after some arbitrary cutoff of author numbers.

Another implication is for the purely personal. If we can generate more plots like this one across subfields or across PI characteristics (there may be something odd about the K99/R00 population of investigators for example), there may be a productivity line against which to compare ourselves. Do we (or the candidate) have more or fewer publications than would be predicted from the average number of authors? Does this suggest that you can identify slackers from larger labs (that happen to have a lot of pubs) and hard chargers from smaller labs (that have fewer total pubs, but excel against the expected value)?

15 responses so far

Pet Peeve: "the literature"

One precious tic of academic writing I implore you to avoid is "the literature".

"Novel contribution to the literature..."

"Unknown in the literature..."

"Fill a gap in the literature..."

You know what I mean.

The impression you create is that this is some silly self-referential game with only internal measures of importance.

Whether this is how you see science or not.....avoid creating this impression.

Talk about knowledge or understanding instead.

23 responses so far

SfN's new eNeuro journal will attempt double blind peer review

From the author guidelines:

eNeuro uses a double-blind review process, which means the identities of both the authors and reviewers are concealed throughout the review process. In order to facilitate this, authors must ensure their manuscripts are prepared in a way that does not reveal their identity.

And how do they plan to accomplish this feat?

Eliminate author names and contact information from anyplace in the paper. See Title page for more information.

Make sure to use the third person to refer to personal work, e.g., replace any phrases like 'as we have shown before' with 'has been shown before (Anonymous, 2007)'.

Make sure that the materials and methods section does not refer to personal work. Do not include statements such as “using the method described in (XXX, 2007).” See Materials and Methods for more information.

Ensure that figures do not contain any affiliation-related identifier.

Depersonalize the work by using anonymous text where necessary. Do not include statements such as “as we have reported before”.

Remove self-citations and citations to unpublished work.

Do not eliminate essential self-references or other references, but limit self-references only to papers that are relevant for those reviewing the submitted paper.

Remove references to funding sources.

I will be fascinated to see what procedures they have in place to determine if the blinding is actually working.

Will reviewers, asked for their top five guesses as to the identity of the group submitting the manuscript, do better than chance?

Will identification depend on the fame and status (and productivity) of the group submitting the paper?

Will it correlate with relatedness of scientific expertise?

What fraction of authors are identified all the time versus never?

Somehow, I suspect the staff of eNeuro will not really be interested in testing their assumptions.

23 responses so far

There is no "filter problem" in science


It is your job as a scientist to read the literature, keep abreast of findings of interest and integrate this knowledge with your own work.

We have amazing tools for doing so that were not available in times past, and everything gets fantastically better all the time.

If you are a PI you even have minions to help you! And colleagues! And manuscripts and grants to review which catch you up.

So I ask you, people who spout off about the "filter" problem.....

What IS the nature of this problem? How does it affect your working day?

Since most of you deploy this in the context of wanting fewer papers to be published, is fewer better? What is supposed to disappear from your view?

The stuff that you happen not to be interested in?

32 responses so far

Repost: A nonpology for my Glamour hatred. And for PP.

I wrote this a while ago. Seems worth reposting for new readers:

I really should apologize to my readers who get their feelings hurt when 1) I bash GlamourMag science and 2) CPP bashes society journal level science. I just couldn't figure out how to make it something other than a nonpology. So the nonpology version is, sorry dudes, sorry that your feelings are hurt if there is some implication that you are a trivial fame-chasing, probably data faking GlamourHound. Also, sorry if the ranting that I trigger from certain commenters has the effect of making you feel as though you are a trivial, meaningless speedbump who is wasting NIH dollars better spent on RealScientists who do RealGrandeWorkEleven. The fact is, CPP and I are in relatively comfortable situations compared with many of our readers. It is no secret that we have jobs and grant funding. Although it is true that both of us are not above making an exaggerated point for dramatic discussion-encouraging purposes, it is probably no surprise that we come from distinctly different points of view ForRealz on this particular issue. Speaking only for myself in this case, I've been around long enough and enjoyed enough of what I consider to be success in what I want to do as a scientist that it tends to insulate me against criticism. I get that this is not true for all of you. If my intent in raising these issues (i.e., to show that the dominant meme is not reflective of the only way to have a career) backfires for some of you, I do regret that.

One response so far

Scientific peer review is not broken, but your Glamour humping ways are

I have recently had a not-atypical publishing experience for me. Submitted a manuscript, got a set of comments back in about four weeks. Comments were informed, pointed a finger at some weak points in the paper but did not go all nonlinear about what else they'd like to see or whinge about mechanism or talk about how I could really increase the "likely impact". The AE gave a decision of minor revisions. We made them and resubmitted. The AE accepted the paper.


The manuscript had been previously rejected from somewhere else. And we'd revised the manuscript according to those prior comments as best we could. I assume that made the subsequent submission go smoother but it is not impossible we simply would have received major revisions for the original version.

Either way, the process went as I think it should.

This brings me around to the folks who think that peer review of manuscripts is irretrievably broken and needs to be replaced with something NEW!!!!11!!!.

Try working in the normal scientific world for a while. Say, four years. Submit to regular journals edited by actual working peer scientists. ONLY. Submit to journals of pedestrian and/or unimpressive Impact Factor (that would be the 2-4 range from my frame of reference). Submit interesting stories, whether they are "complete" or "demonstrate mechanism" or any of that bullshit. Then submit the next part of the continuing story you are working on. Repeat.

Oh, and make sure to submit to journals that don't require any page charge. Don't worry, they exist.

Give your trainees plenty of opportunity to be first author. Give them lots of experience writing and allow them to put their own thoughts into the paper. After all, there will be many more of them to go around.

See how the process works. Evaluate the quality of review. Decide whether your science has been helped or hindered by doing this.

Then revisit your prior complaints about how peer review is broken.

And figure out just how many of them have more to do with your own Glamour humping ways than they do with anything about the structure of Editor managed peer-review of scientific manuscripts.

Also see Post-publication peer review and preprint fans

18 responses so far

A "Registered Report" with a guarantee of publication?

Mar 20 2014, filed under Science Publication, Scientific Publication


From Drug and Alcohol Dependence:

Drug and Alcohol Dependence will now be offering a new submission format, Registered Reports, which offers authors the opportunity to have their research protocol reviewed before data collection begins, with acceptance of the protocol providing acceptance in principle of the eventual results, irrespective of the nature of the results.


and more specifically....

Manuscripts which comprise the introduction, hypotheses, methods, analysis plan (including a sample size justification, for example based on a power calculation) and pilot data if applicable can be submitted via this format, and will first be considered by Drug and Alcohol Dependence's Editor in Chief or one of its Associate Editors. Those Registered Report manuscripts considered of appropriate interest and value will be sent for peer review, and then either rejected or accepted in principle. Following acceptance in principle, the study can begin and the authors are expected to adhere to the procedures described in their initial submission. When data collection and analysis is complete, the authors are to submit their finalised full manuscript for final peer review. As long as the procedures originally described have been followed, and the results interpreted sensibly, the manuscript will be published, irrespective of the nature of the findings.


Yeah.... let me get right on that.



No responses yet

Older posts »