Archive for the 'Science Publication' category

Review unto others

I think I've touched on this before but I'm still seeking clarity.

How do you review?

For a given journal, let's imagine this time, one from which you sometimes get manuscripts rejected and sometimes get acceptances.

Do you review manuscripts for that journal as you would like to be reviewed?

Or as you have perceived yourself to have been reviewed?

Do you review according to your own evolved wisdom or with an eye to what you perceive the Editorial staff of the journal desire?

31 responses so far

Is the fact you reviewed this manuscript before confidential?

Apr 15 2016 Published under Peer Review, Science Publication

Interesting comment from AnonNeuro:

Reviews are confidential, so I don't think you can share that information. Saying "I'll review it again" is the same as saying "I have insider knowledge that this paper was rejected elsewhere". Better to decline the review due to conflict.

I don't think I've ever followed this as a rule. I have definitely told editors when the manuscript has not been revised from a previously critiqued version in the past (I don't say which journal had rejected the authors' work). But I can't say that I invariably mention it either. If the manuscript had been revised somewhat, why bother. If I like it and want to see it published, mentioning I've seen a prior version elsewhere seems counterproductive.

This comment had me pondering my lack of a clear policy.

Maybe we should tell the editor upon accepting the review assignment so that they can decide if they still want our input?

28 responses so far

Revise After Rejection

This mantra, provided by all good science supervisor types including my mentors, cannot be repeated too often.

There are some caveats, of course. Sometimes, for example, the reviewer wants you to temper your justifiable interpretive claims or cut Discussion points that interest you.

It's the sort of thing you only need to do as a response to review when it has a chance of acceptance.

Outrageous claims that are going to be bait for any reviewer? Sure, back those down.

17 responses so far

Strategic advice

Reminder for when you are submitting your manuscript to a dump journal.

Many of the people involved with what you consider to be a dump journal* for your work may not see it as quite so lowly a venue as you do.

This includes the AEs and reviewers, possibly the Editor in Chief as well. 

Don't patronize them. 
___

*again, this is descriptive and not pejorative in my use. A semi-respectable place where you can get a less-than-perfect manuscript published without too much hassle**.

**you hope.

43 responses so far

A new way to publish your dataset #OA waccaloons!

Elsevier has a new ....journal? I guess that is what it is.

Data in Brief

From the author guidelines:

Data in Brief provides a way for researchers to easily share and reuse each other's datasets by
publishing data articles that:

- Thoroughly describe your data, facilitating reproducibility.
- Make your data, which is often buried in supplementary material, easier to find.
- Increase traffic towards associated research articles and data, leading to more citations.
- Open up doors for new collaborations.
Because you never know what data will be useful to someone else, Data in Brief welcomes submissions that describe data from all research areas.

At the moment they only list Section Editors in Proteomics, Materials Science, Molecular Phylogenetics and Evolution, Engineering and Genomics. So yes, there will apparently be peer review of these datasets:

Because Data in Brief articles are pure descriptions of data they are reviewed differently than a typical research article. The Data in Brief peer review process focuses on data transparency.

Reviewers review manuscripts based on the following criteria:
- Do the description and data make sense?
- Do the authors adequately explain its utility to the community?
- Are the protocol/references for generating data adequate?
- Data format (is it standard? potentially re-usable?)
- Does the article follow the Data in Brief template?
- Is the data well documented?

Data in Brief that are converted supplementary files submitted alongside a research article via another Elsevier journal are editorially reviewed....

Wait. What's this part now?

Here's what the guidelines at a regular journal, also published by Elsevier, have to say about the purpose of Data in Brief:

Authors have the option of converting any or all parts of their supplementary or additional raw data into one or multiple Data in Brief articles, a new kind of article that houses and describes their data. Data in Brief articles ensure that your data, which is normally buried in supplementary material, is actively reviewed, curated, formatted, indexed, given a DOI and publicly available to all upon publication. Authors are encouraged to submit their Data in Brief article as an additional item directly alongside the revised version of their manuscript. If your research article is accepted, your Data in Brief article will automatically be transferred over to Data in Brief where it will be editorially reviewed and published in the new, open access journal, Data in Brief. Please note an open access fee is payable for publication in Data in Brief.

emphasis added.

So, for those of you that want to publish the data underlying your regular research article, instead of having it go unheeded in a Supplementary Materials pdf you now have the opportunity to pay an Open Access fee to get yourself a DOI for it.

20 responses so far

PLoS Has Angered the PastramiMachine!!!!

@pastramimachine is WICKED PISSED!

It starts with the page charges (aka the OA fee or whatever you want to call it; $2250 at PLoS Genetics according to this guy, and I think it is still around $1350 for the non-GlamourHound PLoS journals).

So what has him so angry?

Dude, do you have the slightest idea what people make in the private sector? At the executive level? Of a company with $40-50 Million in gross revenue?

The median total compensation package for CEOs totaled $378,000.

The report says that companies with $25-49.9M in annual revenue are at the median for CEO pay.

I understand that academics don't get paid as well as they might be but....surely you have SOME idea of what private sector jobs pay? And say, what do University Presidents pull down, anyway?

So what? Is there any business that fails to advertise itself? In the hopes of growing in size or at least maintaining current revenues? Where's the evidence this is excessive? Have you any idea what PLoS is up against in terms of the advertising budgets of NPG, Springer, Elsevier, etc?

This is ridiculous. It's like asking your home builder why he doesn't have a lumbering operation and sawmill out in Oregon or up in Saskatchewan or wherever the 2X4s come from. This Aries Systems Corp is the outfit that built the EditorialManager system used by several academic publishers. It's a service provider. Why would the publisher of a journal re-invent the wheel?

Lobbying activities to keep Elsevier from playing penny-ante shenanigans with Congress to totally obliterate the requirement to deposit manuscripts in Pub Med Central, perhaps? Or related efforts? Sure, PLoS lobbying activities may be mostly for them but it seems that having a wealthy organization opposing the pay journals works out well for the OA fans. How can you complain about this, guy?

Maybe I missed something? When did PLoS say they weren't a company? And heck, many not-for-profit entities have investment portfolios. Starting with your local University (say, Rutgers, for example). What do you think the "endowment" is? This is no crime. This is responsible stewardship of a business entity. Building up a cushion against future changes in the business climate. Smart work, PLoS! Somehow I don't think PastramiMachine would be too happy if PLoS went belly-up and all of the published papers disappeared because there was nothing to pay the server fees with!

This guy is delusional, mostly because of his stated belief that PLoS is some sort of capital-gee GoodGuy. That's on you, dude, not on PLoS.

__

More from Odyssey who picked out some replies from Michael Eisen.

81 responses so far

When reviewers can't relinquish their bone

Had an interesting category of thing happen on peer review of our work recently.

It was the species of reviewer objection where they know they can't lay a glove on you but they just can't stop themselves from asserting their disagreement. 

It was in several different contexts and the details differed. But the essence was the same. 

I'm just laughing.

I mean- why do we use language that identifies the weaknesses, limits or necessary caveats in our papers if it doesn't mean anything?

Saying "...and then there is this other possible interpretation" apparently enrages some reviewers, because this possibility is not seen as a reason to prevent us from publishing the data.

Pointing out that these papers over here support one view of accepted interpretations/practices/understanding can trigger outrage that you don't ignore those in favor of these other papers over there and their way of doing things. 

Identifying clearly and carefully why you made certain choices generates the most hilariously twisted "objective critiques" that really boil down to "Well I use these other models which are better for some reason I can't articulate."

Do you even scholarship, bro?

I mostly chuckle and move on, but these experiences do tie into Mike Eisen's current fevers about "publishing" manuscripts prior to peer review. So I do have sympathy for his position. It is annoying when such reviewer intransigence over non-universal interpretations is used to prevent publication of data. And it would sometimes be funny to have the "Your caveats aren't caveatty enough" discussion in public.

9 responses so far

Blooding the trainees

In that most English of pastimes, fox hunting, the noobs are smeared about the face with the blood of the poor unfortunate fox after dismemberment by hound has been achieved.

I surmise the goal is to get the noob used to the less palatable aspects of their chosen sporting endeavor. 

Anyway, speaking of manuscript review and eventual publication, do you plan a course for new trainees in the lab?

I'm wondering if you have any explicit goals for them- Should a mentor try to get new postdocs or grads a pub, any pub as quickly and easily as possible?

Or should they be thrown into a multi-journal fight so as to fully experience the joys of desk rejection, ultimate denial after four rounds of review somewhere and the final relief of just dumping that Frankensteinian monster of a paper in a lowly journal and being done. 

Do you plan any of this out for your newest trainees?

19 responses so far

#icanhazpdf and related criminal behavior

I was slow to start watching "Better Call Saul" for various reasons. Partially because I still haven't finished "Breaking Bad", partially because I couldn't see *that* as being the spinoff character and partially because I just hadn't gotten around to it. Anyway, the show is about a lawyer who we know from BB becomes deeply involved with criminal law.

There's a point in Season 1 where one character has a heart to heart with another character about the second person's criminal act.

"You are a criminal."

He then goes on to explain that he has known good guy criminals and bad guy cops and that at the end of the day, committing a crime makes you a criminal.

Anyway, dr24hours has some thoughts for those criminal scientists who think they are good guys for illegally sharing PDFs of published journal articles.

22 responses so far

Manuscript acceptance based on perceived capability of the laboratory

Dave asked:

I think about it primarily in the form of career stage representation, as always. I like to get reviewed by people who understand what it means to me to request multiple additional experiments, for example.

and I responded:

Are you implying that differential (perceived/assumed) capability of the laboratory to complete the additional experiments should affect paper review comments and/or acceptance at a particular journal?

I'm elevating this to a post because I think it deserves robust discussion.

I think that the assessment of whether a paper is 1) of good quality and 2) of sufficient impact/importance/pizzazz/interest/etc for the journal at hand should depend on what is in the manuscript. Acceptance should depend on the work presented, for the most part. Obviously this is where things get tricky because there is a critical difference here:

This is the Justice Potter Stewart territory, of course. What is necessary to support the conclusions, and where lies the threshold for "I just wanna know this other stuff"? Some people have a hard time disentangling their desire to see a whole 'nother study* from their evaluation of the work at hand. I do recognize there can be legitimate disagreement around the margin but....c'mon. We know it when we see it**.

There is a further, more tactical problem with trying to determine what is or is not possible/easy/quick/cheap/reasonable/etc for one lab versus another lab. In short, your assumptions are inevitably going to be wrong. A lot. How do you know what financial pressures are on a given lab? How do you know, by extension, what career pressures are on various participants on that paper? Why do you, as an external peer reviewer, get to navigate those issues?

Again, what bearing does your assessment of the capability of the laboratory have on the data?

__
*As it happens, my lab just enjoyed a review of this nature in which the criticism was basically "I am not interested in your [several] assays, I want to see what [primary manipulation] does in my favorite assays" without any clear rationale for why our chosen approaches did not, in fact, support the main goal of the paper which was to assess the primary manipulation.

**One possible framework to consider. There are data on how many publications result from a typical NIH R01 or equivalent. The mean is somewhere around 6 papers. Interquartile range is something like 3-11. If we submit a manuscript and get a request to add an amount of work commensurate with an entire Specific Aim that I have proposed, this would appear to conflict with expectations for overall grant productivity.

26 responses so far
