Archive for the 'Scientific Publication' category

Review unto others

I think I've touched on this before but I'm still seeking clarity.

How do you review?

For a given journal, let's imagine this time that it is one from which you sometimes get manuscripts rejected and sometimes get acceptances.

Do you review manuscripts for that journal as you would like to be reviewed?

Or as you have perceived yourself to have been reviewed?

Do you review according to your own evolved wisdom or with an eye to what you perceive the Editorial staff of the journal desire?

31 responses so far

Revise After Rejection

This mantra, provided by all good science supervisor types including my mentors, cannot be repeated too often.

There are some caveats, of course. Sometimes, for example, a reviewer wants you to temper interpretive claims that are perfectly justifiable, or to cut Discussion points that interest you.

That is the sort of concession you only need to make in response to review when the revision actually has a chance of acceptance.

Outrageous claims that are going to be bait for any reviewer? Sure, back those down.

17 responses so far

Bias at work

A piece in Vox summarizes a study from Nextions showing that lawyers are more critical of an identical brief when they believe it was written by an African-American attorney.

I immediately thought of scientific manuscript review and the not-unusual request to have a revision "thoroughly edited by a native English speaker". My confirmation bias suggests that this is way more common when the first author has an apparently Asian surname.

It would be interesting to see a similar balanced test for scientific writing and review, wouldn't it?

My second thought was.... Ginther. Is this not another of the thousand cuts contributing to African-American PIs' lower NIH grant success rates and their need to revise proposals extra times? Seems as though it might be.

22 responses so far

Strategic advice

Reminder for when you are submitting your manuscript to a dump journal.

Many of the people involved with what you consider to be a dump journal* for your work may not see it as quite so lowly a venue as you do.

This includes the Associate Editors and reviewers, and possibly the Editor in Chief as well.

Don't patronize them. 
___

*again, this is descriptive and not pejorative in my use. A semi-respectable place where you can get a less-than-perfect manuscript published without too much hassle**.

**you hope.

43 responses so far

A new way to publish your dataset #OA waccaloons!

Elsevier has a new ....journal? I guess that is what it is.

Data in Brief

From the author guidelines:

Data in Brief provides a way for researchers to easily share and reuse each other's datasets by publishing data articles that:

- Thoroughly describe your data, facilitating reproducibility.
- Make your data, which is often buried in supplementary material, easier to find.
- Increase traffic towards associated research articles and data, leading to more citations.
- Open up doors for new collaborations.

Because you never know what data will be useful to someone else, Data in Brief welcomes submissions that describe data from all research areas.

At the moment they only list Section Editors in Proteomics, Materials Science, Molecular Phylogenetics and Evolution, Engineering and Genomics. So yes, there will apparently be peer review of these datasets:

Because Data in Brief articles are pure descriptions of data they are reviewed differently than a typical research article. The Data in Brief peer review process focuses on data transparency.

Reviewers review manuscripts based on the following criteria:
- Do the description and data make sense?
- Do the authors adequately explain its utility to the community?
- Are the protocol/references for generating data adequate?
- Data format (is it standard? potentially re-usable?)
- Does the article follow the Data in Brief template?
- Is the data well documented?

Data in Brief that are converted supplementary files submitted alongside a research article via another Elsevier journal are editorially reviewed....

Wait. What's this part now?

Here's what the guidelines at a regular journal, also published by Elsevier, have to say about the purpose of Data in Brief:

Authors have the option of converting any or all parts of their supplementary or additional raw data into one or multiple Data in Brief articles, a new kind of article that houses and describes their data. Data in Brief articles ensure that your data, which is normally buried in supplementary material, is actively reviewed, curated, formatted, indexed, given a DOI and publicly available to all upon publication. Authors are encouraged to submit their Data in Brief article as an additional item directly alongside the revised version of their manuscript. If your research article is accepted, your Data in Brief article will automatically be transferred over to Data in Brief where it will be editorially reviewed and published in the new, open access journal, Data in Brief. Please note an open access fee is payable for publication in Data in Brief.

emphasis added.

So, for those of you who want to publish the data underlying your regular research article, instead of having it go unheeded in a Supplementary Materials pdf, you now have the opportunity to pay an Open Access fee to get yourself a DOI for it.

20 responses so far

PLoS Has Angered the PastramiMachine!!!!

@pastramimachine is WICKED PISSED!

It all stems from the page charges (aka the OA fee or whatever you want to call it; $2250 at PLoS Genetics according to this guy, and I think it is still around $1350 for the non-GlamourHound PLoS journals).

So what has him so angry?

Dude, do you have the slightest idea what people make in the private sector? At the executive level? Of a company with $40-50 Million in gross revenue?

The median total compensation package for CEOs totaled $378,000.

The report says that companies with $25-49.9M in annual revenue are at the median for CEO pay.

I understand that academics don't get paid as well as they might be but....surely you have SOME idea of what private sector jobs pay? And say, what do University Presidents pull down, anyway?

So what? Is there any business that fails to advertise itself? In the hopes of growing in size or at least maintaining current revenues? Where's the evidence this is excessive? Have you any idea what PLoS is up against in terms of the advertising budgets of NPG, Springer, Elsevier, etc?

This is ridiculous. It's like asking your home builder why he doesn't have a lumbering operation and sawmill out in Oregon or up in Saskatchewan or whereTFever the 2X4s come from. Aries Systems Corp is the outfit that built the Editorial Manager system used by several academic publishers. It's a service provider. Why would the publisher of a journal re-invent the wheel?

Lobbying activities to keep Elsevier from playing penny-ante shenanigans with Congress to totally obliterate the requirement to deposit manuscripts in PubMed Central, perhaps? Or related efforts? Sure, PLoS lobbying activities may be mostly for them but it seems that having a wealthy organization opposing the paywalled journals works out well for the OA fans. How can you complain about this, guy?

Maybe I missed something? When did PLoS say they weren't a company? And heck, many not-for-profit entities have investment portfolios. Starting with your local University (say, Rutgers, for example). What do you think the "endowment" is? This is no crime. This is responsible stewardship of a business entity. Building up a cushion against future changes in the business climate. Smart work, PLoS! Somehow I don't think PastramiMachine would be too happy if PLoS went belly-up and all of the published papers disappeared because there was nothing to pay the server fees with!

This guy is delusional, mostly because of his stated belief that PLoS is some sort of capital-gee GoodGuy. That's on you, dude, not on PLoS.

__

More from Odyssey who picked out some replies from Michael Eisen.

81 responses so far

When reviewers can't relinquish their bone

Had an interesting category of thing happen on peer review of our work recently.

It was the species of reviewer objection where they know they can't lay a glove on you but they just can't stop themselves from asserting their disagreement. 

It was in several different contexts and the details differed. But the essence was the same. 

I'm just laughing.

I mean- why do we use language that identifies the weaknesses, limits or necessary caveats in our papers if it doesn't mean anything?

Saying "...and then there is this other possible interpretation" apparently enrages some reviewers that this possibility is not seen as a reason to prevent us from publishing the data. 

Pointing out that these papers over here support one view of accepted interpretations/practices/understanding can trigger outrage that you don't ignore those in favor of these other papers over there and their way of doing things. 

Identifying clearly and carefully why you made certain choices generates the most hilariously twisted "objective critiques" that really boil down to "Well I use these other models which are better for some reason I can't articulate."

Do you even scholarship, bro?

I mostly chuckle and move on, but these experiences do tie into Mike Eisen's current fevers about "publishing" manuscripts prior to peer review. So I do have sympathy for his position. It is annoying when such reviewer intransigence over non-universal interpretations is used to prevent publication of data. And it would sometimes be funny to have the "Your caveats aren't caveatty enough" discussion in public.

9 responses so far

Blooding the trainees

In that most English of pastimes, fox hunting, the noobs are smeared about the face with the blood of the poor unfortunate fox after dismemberment by hound has been achieved.

I surmise the goal is to get the noob used to the less palatable aspects of their chosen sporting endeavor. 

Anyway, speaking of manuscript review and eventual publication, do you plan a course for new trainees in the lab?

I'm wondering if you have any explicit goals for them. Should a mentor try to get new postdocs or grad students a pub, any pub, as quickly and easily as possible?

Or should they be thrown into a multi-journal fight so as to fully experience the joys of desk rejection, ultimate denial after four rounds of review somewhere and the final relief of just dumping that Frankensteinian monster of a paper in a lowly journal and being done. 

Do you plan any of this out for your newest trainees?

19 responses so far

Manuscript acceptance based on perceived capability of the laboratory

Dave asked:

I think about it primarily in the form of career stage representation, as always. I like to get reviewed by people who understand what it means to me to request multiple additional experiments, for example.

and I responded:

Are you implying that differential (perceived/assumed) capability of the laboratory to complete the additional experiments should affect paper review comments and/or acceptance at a particular journal?

I'm elevating this to a post because I think it deserves robust discussion.

I think that the assessment of whether a paper is 1) of good quality and 2) of sufficient impact/importance/pizzazz/interest/etc for the journal at hand should depend on what is in the manuscript. Acceptance should depend on the work presented, for the most part. Obviously this is where things get tricky, because there is a critical difference here: the difference between experiments that are genuinely necessary to support the claims being made and experiments a reviewer simply wants to see.

This is the Justice Potter Stewart territory, of course. What is necessary to support the conclusions, and where lies the threshold for "I just wanna know this other stuff"? Some people have a hard time disentangling their desire to see a whole 'nother study* from their evaluation of the work at hand. I do recognize there can be legitimate disagreement around the margins but....c'mon. We know it when we see it**.

There is a further, more tactical problem with trying to determine what is or is not possible/easy/quick/cheap/reasonable/etc for one lab versus another lab. In short, your assumptions are inevitably going to be wrong. A lot. How do you know what financial pressures are on a given lab? How do you know, by extension, what career pressures are on various participants on that paper? Why do you, as an external peer reviewer, get to navigate those issues?

Again, what bearing does your assessment of the capability of the laboratory have on the data?

__
*As it happens, my lab just enjoyed a review of this nature in which the criticism was basically "I am not interested in your [several] assays, I want to see what [primary manipulation] does in my favorite assays" without any clear rationale for why our chosen approaches did not, in fact, support the main goal of the paper which was to assess the primary manipulation.

**One possible framework to consider. There are data on how many publications result from a typical NIH R01 or equivalent. The mean is somewhere around 6 papers. Interquartile range is something like 3-11. If we submit a manuscript and get a request to add an amount of work commensurate with an entire Specific Aim that I have proposed, this would appear to conflict with expectations for overall grant productivity.
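To make the arithmetic concrete, here is a rough back-of-the-envelope sketch. I am assuming, purely for illustration, an R01 with three Specific Aims; the papers-per-grant numbers are the ones cited above.

\[
\frac{6 \ \text{papers (mean yield of an R01)}}{3 \ \text{Specific Aims}} \approx 2 \ \text{papers' worth of effort per Aim}
\]

On those assumed numbers, a revision demand commensurate with a full Aim adds roughly two papers' worth of effort to a single manuscript, i.e., about triple the effort that overall grant productivity expectations would budget for one publication.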

26 responses so far

Priority

I am working up a serious antipathy to the notion of scientific priority, spurred most recently by the #ASAPbio conference and the associated fervent promotion of pre-print deposit of scientific manuscripts.

In science, the concept of priority refers to the fact that we think of the first person to [think up, discover, demonstrate, support, prove, find, establish] something as somehow special and deserving of credit.

For example, the first paleontologist to show that this odd collection of fossils over here belonged to a species of Megatyrannoteethdeath* not previously known to us gets a lot of street cred for a new discovery.

Watson and Crick, similarly, are famed for working out the double helical structure of DNA** because they provided the scientific community with convincing amounts of rationale and evidence first.

Etc.

Typically the most special thing about the scientists being respected is that they got there first. Someone else could have stumbled across the right bits of fossil. Many someones were hotly trying to determine how DNA was structured and how it worked.

This is the case for much of modern bioscience. There are typically many someones that have at least thought about a given issue, problem or puzzle. Many who have spent more than just a tiny bit of thought on it. Sometimes multiple scientists (or scientific groups, typically) are independently working on a given idea, concept, biological system, puzzle or whathaveyou.

As in much of life, to the victor go the spoils. Meaning the Nobel prize in some cases. Meaning critical grant funding in other cases- funding that not only pays the salary of the scientists with priority but that goes to support their pursuit of other "first" discoveries. Remember in the Jurassic Park movies how the sober paleontology work was so desperately in need of research funds? That. In addition, the priority of a finding might dictate which junior scientists get Professorial rank jobs, the all-important credit for publication in a desired rank of scientific journal and ultimately the incremental accumulation of citations to that paper. Finally, if there ends up being a commercial value angle, the ones who have this priority may profit from that fact.

It's all very American, right? Get there first, do something someone else has not done and you should profit from that accomplishment. yeeehaw***.

Problem is......****

The pursuit of priority holds back the progress of science in many ways. It keeps people from working on a topic because they figure that some other lab is way ahead of them and will beat them to the punch (science always can use a different take, no two labs come up with the exact same constellation of evidence). It unfairly keeps people from being able to get rewarded for their work (in a multi-year, multi-person, expensive pursuit of the same thing does it make sense that a 2 week difference in when a manuscript is submitted is all-critical to the credit?). It keeps people from collaborating or sharing their ideas lest someone else swoop in and score the credit by publishing first. It can fuel the inability to replicate findings (what if the group with priority was wrong and nobody else bothered to put the effort in because they couldn't get enough credit?).

These are the things I am pondering as we rush forward with the idea that pre-publication manuscripts should be publicized in a pre-print archive. One of the universally promoted reasons for this need is, in fact, scientific priority. Which has a very, very large downside to it.
__
*I made that Genus up but if anyone wants to use it, feel free

**no, not for being dicks. that came later.

***NSFW

****NSFW

45 responses so far
