Feb 18 2016 Published by drugmonkey under Conduct of Science, Science Publication
I do not know why they don't just submit their stuff to JIF 3 journals.
Everything would be "accept, no revisions and can we get you a coffee Professor?"
All their supposed problems would be solved.
26 responses so far
But that isn't what happens. IME, peer review rigor is not the primary variable that distinguishes journal tiers, editor selection is. This is about communicating a unit of work when you want to and being able to formally refer to it. Then let whatever validation journals do happen later.
Let's try that experiment and see what happens.
In my field, JIF of 3 is a damn fine journal. I'd be THRILLED if it were so easy to get in to them.
Drugmonkey, it's like you read my mind. I keep wondering why Leslie Vosshall is championing preprints rather than submitting everything to PLOS One or Scientific Reports. I'd bet a grad student's stipend that any of the papers that she spent a year getting accepted at C/N/S would have flown right into one of those two journals. I can't figure out why submitting a paper to a preprint archive and then wasting a year doing more experiments for Nature is better for science than getting a peer reviewed and copy edited paper published in a decent journal within a couple months.
Also, my experience at low IF journals is not that the reviews are of lower quality than a high IF journal. It's that the reviewers/editors will accept less upon revision, i.e., they're more likely to be ok with a text response as opposed to an experimental response. And isn't that one of the main gripes the preprint people have with traditional publishing?
As an AE at PLOS ONE (JIF 3.24), I can assure you, at least with the papers I handle and the reviewers I select, that "accept, no revisions" is something that literally *never* happens. At *best* something might get through one round of minor revisions (although one round of major revision and one of minor is more common).
NPI- Supposedly. Yet I am roundly ignored when I tell people like Eisen to experiment with nonprestige, pedestrian journals.
Um, Michael Eisen has 11 PLOS ONE papers and his brother Jonathan has 17. The Eisen brothers aren't strangers to "nonprestige, pedestrian journals"
I am not talking about waccaloon OA demonstration pieces that are not at all like regular journals.
It only counts if they are in low-impact Elsevier journals?
Or Springer. Etc.
@NPI - "I can't figure out why submitting a paper to a preprint archive and then wasting a year doing more experiments for Nature is better for science...."
Because, unless you were born yesterday, you would know that it is less about science and more about personal gain.
Aside from the folks that comment on this blog, for every other academic I have talked to, almost without exception, CNS (and the IF prestige associated with them) is a HUGE deal, especially for those that are now used to publishing frequently in those avenues. Most of them don't give a flying fuck about PLoS or other journals in that IF vicinity except to dump some additional, tangential minor results from a major study to squeeze the last drops of juice from their data.
The society-level journals in my field (well, in my fields) are in that range.
They like the idea of pre-prints because it gives them the ability to stake their claim to a finding *and* still spend years wrangling with CNS for publication. Rather than get beat by a lab willing to go to JCB.
Does anyone know of a study of submission-to-publication lag time stratified by IF?
Are preprints really staking claim? Are they indexed in PubMed? How does anyone even know about these 'publications'?
Sure - it depends - does a paper? If a paper does, is it based on date of submission, acceptance, or publication?
BioRxiv RSS feed, Twitter, etc?
"Are preprints really staking claim? Are they indexed in PubMed? How does anyone even know about these 'publications'?"
-They show up in Google Scholar, which is where a lot of people do their literature searches these days (I like it 'cause theses come up, for instance). You get a citation to put on your grant apps.
Meeting abstracts show up in Google Scholar and ISI. We should be citing them. For priority.
jmz4- journals play shenanigans with "reject, but please resubmit a revision as a new submission" to defeat anyone figuring out the true lag time.
I'm good with citing meeting abstracts, which many journals seem not to prohibit. Or maybe the copyeditors don't check too closely. Generally if I can't find a follow-up pub on Google Scholar, as a courtesy I try to contact the poster authors first to see if they have an in-review manuscript they'd like me to cite instead of the abstract, so that the citation 'counts'.
For manuscripts that rise from the ashes of conference presentations, I've taken to adding an author note ("Portions of this work were presented at the 29th Annual Conference on Bunny Hopping in 2012") in the unlikely event that I find something cool and get scooped before the paper hits. This satisfies the smug 'I was here first' 5-year-old in me and doesn't seem overly obnoxious.
Really, the major impetus behind the support for pr33ps, at least among the BSD types, is to bypass the rule that glam magz won't consider a manuscript in review elsewhere. The pr33p can be used as leverage, essentially pitting glam mag against glam mag.
IIRC, review times are longest at the low and high ends of the IF spectrum, with a dip in the middle. However, it is impossible to tell if this is due to reality or systematic differences in reporting of submission dates across the spectrum.
The reason meeting abstracts are thought to be inadequate for citation or priority is that they don't include results or methods that would allow anyone to test or follow up on your findings. In many fields these are often preliminary results that the authors don't deem ready to publish. The idea is that pr33ps are "complete" works by whatever standard exists for you or your field.
Big picture is that peer review is slow everywhere. Reviewers ask for more experiments everywhere (true even for the majority of papers I've edited at PLOS One, though as AE I overrule them whenever possible). Nothing should stop people from putting complete papers in an accessible, citable format whenever they want to, for whatever reason.
I would say the people pr33ps work most *against* are the glamhound airbags who ram questionable stuff in through influence with pliable editors. If STAP or arsenic life had been posted as preprints, they never would have been published in journals. I would love for the community to get a crack at some of this me-too opto bullshit before it gets tested against glam editor credulity. If there were comments on preprints, you can bet editors and reviewers would read them.
What is going to define or enforce the completeness of the pre print?
What keeps people from doing improvised poetry when they give conference talks?
For the small town grocers who are living grant-to-grant, looking at pr33ps as a way to show productivity come renewal time: be aware that productivity is always relative. If right now a study section expects productivity in the range of 5 papers per R01 (hypothetical example), it's not like you'll be able to show 2 papers + 3 pr33ps as a sign of productivity. Nay, if pr33ps become established, perhaps you'll still have to keep up with the big labs that can put out 5 papers + 5 pr33ps in the same time frame.
DrugMonkey is an NIH-funded researcher who blogs about careerism in science. And occasionally about the science of drug use.