Placing PLoS ONE in the appropriate evaluative context

Jan 14 2013 · Filed under Impact Factor, PLoS ONE, Ponder

As you know, I have a morbid fascination with PLoS ONE and what it means for science, careers in science and the practices within my subfields of interest.

There are two complaints that I see offered as supposedly objective reasons for old-school folks' easy griping about how it is not a real journal. First, that they simply publish "too many papers". It was 23,468 in 2012. This particular complaint always reminds me of the "too many notes" scene in Amadeus,

which is to say that it is a sort of meaningless throwaway comment: a person who has a subjective distaste simply makes something up on the spot to cover it over. More importantly, however, it brings up the fact that people are comparing apples to oranges. That is, they are looking at a regular print type of journal (or several of them) and identifying the disconnect. My subfield journals of interest publish maybe somewhere between 12 and 20 original reports per issue. One or two issues per month. So anything from about 144 to 480 articles per year. A lot lower than PLoS ONE, eh? But look, I follow at least 10 journals that are sort of normal, run-of-the-mill, society-level journals in which stuff that I read, cite and publish myself might appear. So right there we're up to something on the order of 3,000 articles per year.
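The back-of-the-envelope arithmetic above can be spelled out explicitly; the per-issue and journal counts below are just the rough figures from the paragraph, not real bibliometric data:

```python
# Rough annual output of one "normal" society-level journal,
# using the illustrative figures from the text above.
reports_per_issue = (12, 20)   # original reports per issue, low/high
issues_per_year = (12, 24)     # one or two issues per month

low = reports_per_issue[0] * issues_per_year[0]    # 144 articles/year
high = reports_per_issue[1] * issues_per_year[1]   # 480 articles/year

journals_followed = 10
midpoint_total = journals_followed * (low + high) // 2
print(low, high, midpoint_total)   # 144 480 3120
```

So one subfield's journal diet alone lands "on the order of 3,000" articles per year, before multiplying by the other subfields PLoS ONE covers.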

PLoS ONE, as you know, covers just about all aspects of science! So multiply my subfield by all the other subfields (I can get to 20 easy without even leaving "biomedical" as the supergroup) with their respective journals and.... all of a sudden the PLoS ONE output doesn't look so large.

Another way to look at this would be to examine the output of all of the many journals that a big publisher like Elsevier puts out each year. How many do they publish? One hell of a lot more than 23,000, I can assure you. (I mean really, don't they have almost that many journals?) So one answer to the "too many notes" type of complaint might be to ask if the person also discounts Cell articles for that same reason.

The second theme of objection to PLoS ONE was recently expressed by @egmoss on the Twitts:

An 80% acceptance rate is a bit of a problem.

So this tends to overlook the fact that much more ends up published somewhere, eventually, than is reflected in any per-journal acceptance rate. As noted by Conan Kornetsky back in 1975 upon relinquishing the helm of Psychopharmacology:

"There are enough journals currently published that if the scientist perseveres through the various rewriting to meet style differences, he will eventually find a journal that will accept his work".

Again, I ask you to consider the entire body of journals that are normal for your subfield. What do you think the overall acceptance rate for a given manuscript might be? I'd wager it is competitive with PLoS ONE's 80% and probably even higher!
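As a toy model of why that wager is plausible: even a modest per-journal acceptance rate compounds quickly over resubmissions. The 25% per-journal rate here is a made-up illustration, not a measured figure:

```python
# Toy model: chance a manuscript lands *somewhere* within k submissions,
# assuming independent tries with the same per-journal acceptance rate p.
def eventual_acceptance(p, k):
    return 1 - (1 - p) ** k

for k in (1, 2, 3, 5):
    print(k, round(eventual_acceptance(0.25, k), 2))
# 1 0.25 / 2 0.44 / 3 0.58 / 5 0.76
```

Even at a hypothetical 25% per-journal rate, more than three quarters of manuscripts land somewhere within five tries.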

49 responses so far

  • bill says:

    I'd really like to see some good data on the "most papers get published somewhere" question -- just what is the overall "acceptance rate" for papers submitted, over and over again if necessary, to biomed journals in general? And of course, how does that compare to the PLOS ONE rate?

    It would also be interesting, though more difficult, to ask whether the resubmission roundabout resulted in any substantive improvements to the mss...

    I've asked around, and no one seems to have much data on this. Heather Piwowar found some studies done on individual journals, e.g. how much of what Nature rejects ends up published somewhere.

  • Joshua Gowin says:

    I often hear people dismiss PLOS articles as follows: "They had to PAY to publish their paper."

  • drugmonkey says:

    Unlike the host of well respected journals with page charges, right JG?

  • drugmonkey says:

    bill- it's cross-journal and cross-publisher...and I'd estimate that just about every journal would be motivated to ignore 1) what they reject that gets in elsewhere and 2) how much of what they accept was rejected elsewhere first. So good luck figuring this out in anything more than anecdotes.

    Me, I'm in no small part informed by 1) the number of manuscripts that I've felt were publishable (and sent somewhere) that never got in anywhere (zero) and 2) the number of crappy manuscripts I've reviewed negatively that I later noticed were published elsewhere (nonzero). So yeah, it is anecdote, but damn, it is strongly convincing.

    Glamour mags are a bad example because of where they fall in the ecosphere.

  • It also kind of depends on *why* the paper was rejected. If the paper was rejected from journal X for being scientifically unsound, then yes, it is annoying that it gets published elsewhere. If something is just rejected from journal X because the results are "uninteresting", then it isn't a problem -- what is "interesting" or "uninteresting" is very subjective, and history is full of papers thought to be uninteresting (a certain paper on breeding peas that a monk published in an obscure German-language Moravian journal, for example) that later turned out to be more interesting than they seemed at the time.

  • Busy says:

    I'd really like to see some good data on the "most papers get published somewhere" question -- just what is the overall "acceptance rate" for papers submitted, over and over again if necessary,

    For what it's worth, here's my personal data point: about 95-97% of my papers end up being published somewhere.

  • Busy says:

    First, that they simply publish "too many papers".

    I've heard this one many times before and do not see the logic to it. If the complaint is that things get lost in the literature, it is rendered moot by technology such as Google, Google Scholar, and Google Alerts. In this day and age of internet search and automated RSS feeds, we should be happy to publish anything that is methodologically clean and let people use it as they may.

    If the complaint is about the number alone, let us consider that the Large Hadron Collider "publishes" terabytes of data a day for use by the community in meta-studies, and this has become one of the main and most efficient ways to conduct experiments in high-energy physics.

  • bsci says:

    I think the trouble with the PLoS ONE volume is the lack of useful filters. I identify papers to read using RSS feeds. Even with the PLoS ONE subheadings, I feel like it's a mess to read so many article titles and I'm sure I overlook some good ones. (Though thanks to this post, I just noticed that most of my PLoS ONE RSS feeds, like the neuroscience feed, seem to have stopped updating in early December, which explains why I've been able to keep on top of tables of contents a bit better.) If I look at a more focused journal, I can better tune my attention to relevant stuff.

    Also, since there are fewer quality filters, the bar from seeing an interesting title to deciding to read the article is a bit higher. Still, the journal has clearly proven it publishes good stuff, and I do try to give articles with relevant titles and good abstracts a read.

  • Physician Scientist says:

    I stopped reviewing for PlosOne. They kept sending me stuff that wasn't even near my expertise. Like really far away -- e.g. "Anticancer and antimicrobial activities of some antioxidant-rich Cameroonian medicinal plants" for a molecular biologist/immunologist. I could list several more.

    With an 80% acceptance rate and likely 33,000 submissions a year, I don't think the reviewers are chosen well or in line with expertise just due to volume. I trust journals with lower numbers, lower acceptance rates and a greater qualified reviewer:paper ratio.

  • Dr Becca says:

    bsci, you can create RSS feeds of your areas of interest or keyword searches in pubmed--i.e., make your own filters, rather than feel constrained to/overwhelmed by pre-determined journal divisions. I have a folder in my google reader that gets updated whenever a new hit for one of my different searches comes up. It probably misses things once in a while, but certainly gives me enough to keep me busy.

  • whimple says:

    I've never had a paper rejected by PlosONE, but I did have a paper come back with "revision" requirements from a reviewer that were so far out in the "can't get there from here" zone that after one polite rebuttal letter/resubmission, I published elsewhere. Do they actually reject papers, or just string authors along until the authors walk away? Does it count as "rejected" if it is submitted, but not subsequently published? The goal of submitting there was rapid turn around time, in which regard PlosONE utterly failed to deliver.

  • Ola says:

    My biggest concern with PLoS is the pricing. They have none of the legacy costs of the old-fashioned publishing houses, yet still think it's OK to gouge $1500 per paper. Not cool, considering their baseline web hosting costs can't be that high. Certainly not in the $30m range!

  • @Physician Scientist
    As with any journal, there is no PLoS ONE "they" as a collective, just individual academic editors who decide who to request as a reviewer. Some editors (probably beginning ones) don't understand that you have to actually read the article to find good peer reviewers (the people the article cites, for example). I somewhat fault the web interface for editors, as it has a function to suggest reviewers from the database of potential reviewers based on keywords, many of which are far too broad (like "microbiology" or "bioinformatics").

  • @Ola
    The money isn't just from hosting; while the academic editors and reviewers are volunteers, there are also paid staff such as typesetters. Yes, some journals in fields like physics/math/CS avoid this by demanding that submissions be already typeset in LaTeX or something, but I'm not sure the typical biologist could do that. Although I'd love to be proved wrong -- I did my dissertation using LaTeX despite being a microbiologist. (Yes, I'm a geek.)

    It's worth noting that other OA journals seem to have roughly the same price as PLoS ONE, though, which suggests that it is pretty much the necessary cost -- there was that PeerJ startup thing people talked about last year that claimed to be able to do it cheaper, but that seems to be vaporware.

  • Ben Mudrak says:

    @Busy:

    This doesn't really provide the overall answer that you're looking for, but the British Journal of Surgery checked up on 926 manuscripts that it rejected in 2006 and found that 65.8% had been published as of early 2009 (http://www.ncbi.nlm.nih.gov/pubmed/20099256). These results were just based on a PubMed search, too, so they'd miss anything that isn't indexed by PubMed.

  • Eric Moss says:

    I bet at least 95-97% of teenagers who apply to PECU (Prestigious East Coast University) end up going to college somewhere. That doesn't mean all college students are equally qualified or capable. Nor were they necessarily 'improved' by the process.

  • DrugMonkey says:

    I've met quite a number of idiot morons with Yale or Harvard degrees and some of the most awesome people in the world who went to University of State.

  • DrugMonkey says:

    Also, it is pretty well established that the main value of the Ivy League is the company you keep and the dubiously accurate reputation of the University rather than anything about the specific graduate or the education that she has received.

  • toto@club-med.so says:

    The two objections you cite do make sense under an implicit assumption, which is that one important function of journals is to act as a rough filter for "what's worth reading". You can't read all the abstracts ever published (really, you can't), so the journal hierarchy offers you a cue for prioritizing your reading. Papers in Sci-ture will tend to show more important / interesting stuff than papers in the Patagonian Journal of Medical Hypotheses.

    Of course, that's exactly the assumption that PLoS One does away with. Everything goes in, so the only method left for a priori selection is just dumb random chance.

    Of course the holy grail would be to successfully harness community evaluation - something like Slashdot moderation on steroids. Several venues are trying to do this (PLoS One, Frontiers, etc.), so maybe it will happen eventually.

  • Grumble says:

    "Do they actually reject papers, or just string authors along until the authors walk away?"

    I can tell you from personal experience that PlosOne does reject papers.

    I can also tell you that the reasons for the rejection were utterly moronic, and that the editor exercised absolutely no discretion in dealing with an out-of-control reviewer.

    Finally, I can tell you that this negative experience means it's unlikely you'll find me submitting to Plos Fucking One ever again.

    So there.

  • bsci says:

    @Dr Becca, Perhaps I should do pubmed RSS feeds too, but journal table-of-contents feeds help me keep slightly informed on general stuff in my fields that I might not actively search for. In some ways the PLoS ONE system is the worst of journal TOC feeds and pubmed feeds, because so much is neither specifically nor generally relevant to my research interests. My favorite RSS feeds for my specific research areas are just papers that cite my papers. I obviously miss stuff there, but people who cite me more often than not are doing things that are highly relevant to my work. For what it's worth, I think PLoS changed their RSS feed system in December, so I needed to recreate new feeds through their internal search program based on subject areas.

  • DrugMonkey says:

    Toto-

    Wanting some other person (and particularly a professional editor at a GlamourMag) to tell you what is important in science is not being a scientist, it is being a sheep.

  • DrugMonkey says:

    Grumble-

    Do you say that every time a journal rejects your paper? Because you wouldn't have sent it if you didn't think it deserved to get in, right? Why is PLoS ONE any different if it fails to live up to your expectations?

  • We reject about 30-35% of submissions to PLOS ONE, mostly after peer review. Our Academic Editors (http://www.plosone.org/static/edboard.action) select reviewers and handle the review process, with the help of PLOS staff if needed.

    For rejection decisions that may not be in accordance with our publication criteria or are otherwise disputed, we do have an appeals process: http://blogs.plos.org/everyone/2011/09/03/ask-everyone-how-to-submit-an-appeal-request/

    Our post-publication 'community evaluation' is a combination of Article-Level Metrics (see http://www.plosone.org/static/almInfo; http://article-level-metrics.plos.org/) and article commenting (guidelines here: http://www.plosone.org/static/commentGuidelines.action). Search results can be sorted according to different metrics to help filter the results.

    (I am an Associate Editor for PLOS ONE, i.e. salaried staff)

  • Our product team is in the process of improving the table of contents email (which is imminent) and the RSS feeds (as soon as possible). We're sorry that the redesign caused some glitches for readers; as well as making everything hopefully look better, the redesign also involved changes to the back-end software to improve the publishing platform.

  • WS says:

    OK, so the actual acceptance rate is more like 65-70%. Keep in mind that the reason the acceptance rate is much lower at other journals is that most papers at the snobbier journals never get past the editors, many of whom are "career editors," i.e. they were not good enough to make a career as scientists. I am guessing that those editors cross-reference each submitted manuscript against a list of approved authors and/or institutions and, if no match is found, reject it with one of a few standard lines. I often think that if I created life in a test tube and submitted the manuscript to Nature with full mechanistic details, I would soon receive a response along the lines of "the subject matter is not suitable for the broad readership of Nature. Perhaps you should consider submitting to a more specialized journal." OK, that might be a bit of an exaggeration, but only a bit.

  • Grumble says:

    DM -

    It's a phenomenon called "conditioned avoidance."

    There are plenty of other podunk little journals that are perfectly appropriate for publishing those of my stories that happen also to be podunk. Why should I tear my hair out arguing with PlosOne's incompetent and amateurish editors about the STOOPIDEST LITTLE PAPER when I can deal with experts at society journals who (a) have been editors of journals for a long time, so that (b) they know what the fuck they are doing?

    And if the story happens to be more important, I'm not going to submit it to the likes of PlosOne anyway.

  • DJMH says:

    Wanting some other person (and particularly a professional editor at a GlamourMag) to tell you what is important in science is not being a scientist, it is being a sheep.

    Right! That's why, instead of following any one particular blog, I just google all of the internet, every day, to find blogs that might be publishing something of interest to me, using a wide range of relevant keywords that changes on a monthly basis as my own interests evolve. Then I screen through each of the 200 hits that come up each day, and read and evaluate them all.*

    * Actually, what I do is check out blogs of known interest to me based on previous writings on those blogs, i.e. prescreen everything for quality based on prior evidence of quality. I guess this makes me a Sheep of the Internet.

  • DrugMonkey says:

    My response depends on whether you check my blog first and most consistently DJMH.

  • Busy says:

    That doesn't mean all college students are equally qualified or capable. Nor were they necessarily 'improved' by the process.

    Did any one claim that by any chance?

    p.s. On a different note thanks to Matt H. for the data.

  • DrugMonkey says:

    It is pretty funny that some dude is acting all outraged at being rejected by PLoS ONE in the same thread others are arguing it should reject more to be taken seriously.

  • Confounding says:

    @DrugMonkey I can sympathize with his stance - I've certainly felt that way about journal rejections before, when the primary takeaway message from the review was "Reviewer N only half read the paper, and has asked some genuinely nonsensical questions. For this, my paper has been rejected".

    It's not the rejection. It's the quality of the reviews that prompted that rejection.

  • Spiny Norman says:

    "Papers in Sci-ture will tend to show more important / interesting stuff than papers in the Patagonian Journal of Medical Hypotheses."

    ...and consequently will also tend to show more stuff that is "wrong."

    The current system of publication in biomedical research provides a distorted view of the reality of scientific data that are generated in the laboratory and clinic. This system can be studied by applying principles from the field of economics. The “winner's curse,” a more general statement of publication bias, suggests that the small proportion of results chosen for publication are unrepresentative of scientists' repeated samplings of the real world. The self-correcting mechanism in science is retarded by the extreme imbalance between the abundance of supply (the output of basic science laboratories and clinical investigations) and the increasingly limited venues for publication (journals with sufficiently high impact). This system would be expected intrinsically to lead to the misallocation of resources. The scarcity of available outlets is artificial, based on the costs of printing in an electronic age and a belief that selectivity is equivalent to quality. Science is subject to great uncertainty: we cannot be confident now which efforts will ultimately yield worthwhile achievements. However, the current system abdicates to a small number of intermediates an authoritative prescience to anticipate a highly unpredictable future. In considering society's expectations and our own goals as scientists, we believe that there is a moral imperative to reconsider how scientific data are judged and disseminated.

  • DrugMonkey says:

    So you are suggesting our reification of the two Journals of Type I Error as the pinnacle of achievement is a good thing?

  • Grumble says:

    PlosOne shouldn't reject more papers to be taken seriously. What it should do is reject more editors. Currently, its selection criteria for editors go something like this:

    Applicant: "I want to be an editor."

    PlosOne: "OK."

    This sort of system was imperative for a journal that wanted to rapidly expand to cover almost all areas of science. Because it prioritized expansion over quality, the result is that the editors (at least in the fields I publish in) are inexperienced wannabes. I can see this having a detrimental effect on both acceptances (papers accepted because of nepotism or other bad reasons) and rejections (papers rejected because of incompetent editorial oversight of the review process).

    So I don't care about the acceptance rate, and I say to hell with them anyway.

  • DJMH says:

    Oh DM, you are the J Neurosci of my Internet reading.

    But seriously, I am never going to have time to screen a bazillion articles myself, in either science or the Internet. So in science, I use a combo of Pubmed alerts for authors plus TOC from about seven journals in my email. On the Internet, I use the NY Times to tell me what I need to know and Andrew Sullivan to tell me everything the Times isn't telling me. And certain science bloggers for the rest.

    People will always want and need heuristics for screening through the mass of what's out there. So I just think the energy is better spent on improving the results of those heuristics (e.g. acting as a reviewer, or founding a high quality journal with scientists as editors) than in claiming we should all read every science paper that comes out. 'Sides, isn't that sorta Kerny of you?

  • DrugMonkey says:

    Maybe PLoS ONE also serves as a training ground for editors... Interesting thought.

  • @Grumble
    People *ask* to be editors? Weird. Volunteering for extra unpaid work? First time I'd heard of that. They asked *me*. And to be honest, I thought it was more because I used to work for Jonathan Eisen (Michael's brother) than my great fame as a scientist.

    @DM
    Interesting idea. It certainly was my first experience in being an editor. Maybe now that it's clear I'll never be a Francis Crick I should aspire to be John Maddox instead.

  • Ola says:

    @Jonathan Badger
    There was that PeerJ startup thing people talked about last year that claimed to be able to do it cheaper, but that seems to be vaporware.

    Far from it. The first issue is due out next month. If PeerJ can do it for $100, PLoS' $1350 (not $1500 as I said earlier) really begins to look bad. The fact that other OA journals charge similar fees is not a good justification -- they charge the same because they see PLoS can get away with it.

    Regarding PLoS finances, this progress update shows their annual revenue for 2011-12 was $22.3m, with expenses of $18.3m, and they have about $12m banked. While their 22% profit margin is not quite as ridiculous as Elsevier's 36%, it's still pretty excessive (Universities/Hospitals/Medical Centers - i.e., the ones generating the data - typically run 4-6%).

  • antistokes says:

    a bit busy right now managing my manuscript through PLoS ONE's ... interesting review process, but i must briefly say i am actually mildly impressed by the kinetics thus far. not bad for a bunch of neurodudes. provided ya know how to....talk to them and the optics referees; sigh.

  • "PeerJ can do it for $100".
    $99 is the PeerJ membership fee per author that allows them to publish one paper per year, and each author has to be a member: https://peerj.com/pricing/

    "Applicant: 'I want to be an editor.' PlosOne: 'OK'."
    We consider requests to become an Academic Editor at PLOS ONE, but we don't approve all the requests as they need to be a well-established researcher. We also invite researchers who meet our criteria, e.g. when recommended by our existing editors.

    Matt

    (PLOS ONE Associate Editor, i.e. salaried staff)

  • drugmonkey says:

    they need to be a well-established researcher.

    What does this mean? I'm pretty sure I've seen names on the list (and self-identification in my discussion threads, fwiw) that are not professorial rank faculty yet.

    I'm not saying that people who have not attained professorial rank can't be good academic editors -- I would probably argue that they can be. Nevertheless, "well-established researcher" means professorial rank, probably at least associate professor and with a long list of pubs, to many ears.

    Since you are on the line, Matt, I'm still curious about the delay in assigning Academic Editors for papers and how this interacts with your selection/recruitment of Academic Editors. The opt-in system suggests strongly to me that you would best address the delayed-assignment issue by having more specific experts. I'm assuming here that the closer a manuscript is to a given AE's area of personal interest, the more likely she is to take it on. So if you can identify domains of research that tend to languish without an editor stepping up (or is it more random?) then this should tell you where you need to devote recruiting efforts. Right?

    Does PLOS ONE review their lists of manuscripts that go for longer than usual without an editor taking it on and identify possible topic clusters?

    One reason to make this a high priority item is that if a subfield has a few people have negative experiences, this can eventually poison the whole field's perception of the journal.

  • Academic Editors are almost always PIs and usually have a couple of dozen publications, but we have introduced stricter standards over time and there may be reasons to make exceptions.

    "If you can identify domains of research that tend to languish without an editor stepping up (or is it more random?) then this should tell you where you need to devote recruiting efforts."

    That's exactly what we're doing, and we're upping these efforts over the coming year. We put a lot of effort into assigning papers to Academic Editors, and the staff editors like myself step in to identify suitable editors if our usual matching is unsuccessful. Most papers, however, are assigned to an editor within a few days of the first invite being sent.

  • drugmonkey says:

    So the non-professor Academic Editors are legacy from the early days? Is that what you mean?

  • drugmonkey says:

    Do you boot academic editors for not handling enough manuscripts?

  • antistokes says:

    to Hodgkinson:

    thank you for your efforts. i know your eyes are bleeding right now under the sheer amount of manuscripts submitted.

    also the PLoS ONE office is very professional. (well, to me at least.) even if my manuscript is rejected, i might still be able to write a good review of your referee process to my PhD boss. who, by the way, is not exactly pleased i went into brain tumor diagnostics instead of protein vibrational spectroscopy.

    thinking about going back to protein crystal structures here. i am all for healthy competition, but i have TAed as a grad student in new york. long island undergrad pre-med students are a bit.... worried. esp. since a lot of the pre-meds could not do basic electron pushing problem sets. (i TAed as a grad student, so i was the one grading them for the pre-med organic chem course. although we did switch it around for grading the tests, just to be fair. because if you're not fair, you might get sued.)

  • Spiny Norman says:

    "So you are suggesting our reification of the two Journals of Type I Error as the pinnacle of achievement is a good thing?"

    Bwahahahahahaha! It's not always error!

    Hendrik Schön, Editor-in-Chief!

  • A says:

    None of you have considered the 'science pollution factor' impact on science quality, applicability of science, and science effect in economics.

    Yeah, so far the publication struggle looks more like an artificial survival pressure for a decaying economy than what the mission of science should be.

    Yes, the many publications keep people working and generate income, but they leave science crammed with data that is not scrutinized for the role of science in education and applications. And they make the germination of loopholes an easy-access tool for big problems attributed to science and scientists and anything related to that. Rather more like a multipurpose sacrificial stone.

    The revenue/profit contribution to the economy is not the problem, but what can rather be perceived as a waste of resources, which is a constant idea over time for exhausting/ruling the masses. Or human farming.

    How about re-reviewing publications?------ make free access

    Compilations of significant/compelling data?---- make free access

    Contrast those reviews to current theories for validation or not.

    ETC

    I find that more useful than opening constant subfields that diffuse everything, kind of like the constant description of 'new clinical syndromes' to be able to do research or 'distinguish yourself'.

  • Kumar Sambamurti says:

    Rejection rates are a poor metric of the quality of a journal. Take a bank, for example: if it rejects 80% of loans but favors and freely funds its own friends regardless of creditworthiness, where will it be? The ultimate metric has to be the quality of papers actually published in the journal. I have had a paper rejected by PLOS ONE that was then accepted by FASEB J. The rejection or acceptance of a paper is really about the peer reviewers involved. Often reviewers take it upon themselves to reject good work, whereas other reviewers at the same journal take a kinder perspective on even poorer-quality papers, so the journal publishes the weaker paper and rejects the stronger one. One of my early and formatively important papers was rejected by Cell, Neuron and J Neuroscience and was finally published in J Neurosci Res, where it has well over 150 citations. In the meantime, all that poor-quality peer review wasted the quality time of a young scientist.

    The bigger problem seems to be that mediocre scientific gatekeepers cannot judge the quality of work and would rather rely on easily quantified metrics instead. With this metric system, they have driven up the value of the location and number of publications, which has driven a lot of noisy science to high-visibility places. Journals are capitalizing to sell themselves as well as possible. To bring back reason, the problem has to be sorted by evaluation of the actual accomplishments of scientists rather than the numbers and locations of their papers. In this evaluation, deliberately false publications should be severely reprimanded (I am sure these are very rare despite all the noise and high profile), negligence should get a slap on the wrist, random noise should be silenced and only meaningful advances should be recognized and rewarded. However, a free and fair forum does need to be provided where every scientist can find a voice and not be silenced by a handful of self-styled experts before that judgement can be made. Remember that Mendel's work was rejected by several leading journals, only to be "rediscovered" over three decades later. I am sure that a group of intelligent high school students of his time would have known better, but the experts (workers in a specific area with a few publications) thought otherwise. PLOS ONE provides an excellent forum for the dissemination of information and its interpretations, with follow-up capabilities. That should be the sole purpose of a journal -- not to be a forum for mudslinging and cacophony.
