Question for the critic of pre-publication review (such as @mbeisen)

Oct 07 2015 · Filed under Science Publication

Where is the incentive for authors to respond to post-publication review?

The pre-publication review hurdle induces authors to alter the manuscript. In ways minor (modifying the strength of conclusions, adding reference to relevant prior work) and major (new experiments).

Most often this improves the final work.

Taking away the barrier of initial publication removes the incentive to modify the manuscript. How is this to be replaced?

47 responses so far

  • dr24hours says:

    I'm a staunch supporter of pre-pub review, but I'm not sure this is the right question. People keep telling me that there's no real evidence that prepub review improves the quality of published manuscripts. That might be true (I don't know and haven't seen this supposed evidence). But it's not the main reason that I'm a fan of prepub review.

    My biggest issue is the filtering of duplicitous crap and nonsense. Let's stipulate for a moment that pre-pub doesn't improve manuscripts (not conceding the point, just imagining it's true). That doesn't address the many, many submissions to journals that get filtered out and *not published* because of peer review.

    I do concede we don't know how many manuscripts that is. I also concede that some crap and nonsense sneaks through. But in general, most journals that conduct honest peer review are not full of nonsense, even though, based on my review experience, they get a lot submitted to them.

    Eliminating pre-pub review means that science, pseudoscience, and lunacy all compete in the same space. And since no one actually conducts much post-pub review, and what there is is concentrated on hot, glamorous papers, eliminating pre-pub review means eliminating review. Look no further than arXiv to see what happens.

    Want to talk about a reproducibility crisis? Want to give every homeopath and ND and chiropractor instant credibility in the eyes of the public? Stop filtering their duplicitous crap at the level of publication.

  • dsks says:

    "People keep telling me that there's no real evidence that prepub review improves the quality of published manuscripts."

    Yeah, but absence of evidence isn't evidence... &c. The problem is that it's not easy to empirically verify the impact of pre-review on paper quality, so these folk are being a bit cheeky taking this line.

    N=1 here with a fair number of replicates... but this statement certainly doesn't jibe at all with my experience. As much as I'll groan, moan and curse like a bastard about the process, my manuscripts have always improved after responding to reviewers' comments appropriately and doing the suggested/requested experiments. I'm not really sure how this can ever not be the case (any personal anecdotes suggesting otherwise?). One can argue as to whether the extra experiments provide sufficient extra support to the conclusions to have been worth the extra labor and resources, but any additional data is generally going to make a manuscript stronger, right? Even if it's just approaching the same question from a different angle to hammer an extra nail into the take-home message.

    I was a believer in post-pub review at first (and in the whole online scienze discussion is da futures! thing), but it seems like it was all guff founded on an initial flurry of excitement and wishful thinking. The vast majority of folk who engage in this seem hung up on Westerns, and indications of fraud etc. Not a whole lot of proper discussion. And I'm willing to confess I'm part of the problem; I just don't comment on articles. Neither do any of my colleagues in the field. It's just not a method of communication that folk appear to be comfortable engaging in. They'd rather just email, or chat at conferences. So I agree with Dr24hrs that post-pub review, for the most part, just isn't really going to happen.

  • A bit of devil's advocate but: why do we CARE that the manuscript changes?

    Data lasts. Interpretations do not.

  • Jonathan Badger says:

    "Look no further than arXiv to see what happens."

    I think the "arXiv is full of crap" meme isn't really that well supported. Yes, you certainly can find isolated instances of crap or pseudo-science there, but the fact that the physics community has basically abandoned journals in favor of arXiv kind of implies that these aren't really a major problem in reality. Whether or not the arXiv model would work in biology is a different issue, of course.

  • Although I currently work in ecology, I was a research scientist in medicine for about a dozen years. What I observed then was that some folks were able to publish oodles of duplicitous material by submitting to non-peer reviewed journals. So I believe your assertion is correct that eliminating peer review across the board would be disastrous.

    The other problem we observe nowadays is the rise of the "assembly line" journals, which claim to conduct peer review but have been found to do it so marginally that loads of unworthy material gets published. Lastly, in my current field of ecology, I have observed what I characterize as frequent regurgitation, which is slightly better than duplicate publishing but is still annoying. The regurgitant publications rehash things at the level of methodology, and really report nothing new other than very slight alterations that have no overall benefit to the approaches. Yet these seem to frequently get through the peer review process.

    Recently I read a blog suggesting that open-source peer review, where authors simply post a manuscript online and solicit comments, seems to be working well. The approach attracts persons truly interested in the topic, and because they are doing the review voluntarily (as opposed to being assigned papers by an editor) their input appears more informed and genuine. Thus I was curious whether you endorse this new approach, or see any potential in it going forward.

  • qaz says:

    "People keep telling me that there's no real evidence that prepub review improves the quality of published manuscripts."

    I'm sorry, but this just does not fit any data from ANY paper I've ever seen, either my papers or the ones that I've reviewed or that my friends and colleagues have written. You could argue that the original paper was good enough, but I don't know anyone who seriously argues that the revision is not *better*.

    And what Doc24 said. The public already has enough trouble differentiating between real and pseudo science. Can you imagine if the papers all competed in public? Just think of all those "journalists" who do "he said/she said" (ok - it's usually "he said/he said") bringing together a noted scientist with real data and a wacko with something he made up out of nowhere. (Take for example that wonderful moment when the scientist explained, in very clear detail, that Fukushima radiation would not reach California in dangerous levels -- which was not the scare story the TV newshost wanted -- and the host said, "well, we'll just have to agree to disagree". The shocked look on the scientist's face is priceless.) Can you imagine policy makers where every creationist or climate-change-denier or industry astroturfed fake was put on an equal level with real scientists?

  • qaz says:

    Also, post-publication review and pre-publication review serve different purposes.

    It would be good to find a way to make post-publication review a useful thing. But we do that already. We attempt to replicate data. We argue over interpretation. We construct theories that reinterpret data. We reinterpret data when new data arrives to make us rethink what that old data meant. (That's called "science", people!)

    @JB - arXiv is not a journal of record, even in physics. Adding a bio-arXiv might be a good idea (such a thing exists, but has not received buy-in from biologists the way arXiv has from physicists), but even the physicists do not use arXiv as their primary literature source. (How many citations in a typical journal article are to arXiv?)

  • Physpostdoc says:

    I disagree with the disparagement of arxiv as a source of citation. I am of course strictly talking about Physics.

    First, almost all groups across fields in Physics post their articles on arxiv (including many groups beyond the US). It is a means to communicate science to peers and lay claim to important work without waiting for the (occasionally lengthy) peer review process to kick in. Posting online does not make any serious practitioner of science any less serious, or any more cavalier, about what they post!

    Second, even with published articles, it is a great resource to have, as it makes access to scientific articles much more democratic, without being bound by journal paywalls and subscriptions. This is important for people in small places and developing nations who may not have such privileges readily available to them. Socialist argument aside, even scientifically it allows people to retain longer/full versions of their articles without much worry about compliance with the length guidelines usually imposed by journals, where papers often appear only in abridged, modified versions when published.

    Third, and more relevant to the current discussion, posting on arxiv by no means can be used as an argument for or against peer review. Arxiv is not used, and probably not even intended, to replace journals. It in fact has provisions to update doi/journal information once the article is published, and also allows revised versions to be uploaded with comments specifying the exact change or just saying "version accepted for publication".

    As an aside, arxiv has also recently come up with some safeguards, such as a peer endorsement system that allows an author who is new to arxiv to post only if an 'established' scientist (= registered author of five or more arxiv articles in that field of study in the past three years) endorses hir first. It is not a perfect way to weed out crackpots, but it is a step in the right direction.
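The endorsement rule described above can be sketched in a few lines of code. This is purely illustrative -- it is not arXiv's actual implementation, and the function name and inputs are invented for the example:

```python
# Illustrative sketch of the stated rule: an endorser must be a registered
# author of five or more arXiv articles in the relevant field within the
# past three years. (Hypothetical code, not arXiv's real system.)
from datetime import date, timedelta

def is_eligible_endorser(article_dates, field_matches, today=None):
    """article_dates: posting dates of the candidate endorser's articles.
    field_matches: parallel list of booleans, True if that article is in
    the field the new author wants to post in."""
    today = today or date.today()
    cutoff = today - timedelta(days=3 * 365)  # roughly three years back
    recent_in_field = sum(
        1 for d, in_field in zip(article_dates, field_matches)
        if in_field and d >= cutoff
    )
    return recent_in_field >= 5
```

For example, an author with five in-field postings from earlier this year would qualify; five postings from 2011 would not.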

  • A. Tasso says:

    I publish a lot in global health -- at the interface between public health and social science of HIV, mental health, non-communicable diseases (Tanzania, South Africa, Zimbabwe). And I would have to agree that for many of the journals where I publish (they would generally be considered top field journals or middle-tier journals, with IF range 2-5) peer review only improves my papers about 1/2 to 2/3 of the time. Because most of my research is based on data collected in other cultural contexts, very frequently the journal editor will assign a peer reviewer from a southern or eastern African country, and it is very clear that these reviewers do not know basic epidemiology, biostatistics, journal publishing conventions, etc. In fact these are often the *hardest* reviews to deal with because what they write is completely unintelligent -- and I spend a lot of my time trying to figure out how to say "you're an idiot" but very nicely and respectfully.

  • gmp says:

    "the physics community has basically abandoned journals in favor of arXiv kind of implies that these aren't really a major problem in reality."

    The physics community most definitely does not consider arXiv an alternative to actual peer-reviewed journals. However, arXiv is indeed a very useful repository. People post their work there usually around the same time they submit to a journal for peer review, or sometimes post publication. arXiv gives people a nice compilation of what's new out there, regardless of where it will appear eventually (you can get daily email updates listing new submissions) and it's a good place to cite when the paper exists but it's still in review (e.g., arXiv enables you to easily cite your own or someone else's in-review paper in another manuscript submission or in a grant proposal). I certainly make sure to post on arXiv papers that are behind a paywall or otherwise might not be widely accessible. Sometimes colleagues see your stuff on arXiv and contact you with questions and comments, which is cool. But yeah, no one mistakes arXiv for peer-reviewed publications.

  • qaz says:

    @Physpostdoc - exactly!

    arXiv is a very useful resource. I wish biological journals would allow us to post there. As you say, it is useful for establishing priority and for allowing discussion to progress. As such it serves roles similar to that of SFN abstracts for neuroscience and conference proceedings for other fields. None of these replace journals or pre-publication peer review.

    The fact that people can cite arXiv when the journal article doesn't exist is similar to neuroscientists citing SFN abstracts when the journal article doesn't exist. But when given the choice, people always cite the journal article over the arXiv article or SFN abstract (or at least it is recognized that they should).

  • drugmonkey says:

    Remember when people would cite SfN abstracts qaz? Not so long ago..... Then the field decided this wasn't ok and you started seeing complaints about citations to "non-archival material". I think this cultural shift bears on how hard it will be to shift back to treating pre-pub manuscripts as "real".

  • qaz says:

    DM - People still cite SFN abstracts. I see it in lots of journal articles, particularly in reviews that address new topics, like Current Opinion and Trends journals and the like. A lot of the journals ask that SFN abstracts be listed inline rather than in the bibliography, but even in those journals, I often see SFN abstract citations in the bibliographies.

    You are right that I have stopped seeing citations to very old SFN abstracts (say more than 2-3 years). Whether that is due to the lack of availability (are they even still archived past 2-3 years?) or is due to some change in the citation zeitgeist, I don't know. The only explicit cases I know where people should have but didn't cite SFN abstracts were priority issues -- where person A refused to cite person B's SFN abstract and thus appeared to have created a result de novo, without prior literature. (Bad person A!)

    Nevertheless, I certainly still see citations of SFN abstracts for the last few years. I think this is very parallel to how physicists cite arXiv.

    I also don't think we want to think of pre-pub manuscripts as "real". I think we want them to play a different role than post-pub manuscripts.

  • jmz4gtu says:

    " I'm not really sure how this can ever not be the case (any personal anecdotes suggesting otherwise?)"
    -Probably not what you meant, but my guess is that reviewer-demanded (as opposed to suggested) experiments are the most likely to be forged with fraudulent data. You're dramatically increasing the incentive for fraud if you tell some lab worker, "ok, this paper getting accepted and possibly our funding rely on this experiment generating a specific result..."
    I've only seen a couple reviews like that, where they demand not only an experiment but state that negative or uninterpretable results will not clear the bar for publication, but they do exist, and are likely doing more harm than good.

    On topic, I think the main motivations to edit post-pub are to address discrepancies between labs, answer accusations of fraud, or maybe put a piece of floating data up there that clarifies things but doesn't fit in its own story.

    The real question I think is who should be allowed to make the changes, should such a format arise. The PI, the first author? Any author?

    Then the field decided this wasn't ok and you started seeing complaints about citations to "non-archival material".

    Pre-publication manuscripts on arXiv and bioRxiv are specifically archival material, even if they are not peer-reviewed before posting. Both archives have gone to some trouble to make sure pre-publications posted there will remain accessible to everyone for as long as a paper in any electronic journal is.

  • dr24hours says:

    For the record, when saying that "people keep telling me..." I was not intending to imply that I agreed with that position. My own papers have tended to be better post-review. But not all of them. Sometimes I was just jumping through hoops for annoying reviewers.

  • Jo says:

    > Remember when people would cite SfN abstracts qaz? Not so long ago..... Then the field decided this wasn't ok and you started seeing complaints about citations to "non-archival material".

    Oh c'mon. Next you'll be telling me that you can't cite personal communications.

  • Newbie PI says:

    I have no doubt that pre-publication review of papers usually makes them better, but in my experience, the experiments requested by reviewers rarely change the ultimate conclusion.

    The other problem I'm currently having with pre-pub review is that if you want to publish something in a decent journal that disputes results in a high profile paper, you will be destroyed by the reviewers. A paper we just submitted was assigned FIVE reviewers. The reviews were actually longer than the manuscript (not kidding). The primary critique is that in addition to all of the experiments we did, they want us to perform identical repeats of the experiments in the paper we are refuting. Not just identical experiments, but with reagents obtained from the other lab. I mean, really? Try getting plasmids and cell lines from a lab in China for the purpose of disproving their Nature paper.

  • thorazine says:

    qaz makes a really good point, one that I hadn't thought about before, about the likely effect of all-post-pub peer review on public scientific discourse.

    Otherwise, I've always hated post-pub peer review essentially because it's optional to participate. I agree to review nearly everything I am asked to; if post-pub review were the norm, I would probably review almost nothing - essentially because I don't have time, and I don't particularly like to get in fights with strangers on the internet. Post-pub disproportionately favors those who have too much free time (HHMI investigators?) and/or are internet trolls for fun.

  • physioprof says:

    I wish biological journals would allow us to post there.

    I'm pretty sure that Cell Press does not consider deposit of manuscripts in Internet archives to be prior publication.

  • MorganPhD says:

    What is the opportunity cost for time delays during peer review?

    I get so angry thinking about the new science that isn't being done as I spend 2-4 months polishing off Supplemental Figure 19D.

    I don't think we need to get rid of Pre-Pub Review, but we're literally wasting money and time NOT making scientific discoveries because we're inefficiently trying to publish the results.

  • Rheophile says:

    physioprof: Cell could be better. They say if you have questions, you need to contact the editor. However, Science and Nature state explicitly that arxiv-style preprints are completely OK.

    Publishing your arxiv manuscript when you submit the paper removes like 50% of the downsides of pre-pub peer review. Wish more people did it (and read preprints) in biomed.

  • poke says:

    Interesting analysis of opportunity cost of high impact publications:

  • Grumble says:

    I still cite SFN abstracts, but if it's been a few years since the abstract and the paper hasn't come out, well, that tells you something, right? So I don't cite old abstracts.

    The problem with treating SFN abstracts as a version of arXiv is that they are *short*. The story is presented in more detail in the poster or talk, but that really is not "archival material" - it's the very definition of ephemeral. On the other hand, in a field where there are 20,000 new results (i.e., abstracts) every year, perhaps brevity is useful for people trying to keep up.

  • Grumble says:


    That's a really interesting analysis. Here's the heart of the matter, an example from the author's own papers:

    "If I had first submitted this paper to the journal where it was eventually published in and had not lost time due to rejections and resubmissions, it would have been published 1.27 years earlier and would have accumulated approximately 70 more citations. On average, each resubmitted paper accumulated 47.4 fewer citations by being published later..."

    So it's a classic risky decision-making task: do you select the extremely high risk/high reward option, or do you select the slow-and-steady-wins-the-race option? It's interesting that drug abusers tend to do the former whereas non-abusing controls tend to do the latter. When it comes to glam, scientists' behavior has an uncomfortable propinquity to that of crack addicts.
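Taking the quoted averages at face value, the implied arithmetic is easy to check. This is a back-of-the-envelope sketch; the only numbers used are the ones quoted above, not new data:

```python
# Figures quoted from the analysis discussed above.
delay_years = 1.27            # time lost to rejections and resubmissions
citations_lost = 70           # citations forgone over that delay
avg_loss_per_resubmission = 47.4

# Implied citation accrual rate while that paper sat in review:
rate = citations_lost / delay_years   # roughly 55 citations per year

# Average citations forgone after n extra resubmission rounds:
def expected_loss(n_resubmissions):
    return n_resubmissions * avg_loss_per_resubmission
```

So under these numbers, two extra rounds of rejection-and-resubmit cost a paper on the order of 95 citations on average.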

  • mH says:

    We all know from sending manuscripts to colleagues that 5 or 6 different scientists will give you 5 or 6 different ways to make your paper "better." Who cares if the paper is "better" in the idiosyncratic way 2-3 reviewers suggest? Most of the improvements are clarifications of wording or emphasis, which in the long term is meaningless compared to the data.

    What journal peer review almost always does is make papers longer, with more experiments. This is incontrovertibly a waste of time.

    There are fields that basically publish every set of experiments they do as a separate paper--the equivalent of 1 figure in a prestige biomed journal. They're doing fine. (And their h-indices are insane.) Biomed has gotten itself so balled up in meaningless concerns about scope and story because it favors the large, rich labs and BSDs. Just get shit out the door and into your colleagues hands.

    I am in favor of plos one style pre-pub review -- the ideal version, not what actually happens there. Does it pass a sniff test? Are the data presented coherently and analyzed appropriately? Do they support the conclusions? Fini.

  • shrew says:

    The problem is not peer review, which is fucken essential. The problem is supplemental material.

    Supplemental material makes it possible for reviewers to demand experiments that are not going to change the central figures, but become part of the white noise of the supplement that is never properly examined.

    Supplemental material makes it possible for glam journals to demand a bolus of data only a ridiculously well funded BSD lab could support the production of. Normal human beings with feet of clay and only 1 R01 can't afford to produce dozens of figures to sacrifice on the pyre of the Supplement instead of publishing them as their own stories.

    Supplemental material takes Impactful (if you catch my drift) publications away from society journals, which used to be the repository of the full story when a Letter to Nature was 2 pages and 3 figures tops.

    My publications have Supplement sections, we all have to play the game. But the Supplement is the oubliette of science.

  • drugmonkey says:

    Demands for 5 regular papers' worth of supplementary materials is absolutely an intentional JIF gaming strategy from the Glams. Absolutely. Keeps the competition further behind.

  • drugmonkey says:

    mH- this is entirely within the ability of the authors to choose. If they want to do this, the peer reviewed journals are available. Not just PONE either.

  • qaz says:

    @shrew is entirely correct, but the problem is not just supplemental material. If you look at GlamJournals, people are now packing multiple figures into single figures. So GlamJournalX may say you get 4 figures, but if each of those figures has 20 parts (yes, I've seen 20 unrelated parts in a single figure), then you are, again, packing 80 figures of data into a single publication.

    The problem is a lack of respect for the concept of the scientific literature. Before the scientific journal there were only monographs. A monograph would take a decade for a BSD lab to write (think Galileo) or a lifetime for lesser souls to write (most scientists in the day put out one book). Because people would be scared of getting scooped, they would send anagrams and indecipherable codes to their friends so that they could say "see, I figured out the moon controlled the tides before you did, I just wasn't ready to publish yet".

    The scientific journal was created so that people could put *partial* stories out so that others could build on them. The problem is the Glamourization of science which has moved us to the concept that a single paper is a complete story or a complete breakthrough that dots every i and crosses every t (and puts that little curlicue on the base of every Q) instead of just a piece of the puzzle.

    Getting rid of supplemental material is a good first step for lots of reasons (don't forget that citations in supplemental don't count), but the problem is deeper than that.

  • geranium says:

    In my experience, Cell Press (or at least Cell) disallows preprints but everybody else I've submitted to is fine with it.

    This very handy site provides preprint policies for all journals:

  • A Salty Scientist says:

    I've started to see peeps in quantitative genetics post pre-prints to bioRxiv. Other than the journals that disallow pre-prints (which are becoming fewer and fewer), what is the downside?

    As to the question of incentivizing modification of a manuscript, I see several issues. First, is there even a mechanism to modify pre-prints, and how should this be handled transparently? For example, should there be version numbers? Second, some (most) people will not modify unless they are compelled to by reviewers/editors (and most pre-prints are unlikely to generate any comments). But we've seen papers "slip" past peer review with flaws, and we've seen scientists become defensive and double down on what they must know is a flawed interpretation of their data. This is a problem in science, and it's disappointing to see otherwise very intelligent people not willing to admit an honest error and just let the science move forward. Lack of peer review does not solve this problem.
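On the versioning question raised above: one transparent option is the arXiv-style scheme of numbered versions hanging off a stable identifier, each with a change summary. A minimal sketch -- the identifier format and fields here are made up for illustration, not any archive's real schema:

```python
# Hypothetical model of transparent pre-print versioning, along the lines
# of arXiv's v1/v2 suffixes. Not any real archive's data model.
from dataclasses import dataclass, field

@dataclass
class Preprint:
    base_id: str                                   # e.g. "2015.00123" (invented)
    versions: list = field(default_factory=list)   # (version, change summary)

    def revise(self, summary):
        """Record a new version with a human-readable change summary."""
        v = len(self.versions) + 1
        self.versions.append((v, summary))
        return f"{self.base_id}v{v}"

p = Preprint("2015.00123")
p.revise("initial posting")                        # -> "2015.00123v1"
p.revise("added control experiment, fixed stats")  # -> "2015.00123v2"
```

The point of the design is that every revision stays visible: readers can diff versions, and an author's responsiveness (or lack of it) is on the record.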

  • Tal Yarkoni says:

    This question has been answered many times in many guises. See for example the various papers summarized here. To summarize:

    1. In the not-too-distant future, it will look very bad not to respond to post-publication criticism. Already, platforms like PubPeer email authors whenever someone leaves a comment on their paper -- which means that there's no way to pretend you're not aware of someone's criticism. People already routinely comment on the absence of a response to trenchant criticism on PubPeer, and there is no question that failure to acknowledge concerns raises questions in readers' minds. This pressure will only increase as post-publication evaluation becomes more common and visible.

    2. As the role of conventional journals diminishes, it becomes much easier to post revised manuscripts -- so the bar required to make changes to one's manuscript becomes much lower. I suspect that part of DM's concern is grounded in the belief that revising a manuscript after publication is a big deal. Well, it is a big deal right now, because you can't easily do it. Right now, if someone critiques your Science paper and points out a valid problem, there isn't any way for you to modify your results or conclusions accordingly, short of outright retraction -- which is a disproportionate response to the vast majority of criticisms. This creates a siege mentality that leads authors to defend their paper at all costs, because there's simply no way to acknowledge problems with some parts of the paper but not others. Now compare that with a model that encourages and expects revisions. Already, we're seeing new publication outlets that don't have this problem. F1000Research, PeerJ, and a number of others already allow you to track multiple versions of a manuscript formally, including version numbers in the DOI. Under this kind of system, posting a revision is much less effortful and unusual, so it's no longer something that requires a good deal of motivation.

    3. Extending (1), we will eventually have reputation systems that directly reward authors for being responsive and constructive. I don't mean that there will be some card-carrying authority that decides when DM has done an adequate job addressing concerns; I mean that there will be centralized, widely-used platforms featuring threaded conversations associated with votes and ratings, and the community will essentially decide what is a good response and what isn't (much as Stack Overflow users effectively filter the quality of questions and answers). It will be quite apparent in such platforms when someone is responding to criticisms constructively, and when they're basically AWOL. Put simply, there will be a very strong incentive to behave well, because your reputation will depend on it.

    The key point in all of this is centralization and visibility. The problem right now is not that post-publication review can't be an effective motivator of revisions; in the long run, it will be an order of magnitude more effective than the current pre-publication model. The problem is that for the above factors to do their job properly, there have to be sufficient eyeballs directed at a single platform to make the consequences of ignoring criticism meaningful. Now, you might want to argue that this will never happen, and that's fine (though I would happily take that bet, because it seems to me that things are already moving in that direction pretty rapidly). But the bigger point is that this isn't an inherent limitation of post-publication review, it's a reflection of the relative infancy of the post-publication model. That model will increase steadily in visibility and adoption in the coming years, until at some point we'll hit an inflection point where it becomes clear that everything that pre-publication once accomplished can be accomplished better and more efficiently via post-publication review. At that point, authors who ignore trenchant criticisms on their papers will do so at their own peril.

  • physioprof says:

    But the bigger point is that this isn't an inherent limitation of post-publication review, it's a reflection of the relative infancy of the post-publication model. That model will increase steadily in visibility and adoption in the coming years, until at some point we'll hit an inflection point where it becomes clear that everything that pre-publication once accomplished can be accomplished better and more efficiently via post-publication review. At that point, authors who ignore trenchant criticisms on their papers will do so at their own peril.


  • Busy says:

    I had a job dealing with typography long ago when it seemed like a pipe dream that one day authors would upload something close to the final version of the paper and journals would just tweak the file and send it to the press. And here we are...

    I was also there when arXiv got created, when it looked like a pipe dream that people would voluntarily upload their pre-prints there; now there are a million of them, and in my field they are cited freely just like technical reports once were.

    I was there when Wikipedia took on Encyclopaedia Britannica in what seemed an impossible battle and yet Wikipedia won.

    I have yet to form an opinion on the post-publication process, but I wouldn't be surprised if it took off, in spite of how unlikely it might seem now.

  • AcademicLurker says:

    The thing is that post publication peer review already happens. It's called "science".

    The idea of post publication peer review 2.0 (peer review of the future!) through some sort of online comment section linked to manuscripts has been implemented several times now already, and it sank like a stone every time.

    The closest thing to online post publication peer review that I've seen takes place on individual blogs. For example, Rosie Redfield during the arsenic DNA fiasco, and there are a number of (theoretical) physics blogs where the blogger regularly goes over recently published work pointing out which assumptions or claims she takes issue with.

    I was initially intrigued by F1000, but haven't looked at it in years as it just didn't seem to have anything useful. I only heard of PubPeer because of the threads here, and checking out the site, I see that the vast majority of papers have only 1 comment.

  • Philapodia says:

    Who the hell has the time to post-publication review something? I don't have time to do all of the pre-pub reviews I get asked to do, and I get very little credit for doing that. The only people who I think would care are those with an axe to grind and direct competitors. Most scientists I know have too many other obligations to do anything like this in a serious fashion. Most of us just diss the competition Jay Z / Nas style, by taking each other's wack theories out in our own papers using our smooth flow and sweet rhymes.

  • Geo says:

    We already have pre-pub review. It is known as journal editorial boards and assigned reviewers. That is sufficient. A second layer of review opens the process to amateur hour, or worse yet, meddling by those with a personal agenda.

  • Busy says:

    >> meddling by those with a personal agenda.

    because that doesn't happen today in pre-pub review, no siree bob.

  • >> Who the hell has the time to post-publication review something?

    Crackpot loons looking for spliced western blots to masturbate to.

  • Busy says:

    >> Who the hell has the time to post-publication review something?

    You could say the same thing about Wikipedia, pre-pub review, blogging, tweeting, or writing blog comments, for that matter... but hey, reach a conclusion first, look for (bogus) reasons after.

  • What would keep people from doing post-pub review exactly the way you do pre-pub review now? Just as now, it'll be a staple component of the way we filter, sort, and discover the relevant work among the ~2M papers published each year, so the same incentives apply; the difference is that those who are actually interested in the discoveries can get to them without having to wait 15 months for the one far-fetched control experiment showing what everyone involved already knew anyway.

  • Philapodia says:

    Pre-pub review is considered "service" or scholarly activity by most universities and is expected of active faculty members as part of their job description. Post-pub review is not recognized by most universities (at least not mine), and if you are going to do it with your name attached and be a good, rigorous reviewer, it will take a good deal of time to do well, with no official "credit" for doing it. Some may have time for that, but I would get a lot more bang for my buck by publishing a paper that disproves someone's crackpot theory than by just commenting on it in post-pub review.

    Tweeting, blog writing, or blog commenting is more therapy than rigorous review, but it does serve a purpose in releasing angst and frustration within the community. Wikipedia editing is for pasty white guys who live in their parents' basements and collect Futurama memorabilia.

  • qaz says:

    I do pre-pub review to protect the scientific literature in journals that I respect and care about. Pre-pub review is a gate-keeper process. I am not ashamed to say that and I stand by that as part of my responsibility as a scientist.

    As I understand it, the goal of the proposed post-pub review is to destroy that gate-keeper process and turn all of science into a public free-for-all of twitter arguments and blog comments. Given the inability of the non-scientific populace to interpret science already, this seems to me to be a true disaster, unless we can [re-]construct some sort of post-publication gate-keeper process. But then we have a gate-keeper process that will make people unhappy when they have been kept out.

    As @AL said earlier, we already have real post-pub review. It's called science. It has a long time constant, but science corrects itself. Although many people believe that the long time constant is due to people with agendas, it is actually due to the fact that post-publication review (i.e. replication, re-interpretation, and integration with new data) depends on the construction of new experiments, new data, and new theories, all of which takes time.

    It seems to me that we could have pre-publication reports (like arXiv and SFN abstracts, cite them until the journal comes out), but only really act on journal publications that have been through pre-publication peer review, and that would solve the proposed problems. But that seems pretty close to what we're doing already.

  • Philapodia says:


    Did you ever know that you're my hero
    And everything I would like to be?
    I can fly higher than an eagle
    For you are the wind beneath my wings

  • Busy says:

    >> Pre-pub review is considered "service" or scholarly activity by most universities and is expected of active faculty members as a part of their job description.

    As it is in mine, but I have yet to hear of anyone ever, in any place, getting into trouble because they didn't do enough pre-pub reviews.

  • Pinko Punko says:

    "Thousands shout "Qaz, yes!""

    I'm paraphrasing a long-remembered headline from "The Militant" seen in a bookstore window.
