Authors fail to illuminate the LPU issue

We most recently took up the issue of the Least Publishable Unit of science in the wake of a discussion about first authorships (although I've been talking about it on the blog for some time). In that context, the benefit of having more, rather than fewer, papers emerging from a given laboratory group is that individual trainees have a better chance of getting a first-author slot. Or they get more of them. This is highly important in a world where the first-author publications on the CV loom so large. Huge in fact.

I've also alluded to the fact that LPU tendencies are a benefit to the conduct of science (as a group enterprise) because they allow the faster communication of results, the inclusion of more methodological detail (critical for replication and extension) and potentially the inclusion of more negative outcomes (which saves the group time).

I have also staked my claim that in an era when most of us find, sort and organize literature with search engine tools from our desktop computers, the "costs" of the LPU approach are minimal.

The recent APS Observer reprinted a column in the NYT that I'd originally missed entitled "The Perils of 'Bite Sized' Science" (Marco Bertamini and Marcus R. Munafò; published January 28, 2012). Woot! No offense, commentariat, but you've done a dismal job so far of making an argument for why the LPU approach is so bad or detrimental to the conduct of science, particularly in response to my reasons. So I was really stoked to see this, in hopes of gaining some insight. I was sadly disappointed.

The authors start off by putting themselves in a hole:

In a 2010 article, the psychologist Nick Haslam demonstrated empirically that, when adjusted for length, short articles are cited more frequently than other articles — that is, page for page, they get more bang for the buck. Professor Haslam concluded that short articles seem “more efficient in generating scientific influence” and suggested that journals might consider adopting short-article formats.

I quote that because their first argument relates directly:

Suppose that the scientists who will cite your studies will cite them in either format, either the long article or the pair of shorter articles. Based on citations, each of the three articles would have the same impact, but on a per-page measure, the shorter articles would be more “influential.” But this would reflect only how we measure impact, not a difference in actual substance or influence.

Yeah, so what. The "actual substance or influence" is the same for the pair versus the single paper. These are made up of the same data! So the only possible difference is... that's right, the entirely circular argument that a bigger paper is "better". We can dismiss this as tautological.

On to the second main point:

we challenge the idea that shorter articles are easier and quicker to read. This is true enough if you consider a single article, but assuming that there is a fixed number of studies carried out, shorter articles simply mean more articles. And an increase in articles can create more work for editors, reviewers and, perhaps most important, anyone looking to fully research or understand a topic.

PhysioProf brought this up on the prior post and it is arguable when it comes to the editors, reviewers and (PP's point) in submission. But the last point I "challenge". It's utter nonsense in the PubMed/Scopus/Google Scholar/Mendeley/etc era.

third:

we worry that shorter, single-study articles can be poor models of science. Replication is a cornerstone of the scientific method, and in longer papers that present multiple experiments confirming the same result, replication is manifestly on display; this is not always so with short articles. (Indeed the shorter format may discourage replication, since once a study is published its finding loses novelty.) Short articles are also more likely to suffer from “citation amnesia”: because an author has less space to discuss previous relevant work, he often doesn’t do so, which can give the impression that his own finding is more novel than it actually is.

The first point is possibly unique to the psychological sciences, and it is true that the "long form" in this subfield (see the Journal of Experimental Psychology titles for the model) tends toward multiple self-referential replications with minute variations. This has some benefits but it can get boring, btw. It is possible that lasting scars from reading all those "full story, substantial papers" at an early training stage sowed the seeds of my current attitude, I will confess.

Novelty? Meh. I think we need to move away from this anyway. And look, encouraging more space for LPU articles is not saying we should do away with or prohibit long, long papers. Not at all. You want to do that? Go ahead. (Just don't force your poor trainees to pay the price, is all.)

Citation amnesia? Yeaaaah, well let's just say longer-form articles are no protection against that! Seriously though, in my view the idea of a LPU is distinguished from the character-count limitations of articles that are "Brief Communications" or whatnot. To me the LPU tends to address the science, the number of figures and experiments if you will, and not so much the length of the Intro or Discussion. What's another paragraph of Discussion? Or even an added page? Again, we are moving to an electronic publishing/distribution format in which the cost of adding pages is starting to look like a very bad argument.

The authors end up with a more substantive, scientific concern:

Finally, as we discuss in detail in this month’s issue of the journal Perspectives on Psychological Science, we are troubled by the link between small study size and publication bias.

I dunno. I think they are moving the goalposts here. The idea of "Least Publishable" means that it is, well, Publishable: the data you are accepting for publication are good in and of themselves. The authors seem to be decrying a situation in which there are not really enough subjects, suspecting that "Least Publishable" really means a shift to poster-science, where any preliminary result that manages to get you a p<0.05 is good to go. I don't see it like this at all. For me the emphasis is that what we are discussing is a good finding, up to the standards of your field. It would be a perfectly acceptable figure or three to include in what people seem to mean by a "complete story". It's just that there are one to three figures instead of seven (plus eleven supplemental figures).
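For what it's worth, the small-study/publication-bias concern the authors raise does have a real statistical core, and it's easy to see in a toy simulation. The numbers below (true effect, group size, significance threshold) are illustrative assumptions of mine, not figures from the authors' paper: if only the small studies that happen to clear p<0.05 get written up, the published literature systematically overstates the effect.

```python
import math
import random
import statistics

random.seed(1)

def two_sample_t(x, y):
    """t statistic for two equal-size groups (Welch-style standard error)."""
    n = len(x)
    se = math.sqrt(statistics.variance(x) / n + statistics.variance(y) / n)
    return (statistics.mean(x) - statistics.mean(y)) / se

true_effect = 0.2   # assumed true standardized effect
n = 10              # subjects per group in each small "least publishable" study

all_effects = []    # observed effect in every study run
published = []      # observed effect in studies that cleared the filter
for _ in range(5000):
    treat = [random.gauss(true_effect, 1) for _ in range(n)]
    ctrl = [random.gauss(0.0, 1) for _ in range(n)]
    d = statistics.mean(treat) - statistics.mean(ctrl)
    all_effects.append(d)
    if abs(two_sample_t(treat, ctrl)) > 2.1:  # roughly p < 0.05 at ~18 df
        published.append(d)

print(f"mean effect, all studies run:  {statistics.mean(all_effects):.2f}")
print(f"mean effect, 'published' only: {statistics.mean(published):.2f}")
```

The mean across all simulated studies sits near the true effect, while the mean among the "published" subset is several-fold larger, because with only 10 subjects per group an observed difference has to be big to reach significance. Note this is an argument against publishing *only the significant* small studies, not against small studies per se, which is consistent with the point above that the LPU standard is still "publishable by your field's standards."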

12 responses so far

  • Busy says:

    I've been arguing since the inception of the web for the more AND less model of publishing:

    We publish a lot more results in a lot more detail, since publication costs are low, yet we have fewer papers to read, since automatic filters such as search engines and topic preferences let us see fewer papers than we do right now under the current push-model publication system.

  • Alma Dzib Goodin says:

    Very interesting article. I think there is another angle. If scientists write for themselves, they don't need so many words, but when you think about readers, you need to explain a little bit more. I don't mean thinking about someone who knows nothing about a topic, but maybe adding another image would help.
    It's true, now we have TweetDeck and we read as much as we can during the day, plus books and all our resources, but short doesn't mean enough. Short can be good for some writers, but not enough for readers.
    At the end, what do we want to communicate?

  • Isis the Scientist says:

    "...any preliminary result that manages to get you a p<0.05 is good to go."

    Yeah dudez! We all know that p=0.0001 is waaaaaayyyyy better than p=0.02! Now go do 10 more subjects!

  • drugmonkey says:

    You already got your troll for the week, Isis. Take your wins and leave the table.....

  • FunkDoctorX says:

    To some extent, I think, as you mention, some of the authors' critiques might be more specific to human psychological research. I mean, c'mon, a paper published with one single study (I read this as one experiment) seems a little on the light side to be considered a publishable unit. With multi-part figures one can easily fit in replications of a key finding along with an extension, even in a brief communication that allows 1-2 figures.

    But on the other end of the spectrum, these papers that have more supplemental figures than figures in the published text are just ridiculous to slog through. Given that these types of papers are really only in CNS and journals just a step down from them, I'm not sure I understand how figure number and novelty/impact have become confounded with one another. It seems to me that these journals should have a line in their "author preparation guidelines" stating that for a manuscript to be considered for publication you must have a supplemental-figure:figure ratio of < 1.

  • Isis the Scientist says:

    You've clearly made this too easy.

  • arrzey says:

    Another benefit of the LPU is that if you have a lab with multiple trainees working on vaguely or not so vaguely related projects, everyone gets a first authorship on something (rather than waiting to combine into one big paper, with the Big Dog as first author) (and we are NOT going to open a discussion on the travesty that is co-first authors). Right now, I'm far more concerned about my postdocs getting published & getting a job, than I am about how *I* look.

  • Eli Rabett says:

    FWIW, an old model was LPUs as things happened, brought together in a VLPU (very large publishable unit) at the end of the project. The LPUs got cited over the course of the study and then disappeared as cites after the VLPU appeared. Refereeing was a lot tighter on the VLPU.

  • bacillus says:

    In the pre-electronic era, many of the journals in my field had "notes" or "brief communications" sections which were ideally suited for LPUs. I've noticed that some of them have dropped this choice.

  • Confounding says:

    The arguments against LPU-type publications based on replications don't carry much weight with me. "We performed a couple experiments that showed the same thing" doesn't actually do much to serve repeatability - they're all done on the same sample, by the same lab, with a high likelihood of the same screwups, accidental or intentional.

    Repeatable science is served by other research teams finding similar effects. If anything, smaller, more detailed study reports make that easier.

  • Drugmonkey says:

    Well, in the behavioral end of things, within lab replication often involves a new group of subjects and at a different time. This can screen out several important sources of experimental bias.

