Seriously? They think this three month study is worth a publication?

Sep 12 2010 Published by under Peer Review, Science Publication

A question has arisen, my friends, yes it has.

When you are reviewing a paper do you ever think to yourself that the authors just haven't done enough work to justify a paper? Is it a criterion for you, implicit or explicit, that a paper must require at least a certain minimum amount of time spent doing the experiments?

Or does it never even cross your mind to think about how long it took to come up with the results? Is your standard based on what is being shown and how cool and important (timely? novel?) it is?

20 responses so far

  • I guess the first question is what journal is it going into? If it's a journal with a strong IF then that might be in the back of my head.

  • Dr Becca says:

    When it's clear something was incredibly labor-intensive, I usually note that. But as for the opposite, how can you ever really know? Manuscripts are the neat and tidy version of what happened--sometimes things are much more work than they appear on the surface. I think as long as there's a story, it's worthy of a paper.

  • In my opinion, the time spent on the experiments shouldn't be among the criteria for publications.

    In fact, it isn't.

    At least I have never seen it either in the instructions given by the editors to me when being a referee or when reading guidelines for submitting a work to a certain journal.

    What is important is the novelty of the experiment, results, interpretation, and the overall scientific quality.

    Sometimes great ideas are very very simple.

  • PerrottiSanchez says:

    All the time, DM, all the time. The worst case scenario sometimes seems as tho a PI says to his or her lab: "Folks, I know what we're gonna do this week: Monday, design experiment; Tuesday, run said experiment; Wednesday, analyze data and write it up; Thursday, submit; Friday, celebrate." Although, perhaps that's just some of the bitterness that comes from those of us who do behavioral neuroscience type research AND insist that things are carried out correctly. For example: without cutting corners and/or removing data points or animals we feel "blemish" our graphs, designing the experiments with proper controls and essential environmental conditions, etc., regardless of how much longer it may take to complete. Thus, it takes us some time to put something out there that we feel confident about. Just sayin...

  • proflikesubstance says:

    It would never cross my mind to give a shit how long it took to get the data. Is it a story or not? Done.

  • drugmonkey says:

    behavioral neuroscience that takes a long time, PerrottiSanchez? Why, I have no idea what you could be talking about...

    /sob

  • Neuro-conservative says:

    I think this is your issue, DM. It somehow seems related to your beef with teh GlamourSkienz, but effort expended is totally irrelevant to the quality of the story.

  • David / Abel says:

    Even if it mattered - which I don't believe it does - it's a fool's game to try and estimate how long a given series of studies took to complete in another laboratory.

  • namnezia says:

    Of course it doesn't matter; all that matters is whether there's sufficient data. In my field, it is not uncommon for people to collaborate on projects over the summer at a specialized facility, and oftentimes these result in a paper. You have no idea how hard or how little someone worked on a project based on the time spent on it; the only thing you have to go on is how complete the story is!

  • Mizumi says:

    A different version of this question comes up when, as happens in my field, someone gets 8 papers out of analyzing one and the same data set, from one batch of subjects. But you might also say it's a good use of research animals.

  • Travis says:

    I can see there being an issue when there isn't enough work to justify publication, but does that really have anything to do with the amount of time it took to collect the data? It seems that there is either enough data or there isn't, but I don't see why the time involved should influence the reviewer's decision.

  • lylebot says:

    HAHAHAHA

    I can churn out a paper's worth of experimental data in an afternoon. I just naturally assume that everyone else in my field can do the same. I'm more suspicious of something that looks like it took a really long time (since that means they probably just got a significant result due to multiple comparisons).

  • Pascale says:

    In general it doesn't matter at all how long the experiments took. I will give "extra points" to something that clearly took a long, long time- for example, a 12-month protocol in animals or a 1-2 year study in people.

  • drugmonkey says:

    "I think this is your issue, DM. It somehow seems related to your beef with teh GlamourSkienz, but effort expended is totally irrelevant to the quality of the story."

    Heh. Actually N-c, I'm not a big fan of the "minimum effort" idea, pretty much for the ideas laid out already by the commentariat. Pilot studies, blind ends, methods development...all of that and more may be the subterranean foundation on which a "three week study" is built. Also, I do think there is a place for genuine novelty or innovation that will catapult a field forward or in a new direction, even if I don't think we need to reify it to the extent we do re: GlamourMags. And sometimes even if there is something reasonably pedestrian and expected ("everyone knows that...") there can be value in actually publishing a good test of the hypothesis if nobody has actually tested it.

  • drugmonkey says:

    Aaah. Interesting Pascale. So even if a study ends up being boring or flawed or something, does the fact that a lot of money and effort was poured into it justify publication?

    The devil is in the details of just how boring or just how flawed but I have to agree with you that I'd also be inclined to give a few extra credit points to something that was laborious, used special resources, etc.

    I think if I had to boil my approach down to the essence, the question for me is really whether I want to see "Figure 2" in print as part of the scientific literature. If I see so much as one solid figure that meets this criterion for me, and there is nothing actually flawed about the rest of the paper that would undercut the finding or generally lead the field astray, I'm three quarters there on the manuscript.

  • Thomas Joseph says:

    Sounds like a spin on the LPU (least publishable unit). To me, the length of time from experiments starting to experiments ending doesn't mean much. Since I wasn't there, I realize I could be grossly underestimating or overestimating the effort expended in any one experiment. Sure, constructing a plasmid could theoretically take less than a week, but perhaps the gene turned out to be lethal when overexpressed from most plasmids, and they tried five different ways to get it whole but non-lethal before settling on the workable method. I really don't need to hear about the five failed attempts (except perhaps to note that this gene cannot seem to be expressed on high-copy-number plasmids).

    For me, the questions are: what is the problem in the field, and what are the objectives of this research paper? Are those objectives adequate, and are they sufficiently addressed in the paper? If they are, then I don't care how long it took to do the research. If they are not, then I also don't care how long it took to do the research. Just my take on it at any rate.

  • Dario Ringach says:

    For me, the amount of time people have spent thinking about the problem is much more important than the amount of time needed to collect the data. Are they bringing in new concepts? New ideas? A new way to test a novel hypothesis? Or are these just more measurements?

    In electrophysiology, unfortunately, the number of neurons needed to publish a paper is inversely proportional to the difficulty of the technique. So I'd say there is data to support the notion that reviewers also consider the time dedicated to data collection as well. Too bad.

  • becca says:

    "So even if a study ends up being boring or flawed or something, does the fact that a lot of money and effort was poured into it justify publication?"
    Yep. IF it saves somebody else the effort of trying it that way!!!

    In many settings, three month experiments will simply NOT answer any unexplored questions in a field. In which case, it's fair to send it back. But if three months is enough to tell that something is a blind alley, or alternatively that your pilot study worked and now you need an NIH grant and not just piddly foundation money, yeah, it's a good time to publish.

  • qaz says:

    Why would anyone care if it took only a little time to do the work? Science quality, impact, and importance are not measured in terms of weeks; they're measured in terms of quality, impact, and importance.

    Personally, one of my most-cited papers took a day to discover, about a week to check the analysis, and about two weeks to write. I know a researcher who would spend most of his year doing clinical work, and about two weeks before the due date for the annual conference that he liked to go to, he would do some experiments, write them up, and send them off. His papers had a huge impact on the field I was in before. (In that engineering field, the conference was the highest-impact publication at the time.) Certainly not every field can do this. If your experiment entails watching the life cycle of a species that lives for a year, it's going to take a year to do your experiment; three months doesn't cut it. But in other fields, the limiting factor is not the experiment itself.

    Now, it may be that what you mean is that you don't think they've done enough work on the project. That doing it right would take about a year and that they're publishing too early. So, in effect, the study took too little time. It may be that this is (as mentioned by Thomas Joseph above) an LPU problem. But in both of those cases, the problem is not the time spent, it's a problem with the quality, impact, and importance.

  • Joe says:

    To build on what Becca said --

    If, three months into their study, they found that something went horribly wrong, and they're warning all other people away from trying it without being prepared, then, well, it might be worth it.

    Just think of how much time's been wasted because someone did a study, got nothing useful out of it, and so never published anything. How many more grad students are going to be subjected to repeating those same experiments all over again, because it's not a well-known dead end?

    We need a place to collect & publish these studies. They might not be ground-breaking, but the fact that they've been done is useful. (And, well, maybe someone might have insight into a factor that wasn't accounted for and resulted in the "bad" result.) Maybe "open notebook" practices will be enough.

    I'm waiting for more issues of Rejecta Mathematica, and think that other fields should have something similar.
