Sneaking orphan data into a review article

It is not uncommon for me to run across a paper that is nominally a "review" article and yet contains data that have not been published anywhere else.
Have you ever seen such a thing? How common is it in your reading?
The next question is how you view the ethics of such a practice.
Ethics?
Well, the first issue is whether the data have been truly peer reviewed, because you may assume it is the intent of the authors that the data be cited as if they were peer-reviewed data like any other.


Personally, I assume that articles tagged as Review in primary research journals undergo the same review process. This is no guarantee, however, that the stringency of review is similar. In some senses it pretty clearly is not. In many cases it is quite obvious that an orphan figure or study has been included, and yet the paper was published. So if the data would not have stood alone as a research article... in some senses they have passed a lower hurdle in the peer review process.
But just because a dataset or figure is not part of a body of work sufficient for publication, this does not mean that there is anything wrong with the data themselves! And after all, the certification of "peer review" is not that the data are earthshakingly cool enough to warrant publication in the most elite of journals. The certification of the peer review process is that there is nothing obviously wrong with the data as such and nothing wildly off-base about the interpretations and conclusions advanced.
So I think it is perfectly fine to sneer about how data snuck into a "review" article wouldn't have passed muster for a stand-alone article. But it is improper to assert that the data have not been peer reviewed if published in a journal which submits the articles tagged as Review to the same formal peer review process as it does for primary research articles.


  • Beaker says:

    I've done this on two occasions, and the circumstances were similar. I used microscopic images that were collected for experiments that led to a publication--but the exact images I used in the review were not the ones that got published. I see nothing wrong with this.
    The images used in the review better illustrated some global point being made in the review. There was no "data set"--there was just a nice picture worth 1000 words. Both reviews underwent peer review, and no reviewer ever commented on the source of the images. It also avoids having to ask the publisher of the primary article for permission to reprint images that fall under their copyright.
    If instead I had shown an unpublished bar graph collected from multiple experiments and including statistical analysis, then perhaps that would be stepping over the line for a review.

  • katherine10 says:

    "Well, the first issue is whether the data have been truly peer reviewed; because you may assume it is the intent of the authors that the data be cited. As if they were peer reviewed data like any other".
    IMO, it comes down to motives. If the motive is for the data to be cited, to appear as if they were peer reviewed, there is a lack of truthfulness, and one can also assume that doing so is scientific misconduct.
    A possible motive for including unpublished data in a nominal review could be to enhance the scientific content of the review and/or to provide complementary or alternative views on a biological problem. It could also be done for "didactic purposes". If this is the case, the author(s) would alert the Editor to the nature of the data so that reviewers are aware of it and can provide their perspective on the appropriateness of including the data in the review.
    Such practice would not only be ethical but also an example of advancing the scientific process in its dual research/teaching aspects.
    Hopefully, I understood your post.

  • A Reader says:

    Journals and editors (including me) often encourage the inclusion of original data in reviews. Unpublished data can, when used properly, make an interesting and compelling point, or with an unexpected observation highlight an area begging for further work. Every review that I've ever written, and every review that I've acted as editor for, has been peer-reviewed. News & Views-type stuff is typically only editor-reviewed.
    When I peer-review a review article, I admit that I am not as stringent as I am with research articles. Generally, I assume a review is going to be published in any case, so I just try to offer suggestions for improvement. Except, admittedly, a few days ago I reviewed a review and was forced to say that it was not suitable for publication as is. That review, incidentally, had 3 panels of original data. I had no problem with the data, except that some of the experimental results had been previously published by others, and those others were not cited. My main problem was that the authors didn't seem to have thought very hard about what they were writing about, and unnecessarily confused things more than clarified.

  • I was about to do this for a review article where I might toss in a figure or two. Instead I decided to gather some more data and break my data out into another paper and try to get that sucker published before my review article. It just doesn't feel right to have original data in my review article so I'm more comfortable with the way it shook out. Besides I can do a little more work and turn one publication into two.

  • cookingwithsolvents says:

    As my name indicates, I'm a chemist so I'll give my perspective from the non-bio realm.
    It really depends. I'll give an example where it has been, and still is, widely accepted to put some additional data into a review article: Accounts of Chemical Research (an ACS journal). Sometimes it is as simple as 'we tried this and couldn't get it to work, so we tried something else and now it's an ACR article!'. Other times the data are more relevant to the "big picture", and I imagine occasionally some data were obtained just for the ACR article, though I doubt that's common (and having new data in an ACR article is semi-rare to begin with).
    I would be far less happy about data in (e.g.) a Chemical Reviews article, which is generally a more thorough review with far, far more references than an Account. I'll give one example off the top of my head where additional data found its way in: "Chemical Redox Agents for Organometallic Chemistry" (DOI: 10.1021/cr940053x) is an EXTREMELY useful review in which a lot of redox potentials are referenced to each other and in different solvents. There is a decent amount of previously unpublished data in it that I am thankful to have access to.

  • Pinus says:

    A bit or two of interesting results that contribute to the author's viewpoints is okay by me for reviews.
    What really pisses me off is when authors make statements followed by (data not shown). I recently had the pleasure of reading one such 'review'. If the authors had actually put in the (data not shown) it would have been nice... but instead they tried to backdoor this data in. Lo and behold, these authors recently published a paper composed of the not-shown data from the review. I am not really sure what to think about the ethics of this kind of maneuver; maybe it is okay to some?... but for damn sure, if I were reviewing this more recent paper, I would have rejected it on the basis of previously published results.

  • Paul W. says:

    I've done this in both the major review articles I've published, but was careful how I did and presented it---very explicitly---and nobody ever objected. In fact, all the feedback I got about it was very positive.
    In both cases, the original contribution was to run some detailed discrete event simulations (based on very precise empirical data) and plot the results in multiple dimensions, so that people could see striking patterns jumping out of actual data, which had not been adequately described before. (Not even close.)
    I got a lot of responses along the lines of "wow, so that's what the data really look like!" and "so that's what people have been arguing about!"
    The major criticism I got was from senior colleagues who thought I was stupid to give away such cool original shit for free in a survey paper---I should have packaged it up as a separate paper or two, and gotten more publications out of it.
    I didn't see anything even a little bit unethical about it, because I was very clear on what I was doing, and the reviewers knew to be on the alert for any sneakiness in support of grinding my own particular axes on the subjects.
    (I was also pretty sure that the reviewers included top folks on both sides of any controversial areas, so it was about as good a way as any to do some air-clearing and de-muddling in a carefully refereed way. None of the reviewers disagreed.)

  • DrugMonkey says:

    I really, really hate (data not shown). Now *that* is an unethical practice.

  • New Asst. Prof. says:

    Perfectly timed post, DM. I'm wrapping up a review article in which, as written, I have inserted two bits of data. One, to me, was an easy yes (a figure summarizing our brief re-analysis of another group's published, publicly available microarray data, cited appropriately). The second has turned into an unpublished observation, a statement to the effect of "We have recently found that X mRNA expression is significantly increased in the Y model (reference) of Z resistance." I'm not bothered by this ethically either; I'm not making any broad claims, merely making a statement that will be corroborated and backed up by mechanistic data in a future paper.

  • TreeFish says:

    It's common in systems and behavioral neuroscience. Dick Thompson was/is famous for it!

  • anonymous says:

    What's common in systems and behavioral neuroscience? "Data not shown"?
    Dick Thompson is an extraordinarily influential guy!

  • I really, really hate (data not shown). Now *that* is an unethical practice.
    I think it depends on what type of data it is. I've done this once in a paper- it was a paper comprising a lot of work and had a lot of good figures. The journal capped total number of figures to 8- so I took the most mundane data, which had been obtained through wildly standard means- data that would not have surprised anyone- and pulled the ol' (data not shown).
    I don't see any ethical issues when the results obtained are expected and obtained through standard, established methods.

  • DrugMonkey says:

    when the results obtained are expected and obtained through standard, established methods.
    Why do we conduct controls, CE?

  • pinus says:

    I have put (data not shown) in papers, mostly for a boring experiment that a reviewer wanted and that yielded a negative result. This 'review' I am referring to had multiple positive and interesting results referred to in such a fashion. With page limits, I understand the need to relegate some things to (data not shown), but in general, interesting positive results should be shown.

  • Neuropop says:

    I have put original data from simulations in a few reviews that I have written. Mostly these were simulations that merely illustrated a point that logically followed from results in previously published papers, so they were peer reviewed in some sense. But here's another gripe: what do you make of statements in a REVIEW that say "We have used blah-blah cool technique to show that xyz happens -- data not shown" and then go on to devote an entire section to the implications of that unseen data? Note that this is a one-line statement about a potentially difficult, time-consuming technique/experiment that could have major implications for the field (and, not to mention, something we have been working on for a long time!).

  • Why do we conduct controls, CE?
    It is probably hard for me to illustrate my point without providing specific examples. This was not a control-type experiment, and it served a different purpose than a control.
    Let's say that I am studying how to prepare guacamole in the most tasty way possible. One criterion, before I begin such an experiment, is that the avocado needs to be ripe. I'd be inclined to write in a manuscript "Avocados were tested for ripeness using the standard finger press method. All avocados used for further experimentation were determined to be ripe at the time of usage (data not shown)."
    Is it really unethical for me not to show actual pictures of me depressing the flesh of the avocado with my finger to *prove* that they were actually ripe? This is not at all the point of the manuscript (which focuses on tasty guacamole), and the data were obtained through entirely standard means. Therefore, the data can ethically be omitted.
    What I'm trying to say is that the legitimacy of including data not shown is very situation-specific, and although it may be unethical in certain scenarios, I do not find it unethical in the majority of the cases in which I encounter such a thing in the journals that I read.

  • dan says:

    I see this frequently, and as a reader it drives me batty. I don't think it unethical. The problem for me is that the details are inevitably missing. It can be impossible to figure out how that one "standard" analysis was done from what is in the paper. And when we can't even figure out how to try to reproduce the results, scientific communication has failed. If there was an online supplement to back it all up, I would feel differently. I've never run across that in a review article.

  • Pascale says:

    Data not shown can be legitimate. For example, in survey studies you will compare respondents with those who did not send back their survey on a number of characteristics. Most papers I have seen reserve precious tables and figures for data that make a point. They will state that "respondents and nonrespondents were similar for age, gender, and faculty level (data not shown)." A completely legitimate, space-saving use.
    Trying to make a paradigm shift (data not shown)? That is bogus.

  • DrugMonkey says:

    CE, there are many parts of a scientific paper for which one could conceivably show the data, but the tradition is to dump it in the Methods with a simple statement. In rodent behavioral pharm, for example: "The animals were male StrainX rats of average weight Y", or more experimentally, "Rats were maintained at 85% of their free-feeding weight", or even more specifically, "all animals acquired [target behavior] in N sessions".
    None of this stuff requires the author to refer to data "not shown"!
    I believe you test the avocado for ripeness. I believe you know how to construct your stupid buffers. I believe you know how to handle the research subjects you use. etc, etc. So does the enterprise of science. If it did not, we'd better get cracking on how to present all the trivial methodological "data" within each and every paper.
    So in a scenario where you feel it necessary to "cite" that there are data...what does this mean? That you would otherwise think it necessary to show these data in the paper? Obviously. So why are you not showing us but using this "data not shown" dodge?
    1) Page limits are just an excuse and point to the way that GlamourMag BS is poisoning the proper communication of science.
    2) The data exist but are not really "pretty enough" and you know that will be a problem for this journal/field/reviewers. Grrrrr.... don't even get me started on how corrosive to the conduct of good science the notion of "pretty data" is.
    3) The data don't actually exist (i.e., don't really stand up to the standard under which you are attempting to deploy them).
    4) Ok, actually you don't think it needs any cite but you kinda read those in other articles so you think you should too... AGGGHHHHH!!!!! Stop the cycle of insanity!!!!! NOW!!!!!!!!!

  • Including otherwise-unpublished primary data in review articles is fucking cheezy. Those who have a habit of doing so are, rightly, perceived as cheezeballs.

  • k10 says:

    CPP,
    If I correctly understand the meaning of cheezeballs, I would think that including unpublished primary data is good. Of course, as long as perception equals reality.

  • Venkat says:

    If someone says 'we found that x happens in y conditions (unpublished observations)', I'm cool with that. Of course, one should not cite this review for that piece of data, but wait for their future pub (if it comes out) to cite it.
    However, that piece of info in the review is still useful, as it may guide other people's thinking/experiments.

  • 1) Page limits are just an excuse and point to the way that GlamourMag BS is poisoning the proper communication of science.
    I agree, DM, but how is your typical author supposed to get around this? You need massive amounts of data to even make it into a top journal, but then the journal won't allow you to present all of the data. It is a catch-22, and I'm not sure what the author is supposed to do about it. Yes, you can present stuff in the Supplementary Info, but I have found, in my experience, that even this is sometimes insufficient.
    So what do you think should give?

  • neuro post-doc says:

    Yes, I admit I've done the "data not shown" thing, but only in instances where there was a sort of non-result that further experimentation fleshed out more appropriately. For example, in a recent publication we tried a technique that several (read: most) people in the field use but that is fraught with limitations. These limitations prevented us from actually making an observation. We then went on to use another method that allowed us to make said observation and then also to examine the observation under experimental manipulations (it goes without saying that this was appropriately controlled). We had to mention the original method because the field would have thrown a shit fit if we hadn't tried it, and then we gave our hypothesis for why, in our hands, it was a failed method. While we could have shown this data, it seemed to us that someone who just picked up the paper and looked at the figures would be confused as to why we had apparently conflicting results (observation with new method vs. no observation with old method), when that was not the case. Thus, in this case, we went with the "data not shown"... although that's a bit beside the point, and I do agree with your opinion, DM, on glamour mags... we did have 3 sup figs in the paper as it was, due to space limitations, and I fear that some very relevant and important data are buried in those sup figs.

  • DrugMonkey says:

    So what do you think should give?
    I think all proper scientists should refuse to publish their work in journals for which the threshold for publication demands more data than can be included in the publication itself.

  • neurolover says:

    I've done "data not shown" -- usually when a reviewer asked for a stupid piece of data that doesn't say anything about the main point of the paper, and is irrelevant to the conclusions (but that they either just want to know, or have some nutty idea about). For example, when they demand to know the breakdown of the tastyness test based on whether the avacados were californian or mexican, and none of your statistics are any different for the subgroups.
    Now, I do believe that when you say "data not shown" anyone should send you a request and you should send them the data.
    But then, I think that published data sets should be shared to the extent that physical and time limits allow, and, even further, that NIH should coordinate and fund the sharing of NIH-funded data.

  • I think all proper scientists should refuse to publish their work in journals for which the threshold for publication demands more data than can be included in the publication itself.

    Do you consider "included in the publication itself" to be satisfied by supplementary materials? Because I am not aware of any journals that have a hard limit on supplementary materials.

  • ginger says:

    I have a (post-doc) colleague whose as-yet-otherwise-unpublished dissertation data were used by her advisor in a review article on which she was not credited as an author. That certainly sucked.
    I'm in epidemiology, and I end up using "data not shown" for sensitivity analyses and re-runs of the same models with eighteen different confounding variables that ultimately all yield the same coefficients for the variables of interest - stuff that supports my main findings but isn't essential. If I can fit it in, I'll toss in a representative example, but if I'm already pushing the journal's limit on the number of tables, I expect my readers to take my word for it. Where a journal offers an option to publish appendices online, I'll put the data there, so that it's at least "data available online at journal.com".

  • DrugMonkey says:

    Do you consider "included in the publication itself" to be satisfied by supplementary materials?
    No. "supplemental materials" are a blight on the body scientifique.

  • A Reader says:

    My easiest response to reviewers ever concerned 'data not shown' for a GlamourMag pub. The main reviewer complaint was the fact that I said 'data not shown' for one point (which was basically along the lines of: Nobody has ever seen evidence for X, and I tried this and this and that and didn't see evidence for it either (data not shown), so...). The reviewers wanted the data not shown included. I'm certain they thought I didn't have it, or that it didn't pass muster. But I did have it, and it was gorgeous data! I was already using it in talks but didn't want to include it in the manuscript due to its size and because I didn't think it necessary. So anyway, per reviewer requests, I stuck in this humongous 3/4-page multi-panel figure, and told the editors sorry about the manuscript size limits but the reviewers wanted it in there. The whole revision took me about 4 hours, but I sat on the paper about a week so the editors didn't think it was too easy.
    My supervisor at the time had a general piece of advice for manuscripts, which was to intentionally leave something important out, so the reviewers will have something obvious to ask for, which you can then easily provide. I guess in theory that's good advice. Trouble is, I never seem to be able to guess what the reviewers will want. My only successful tactic is to basically leave out the discussion (I generally hate discussion sections anyway), and let the reviewers write it. Basically, they say some BS thing in the review, and I 'address the concern' by including a BS paragraph in the discussion. This works surprisingly often, except one reviewer once asked why the manuscript basically had no discussion. No problem -- I added one 😉

  • No. "supplemental materials" are a blight on the body scientifique.

    Why? Who gives a fuck whether the data's in the main body or supplemental material?

  • lamar says:

    what about citing meeting abstracts? how about citing ten-year-old meeting abstracts that never got published(!)?

  • DrugMonkey says:

    Obviously I care, lab partner...
    Meeting Abstracts... meh. Used to be that the SfN abstract was citable, so I'm used to that; haven't thought about it much recently. Definitely annoying to see a cite to an abstract in the older lit and have no way to see the actual data in a subsequent pub. Guess that's where I'd rather the data were snuck into a review than not be available at all...
