Representative Images

Apr 15, 2016 · Uncategorized

New rule: Claims of a "representative" image should have to be supported by submission of 2 better ones that were not included.

It works like this.

Line up your 9 images that were quantified for the real analysis of the outcome, ordered by how closely they appear to follow your desired interpretation of the mean effect.

Your "representative" image is #5. So you should have to prove your claim to have presented a representative image in peer review by providing #8 and #9.

My prediction is that the population of published image data would get a lot uglier, less "clear" and would more accurately reflect reality.
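The proposed rule amounts to a simple selection procedure. A minimal sketch (the image labels and effect-strength scores are hypothetical stand-ins for whatever quantification the study actually uses):

```python
# Sketch of the proposed rule: rank the quantified images by how strongly
# they show the claimed effect, report the middle one as "representative",
# and hand reviewers the two weakest as proof of that claim.
# Image names and scores below are hypothetical.

def representative_with_proof(images, effect_strength):
    """Return (representative image, two weakest images for peer review)."""
    ranked = sorted(images, key=effect_strength, reverse=True)  # strongest first
    mid = len(ranked) // 2           # middle of the ranking, e.g. #5 of 9
    return ranked[mid], ranked[-2:]  # e.g. #8 and #9 of 9

# Nine quantified outcomes, strongest to weakest once sorted.
scores = {"img1": 9.1, "img2": 8.4, "img3": 7.9, "img4": 7.2, "img5": 6.5,
          "img6": 5.8, "img7": 4.9, "img8": 3.1, "img9": 1.2}
rep, proof = representative_with_proof(list(scores), scores.get)
print(rep, proof)  # img5 ['img8', 'img9']
```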

55 responses so far

  • AnonNeuro says:

    Or "image standard deviation" -- you have to submit #3 and #7 to let readers assess variability.

  • joatmon says:

    Do you stick those additional images in the main figure or supplement?

  • Nat says:

    With the relatively small sample size in many biomedical experiments, I think showing all the results isn't unreasonable.

  • drugmonkey says:

    joatmon- for peer review only

  • jsnsndr says:

    I've been toying with the idea of providing all/most/many of our images elsewhere, eg on an insanely extended supplementary blog post devoted to a given paper...

  • joatmon says:

    DM- I don't know how you do that for journals like J Neurosci. You could probably include the extra images in the response to reviewers' critiques, but how do you attach something like that to the first submission? How do you indicate a supplementary figure that shouldn't go into production/online? Also, don't you worry about increasing the burden on the reviewers? I spoke with a reviewing editor recently, who said that most of the time the supplementary materials aren't even looked at by reviewers. Everyone should include only quantitative data rather than qualitative data or anecdotes.

  • drugmonkey says:

    On this last we agree

  • SidVic says:

    "Exemplary" can be substituted for "representative", i.e. a good example.

    If someone wants to cook the books by cherry picking images- they can do it. The idea of posting raw data is just ridiculous. One can job the raw data. Or just produce bar graphs outa thin air (no real way to get caught here.)

    A certain amount of trust is required. Also, a reputation for integrity is paramount. Once someone is shown putting out jobbed data, their career should be over. Consistently producing data that does not repeat should raise eyebrows and negatively impact reputation. PBrookes did a real service to the community with his blog. I am increasingly seeing a lot of data I simply do not believe, although the vast majority is in grant applications, not publications.

  • shrew says:

    The idea of any data being provided "for peer review only" is ridiculous. I don't trust the 3 (+/-1) bozos who got picked to review a paper to be the final arbiters of whether every single piece of information is perfect, just that it passes the smell test.

    (The person best placed to catch every little detail will be the graduate student in another lab 18 months after publication who needs to understand this paper for her project. Let her have a crack at evaluating all these extra supplemental figures you are proposing. She will be the only one with any bandwidth to think about it, while the PI writes grants.)

  • Anon says:

    SidVic- actually, falsification can still be detected through unlikely patterns of variability (or the lack thereof). To DM's point, this would raise the barrier to, and the detectability of, image manipulation, though less so if the extra images are for review only.

  • Grumble says:

    So, DM, is this picking of image #5 the practice that you follow, or that you just preach?

  • Neuropop says:

    My old postdoc advisor had an easy solution to this. If the effect was not visible in raw data in essentially every replicate, then it wasn't real. So no cherry-picking. When time came to report "representative examples", I would pick any old one since they all showed the effect.

  • qaz says:

    I have always worked under the assumption that the purpose of "representational" data is to "get a feel for results" but the REAL results are always quantitative analyses of all the data. Having representational data is particularly important when the analyses are really complicated or depend on a lot of preprocessing. Representational data helps to tell your story, not prove your point.

    At that point, using the "best" or "clearest" example is the right thing to do.

  • Dave says:

    WB haterz

  • NewPI- stunned says:

    Don't forget that oftentimes we hope our readers will remember something from the manuscript for some amount of time (more than minutes, less than years). I am FAR more likely to remember an image than the quantitation of it.

    Show the effect in a picture. Exemplary, representative, whatever. Also, quantify the effect with an indication of reproducibility/error. Present both. Your reader will (should) use the quant data to decide if they believe the effect, but the image will allow for a memory of the work.

  • drugmonkey says:

    That attitude is disturbing NewPI

  • drugmonkey says:

    You too qaz

  • physioprof says:

    When I have shown "representative" examples in papers, I have picked examples whose quantification has been closest to the mean (or median, as appropriate) of the group.
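The selection rule physioprof describes can be sketched in a few lines (the quantified values and image names here are hypothetical): pick the image whose quantification lies closest to the group's mean or median.

```python
# Hedged sketch of "representative = closest to the central tendency".
# The quantified values below are hypothetical.
from statistics import mean, median

def closest_to_center(quantified, center=median):
    """quantified: {image_name: measured_value}; returns the image whose
    value is nearest the chosen center (median by default)."""
    target = center(quantified.values())
    return min(quantified, key=lambda img: abs(quantified[img] - target))

values = {"a": 2.1, "b": 3.0, "c": 3.2, "d": 3.4, "e": 9.8}
print(closest_to_center(values))        # nearest the median (3.2) -> c
print(closest_to_center(values, mean))  # nearest the mean (4.3) -> d
```

Note that the outlier "e" shifts the mean but not the median, so the two centers can nominate different "representative" images.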

  • Rheophile says:

    Using the "best" image seems unwise even if you report quantifications. This is especially true since quantifying image data is usually subject to fifty little decisions along the way, which are not usually well-reported in methods sections.

    Speaking as someone who frequently models biological data, I would love if authors were more careful on both showing their uglier images and quantification. For instance, I was recently looking at a paper where the central claim is quantified, but important secondary conclusions are justified with a single "representative" image. Based on this discussion, I guess I know which parts to trust!

  • Dave says:

    If our WBs are not consistent to the point where we have to pick and choose, we have bigger problems. I would happily publish all films, uncropped and unedited. But let's be honest: nobody reallllly cares about the quality of the work, just where it is published.

  • qaz says:

    Why is it disturbing? The data is in the quantification.

    I would argue that trying to get anything scientific from "representative data" is meaningless since you can't say anything about the variability (since you are effectively presenting an n of 1). So you can't know if your average data example is far from the null hypothesis or not. So "representative data" is only useful for communication of ideas, not quantifiable science.

    Now, if you want to argue that we should have infinite space in supplemental material and we should therefore show all the data, ok. But that's a different statement.

    So what is the real purpose of representative data? In my field, we are on the edge of detection. On a really good day, you might get enough clean data that you can see an effect clearly. But on a typical day, it is hidden in the noise on a trial-by-trial, session-by-session, measure-by-measure level. This means that we have to do a lot of complicated data analysis to determine if the effect is real under the noise. Importantly, complicated does not mean wrong!

    In my field, representative data is not so much "representative of the kind of data we see" as it is "representative of the effect that we think we are trying to measure". Representative data is like a methods section: "this is what we think we are seeing when we do all the averaging". I have found that people often use the "best/representative" data to understand what we are trying to measure with the quantification, because in my field the quantification can get very complicated. But the real data has to be in the quantification. It has to be.

  • Eli Rabett says:

    If you want the reviewers to see all the data, put it in dropbox or equivalent and send the editor the access code for those files. If you really want to be open about it, put the access code into the references.

    With funders increasingly insisting on open data, this is coming anyhow so the issue is how to maintain confidentiality before publication.

  • drugmonkey says:

    If your data are like that qaz then you should never have any so called representative images that you publish. Just show the quantification. How is the one huge outlier helping?

  • NewPI- stunned says:

    I assume the disturbing part for DM is that I said I want the data to be memorable, not that I said present both an image and the quantitation and expect the quant data to drive the science.

    I am not for confusing the reader or obscuring/overselling a result. I am suggesting that good scientific communication should not be ignorant of what is memorable; eschew the psychology of memory and communication only if you do not wish to be clear.

    I am not now, nor will I ever support misrepresenting the data.

  • Draino says:

    Representative data is where the gestalt happens. You, the author, mean to present A. But I, the reader, see B C and D in the parts of your image you neglected to crop.

    This bonus never happens when you, the author, bias what you want to measure and show me a bar graph covered with asterisks and other significance talismans.

  • qaz says:

    Because the outlier is what is actually going on. It's because we are at the edge of detection. So we can show quantification to prove the effect is real, but the best way to see the effect is in the outlier where all our detection equipment finally worked.

    What we've found is that our quantifications are too complicated to be understood without explanation. The example data is part of that explanation.

    In practice, the examples people choose are the ones where the data collection (as measured in other ways) worked particularly well, which are the ones we would expect to show the effect most clearly. Basically, it's a sampling problem. If we are only sampling from a small part, but the daily sample size changes, then the day with the largest, clearest sample is going to show the effect most clearly, even if it is an outlier.

  • drugmonkey says:

    Right. So showing that one is lying.

  • jmz4 says:

    I generally take "representative" to mean: showing a data point closest to the median effect size for the variable being measured (e.g. intensity or co-localization). The point is to show visually what your quantification (usually an average) is representing. This is easily done for most parameters, and I think it is misleading to call it "representative" if you're not doing this. Though I've gotten loads of push back from PIs about this standard.

  • qaz says:

    DM - I fail to see why this procedure is "lying." We are seeing effect X. Effect X is proved to be true because of quantification of all of the data (and often tested on each animal separately to prove that it is robust to outliers). Effect X occurs because of the fact that there are millions of cells doing something, from which we are sampling a few hundred if we are really really good and really really lucky. Effect X is clearer the more cells you have. So why is it lying to show the data set with the most cells that shows effect X most clearly?

    I will also point out that in every case in which people have done this (particularly the point about basing decisions about reality on the quantification, and only using "example data" to explain what effect X is), when the technology catches up in a decade or two, everyone can reliably see data that looks like our best data. So the system works.

  • drugmonkey says:

    jmz4- pushback in which direction? What are your mentors telling you to do?

  • jmz4 says:

    They want the prettiest, cleanest and most striking image. Mostly I end up settling for the 50-75% range, though on one occasion we went with one that wasn't representative at all, and I even had to fight to get the quantification included next to it (PI wanted to stick it in the supplement).

  • dsks says:

    I agree with the "representative" of the mean quantified data. That said, I think there should be a Journal of Beautiful Datums that May or May Not Be Remotely Representative of Anything In Particular. Because I've seen some really lovely stuff out there. Aesthetically right on the money, even if it was just a fluke/undergrad mishap. That sorta stuff should be recognized, imho.

  • drugmonkey says:

    qaz- Do you clearly label your single exemplar as "the best, most optimistic one out of all that go into the quantification"? Or "artist's rendition" or similar?

    Because I guess I could be okay with that as long as there is full disclosure that it is not 'representative' by any reasonable definition of that word.

  • DJMH says:

    As you should know, DM, since you're always talking about how science-in-one-area isn't the same as science-in-another-area, there are also different expectations for what the "representative" data show. You know those expectations because you're in that field of science.

    So in my subfield, most of my data could be plucked at random and would show the same effect as the mean, but there still might be reasons that only 1 of the 10 experiments gave me a good "representative" trace--e.g., the experiment reports two parameters but in several cells, one parameter changes according to the median but another parameter changes in a less typical fashion; or recording quality is poor (while still acceptable) in some other cells. So yes, I show the prettiest traces I can while still trying to hew as close as I can to the median result (and when feasible, I even highlight which cell was the "representative" example on the scatter plot of results).

    What qaz is doing sounds weird to me, but I can't assume that that's not kosher in his subfield. Actually I can't even assume qaz is a he.

  • DJMH says:

    Though I will say I'd like an example of qaz's "when the technology catches up in a decade." Sometimes when technology catches up, it doesn't support earlier findings.

  • physioprof says:

    "I generally take "representative" to mean: showing a data point closest to the median effect size for the variable being measured (e.g. intensity or co-localization). The point is to show visually what your quantification (usually an average) is representing. This is easily done for most parameters, and I think it is misleading to call it "representative" if you're not doing this. Though I've gotten loads of push back from PIs about this standard."

    This is exactly what I already said. "Representative" means "quantitatively representative of the aggregate group results: mean or median".

  • drugmonkey says:

    Can't believe this hasn't come up yet.

    One corrosive effect of showing the best-possible image as "representative" is that it drives a feed-forward expectation that data "should" look like that. So PIs (often to satisfy the demands of reviewers, editors perhaps?) send the poor trainee back to run it over and over again until "the data look pretty enough for publication".

    This seems like a driver of falsification to me.

  • A Salty Scientist says:

    Another vote that representative should be mean/median. Pointed this out to my trainee, who suggested we should show representative images in the body of a manuscript and all of the rest in the supplement. I think that's a good idea.

  • qaz says:

    DM - Saying "we're going to take the one day when we had enough cells to show the effect clearly" is NOT at all "go back and make the data pretty".

    One should never ever ever say "go back and make the data pretty". That's just askin' for trouble. But that is a completely different issue. That's like saying you should never do statistics because some PI might say "You don't have significance yet, go run more samples." (Which, I assume everyone knows is not kosher.)

    In my field, technology has been moving forward reliably over the last decades, increasing the levels of detection. What we've found is that labs who are careful about their data analysis can report an effect on the edge of detection, and then other labs reliably replicate those findings as technology improves (think resolution changes as fMRI goes from 1.5 to 3 to 7 to 10 tesla, or how many cells you can record at one time in vivo as the number of implantable electrodes increases).

  • Eli Rabett says:

    What is wrong with

    "This is the clearest example to demonstrate the effect but quantification upholds the hypothesis"

    Oh wait. . . .

  • jojo says:

    I have a question about this for all you people that use microscopy to "quantify" effects.

    Generally I think it's great that people are actually quantifying data from all images and trying to find an image that is actually representative. Yay.

    My question is whether there are actually images taken of "everything"? And if not who and how is it decided what images are taken (and which are not) such that they ultimately go into the quantification pool?

  • jmz4 says:

    "One corrosive effect of showing the best-possible image to be "representative" is that it drives a feed-forward that data "should" look like that. "
    -Yes, one need only compare similar experiments w/ representative images from the same lab in glam vs non-glam journals to see that this is true. For the Nature paper you run that Western till it looks perfect, wasting everyone's time. For JBC you just pop it in there.

  • drugmonkey says:

    jmz4- You can also compare the replacement for a "mistaken" figure/panel in your average Glam erratum with the stuff included in the original paper.

    jojo- isn't this what the Methods section is for? describing this *accurately*?

    "What we've found is that labs who are careful about their data analysis can report an effect on the edge of detection"

    So what? If it is an effect that you can only pull out via inferential stats or via a mean/variance presentation, then you don't show an allegedly representative image. This is not hard.

  • qaz says:

    I fail to see the harm in a "best" image, given that you have accepted that we have a real effect. Particularly if that image gets our point across more clearly.

  • jmz4 says:

    @CPP yup, just chiming in my 2 cents.

    I think most people would say showing the image is fine, qaz, just as long as you don't call it "representative", which has a more specific meaning to a lot of us (eg CPP, me and salty).

  • physioprof says:

    Sounds to me like what qaz is talking about would be more appropriately referred to as an "illustrative image", rather than a "representative image".

  • qaz says:

    CPP - Sure.

    To be fair, I fail to understand the point of the "representative image" since it doesn't show any variance, and doesn't communicate anything beyond an "illustration" of the point. For those of you who like to show the mean/median/mode image, what advantage does it provide to your paper?

  • drugmonkey says:

    Honesty. As opposed to misleading or lying.

  • jojo says:

    "jojo- isn't this what the Methods section is for? describing this *accurately*?"

    I have no idea since I don't do this sort of stuff myself so I was asking an honest question.

  • drugmonkey says:

    It's what the methods section is for.

  • qaz says:

    Why is showing the mean image "honest"? It shows absolutely nothing about variance. A mean image that is 0.01 units away from zero when you record with an accuracy of +/- 0.00001 units is very different from zero but a mean image that is 100 units away from zero when you record with an accuracy of +/- 200 units is not. However, the image that is 100 units away from zero will likely look very different if you are just showing the mean image.

    I would argue that without statements about accuracy and precision, claiming that the "mean image" is useful in any way at all is dishonest.
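qaz's two cases boil down to comparing the mean against the measurement uncertainty, a crude signal-to-noise check. A sketch using the numbers from the comment:

```python
# Crude signal-to-noise check for qaz's two cases: a mean that is large
# relative to the measurement accuracy is informative; a mean that is
# swamped by the measurement error is not.
def snr(mean_value, accuracy):
    """Ratio of the mean to the measurement uncertainty."""
    return abs(mean_value) / accuracy

print(snr(0.01, 0.00001))  # large ratio: mean is clearly distinct from zero
print(snr(100, 200))       # ratio below 1: mean is buried in measurement error
```

The point being argued: a bare "mean image" shows neither of these ratios, so on its own it cannot tell the reader which case they are looking at.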

  • drugmonkey says:

    Sure thing man. Keep dancing.

  • Geo says:

    Always put your best foot forward and show your cleanest / best data. Quality counts. If you can come up with high quality data, you prove your expertise.

  • drugmonkey says:

    So wrong.
