"Descriptive" and "Hypothesis Testing" are just in the eye of the beholder

From an ASM Forum bit by Casadevall and Fang:

An example of a rejected descriptive manuscript would be a survey of changes in gene expression or cytokine production under a given condition. These manuscripts usually fare poorly in the review process and are assigned low priority on the grounds that they are merely descriptive; some journals categorically reject such manuscripts (B. Bassler, S. Bell, A. Cowman, B. Goldman, D. Holden, V. Miller, T. Pugsley, and B. Simons, Mol. Microbiol. 52: 311–312, 2004). Although survey studies may have some value, their value is greatly enhanced when the data lead to a hypothesis-driven experiment. For example, consider a cytokine expression study in which an increase in a specific inflammatory mediator is inferred to be important because its expression changes during infection. Such an inference cannot be made on correlation alone, since correlation does not necessarily imply a causal relationship. The study might be labeled “descriptive” and assigned low priority. On the other hand, imagine the same study in which the investigators use the initial data to perform a specific experiment to establish that blocking the cytokine has a certain effect while increasing expression of the cytokine has the opposite effect. By manipulating the system, the investigators transform their study from merely descriptive to hypothesis driven. Hence, the problem is not that the study is descriptive per se but rather that there is a preference for studies that provide novel mechanistic insights.

But how do you choose to block the cytokine? Pharmacologically? With gene manipulations? Which cells are generating those cytokines and how do you know that? Huh? Are there other players that regulate the cytokine expression? Wait, have you done the structure of the cytokine interacting with its target?

The point is that there is always some other experiment that really, truly explains the "mechanism". Always.

Suppose some species of laboratory animal (or humans!) are differentially affected by the infection and we happen to know something about differences in that "mediator" between species. Is this getting at "mechanism" or merely descriptive? How about if we modify the relevant infectious microbe? Are we testing other mechanisms of action...or just further describing the phenomenon?

This is why people who natter on with great confidence that they are the arbiters of what is "merely descriptive" and what is "mechanistic" are full of stuff and nonsense. And why they are the very idiots who compliment the Emperor on his fine new Nature publication clothes.

They need to be sent to remedial philosophy of science coursework.

The authors end with:

Descriptive observations play a vital role in scientific progress, particularly during the initial explorations made possible by technological breakthroughs. At its best, descriptive research can illuminate novel phenomena or give rise to novel hypotheses that can in turn be examined by hypothesis-driven research. However, descriptive research by itself is seldom conclusive. Thus, descriptive and hypothesis-driven research should be seen as complementary and iterative (D. B. Kell and S. G. Oliver, Bioessays 26:99–105, 2004). Observation, description, and the formulation and testing of novel hypotheses are all essential to scientific progress. The value of combining these elements is almost indescribable.

They almost get it. I completely agree with the "complementary and iterative" part, as this is the very essence of the "on the shoulders of giants" nature of scientific advance. What they are implying here, however, is that the combining of elements has to happen within the same paper, certainly for the journal Infection and Immunity. This is where they go badly wrong.

27 responses so far

  • Dr Becca says:

    A hypothesis is merely something that's testable, no? Using their example: "My hypothesis is that gene expression and/or cytokine production will change in response to a given condition."

    Either something happens, or nothing happens. As long as there's more than one possible outcome and you have a control group, it's a testable hypothesis. They seem to be confusing the level of detail at which something is investigated with whether or not something is a hypothesis.

  • drugmonkey says:

And there's the thing. All of science is describing what one observed under X, Y and Z conditions. Period.

    There is no "mechanism" proven.

    They seem to be confusing the level of detail at which something is investigated with whether or not something is a hypothesis.

Precisely. And this permeates many aspects of science. It is ridiculous and betrays that the person making the charge hasn't the faintest clue what science is really about. You will note that this tendency is grossly over-expressed in those who are more focused on the IF of the journal in which they publish, and their own personal rep/accomplishments, than they are on making a sustained contribution to group progress in a subfield over the long haul.

  • Grumble says:

    Dr Becca, that's a prediction, not a hypothesis. The hypothesis is something like, "condition X causes cytokine Y to increase." You formulate an experiment (do a manipulation to introduce condition X, then measure cytokine Y) and make a prediction about the outcome based on the hypothesis.

    But yeah, it seems to me you can always come up with a hypothesis to justify "merely descriptive" science. It might be too general for many crusty scientists to stomach, though.

    By the way, putting all this more firmly in neuro-land: there are a lot of electrophysiologists who do absolutely nothing but "descriptive" experiments. One can identify vast and complicated ecosystems of neural activity, and every single piece of evidence linking that activity to the biological process of interest (perception, cognition, emotion, action, you-name-it) is entirely correlative (i.e., "merely descriptive"). Yet there are many such papers in Naturescience.

  • eeke says:

    Any kind of genome-wide analysis could also be considered descriptive. These days, no one calls it "descriptive" anymore. The correct term is "exploratory". We see this stuff all the time in the glamour mags.

  • hn says:

Those of us who work in structural biology or imaging do work that is essentially descriptive. Yes, we want to answer specific questions, but we are really driven by the question, "What does this look like?" Our peers understand this about us, but it can be a tough sell to the hypothesis-driven ideology.

  • Dave says:
    I pay no attention to old dirt bags who go on about "exploratory" or "descriptive" methods as if it is a bad thing. This thinking is outdated and is usually based on a complete misunderstanding of the techniques being used and why.

  • DrugMonkey says:

    How are your grant reviews going? 🙂

  • Dave says:

    Well, considering my SEP contained old-timers who don't hold current NIH funding, and that my grant contains a ton of RNA-Seq and ChIP-Seq, I'm happy with my initial score and may even bag the grant first time round. My use of NGS techniques has always gone over well in reviews, especially in manuscripts.

  • leigh says:

    we all know we like to write up shit that sounds fancier than our initial "dude so what if we did X?" roundtable in the lab...

  • Isis the Scientist says:

    "Suppose some species of laboratory animal (or humans!) ..."

    Humans are animals. Duh. Everybody knows that.

  • Spiny Norman says:

    Well said. You are making me have a crush on you today, DM.

  • Spiny Norman says:

    Hypotheses are basically good for one thing and one thing only: testing to destruction.

    By destroying hypotheses with well-designed tests, we may constrain the range of possible working models that remain to explain how something works.

  • Spiny Norman says:

    DM: "And there's the thing. All of science is describing what one observed under X, Y and Z conditions. Period."

    Here I have to disagree. The intent of science is, at least in part, to derive models that are sufficiently powerful and precise that we may reliably predict phenomena not yet described, or the properties of devices or things not yet analyzed or, in some cases, constructed.

    The way we generally do that is through iterative cycles of observation, model building and hypothesis testing -- the scientific method.

  • Spiny Norman says:

    To amplify a bit further, DM's assertion is essentially a statement of logical positivism. LP was long ago shown to be inadequate both as a methodology for scientific progress and as a description of how scientific progress occurs.

  • DrugMonkey says:

    Biological models suck and the modeling of behaviorism bored the ever living daylights out of me. Screw predictive models. Generate some damn data.

  • Spiny Norman says:

    Fucking stamp collector.

  • Spiny Norman says:

    Crush didn't last long. 😛

  • Jim Thomerson says:

    When I describe a new species, I am implying the hypotheses that it is a separate species, and that it is new. Both these hypotheses can be tested and either supported or falsified. My hope is that someone will care enough to do so.

  • Ferric Fang says:

    Thanks for the thoughtful discussion. I don't believe we are actually in significant disagreement. You might also be interested in a companion essay called "Mechanistic Science" (Infect Immun 77:3517-9, 2009), in which Arturo Casadevall and I make a number of the points raised here in a slightly different way. The principal conclusion of these essays is that the terms "descriptive" and "mechanistic" are relative, imprecise, and frequently misused in a pejorative way that fails to appreciate the value of different types of experiment. As far as whether Infection and Immunity is "badly wrong" in favoring papers that provide explanations or insights over those that simply present large datasets, I would simply say that we can't publish everything. We are reflecting the preferences of our editors, reviewers and readers in trying to choose the papers they find most interesting. Large datasets without a specific hypothesis can certainly still be useful and may well deserve to be published, but perhaps in another journal with different priorities. The bottom line is that categorizing science as "descriptive" or "mechanistic" or "exploratory" or "hypothesis-driven" is probably not all that useful. As Richard Strauss is said to have remarked about music, "I know only two kinds. . . good and bad."

  • DrugMonkey says:

    Thanks for stopping by, FF. I am on board with most of your dissection, but I disagree wholeheartedly with the idea that one should use such arbitrary notions of "mechanism" to screen for any journal. It means there will be pertinent or hott stuff that is categorically ruled out. Perhaps worse, it sets a formula for getting *in* with stuff that may be technically broad but scientifically boring. See: optogenetics in neuroscience. Previously: "phenotyping knockout mouse du jour".

  • Pinko Punko says:

    I think flexibility is always a good thing. I also think that grant proposals should try to generate a mix of stuff, and it really comes down to how the work is framed. The traditional thing that could get hammered by the hypothesis crowd would be genetic screens. If cleverly designed and proposed, and paired with complementary approaches, there should be no problems. Currently, with the list-generating techniques, I think panels are somewhat sophisticated about it, but you have to say what you are going to do with your list besides just glory in all the things you might find, or make the list generation only a portion of the proposed experiments. Though I do admit it is ironic when someone publishes an exceptionally list-heavy paper and cites their NIH funding. Whether the study section is filled with old coots who are antithetical to lists or not, they are paying for it. The goal is to be as savvy as possible about proposing list generation, and to make the case that getting that list of things is actually quite important. Where is our Journal of Biological Lists? This is where those papers should go.

  • Ferric Fang says:

    Just to clarify-- we do not categorically rule out any paper at Infection and Immunity as long as it falls within the scope of the journal. All submitted manuscripts are read by at least one or two editors, and most are submitted for in-depth review. The example cited above is a hypothetical example. In fact, we publish many papers that are "exploratory" or "discovery-driven" research by anyone's definition. Even a pure description of a phenomenon can be publication-worthy if the phenomenon is interesting enough. When most scientists refer to a paper as "merely descriptive," I think they really mean that it is boring-- that it doesn't seem to say anything new or important, or that it mindlessly generates a list without using the information to test an idea or to generate new hypotheses. Such papers are going to be assigned low priority by our reviewers. There are way too many new papers to sort through each year, and most people don't want to waste their time reading uninteresting ones. Today we are fortunate to have very powerful research methods that can quickly generate enormous quantities of data. We have to be careful to use these methods wisely. The passage about surveys of gene expression or cytokine production that you excerpted from our essay is absolutely not a condemnation of exploratory research. Think of it more as friendly advice to young scientists to help them avoid being shot down by the "descriptive" epithet.

    Keep up the great work with your blog.

  • ecologist says:

    There's another aspect that generally gets ignored by those who insist that hypothesis-testing studies are the only way to go. In fact, much of the most important science isn't description or hypothesis testing; it's parameter estimation. I don't just want to know if X has an effect; I want to know how big the effect is. At what rate does it operate? Is it linear or nonlinear? Does it interact with Y and Z? And so on. When I have that kind of information, I'm getting closer to understanding how the system operates, and usually finding myself with a whole additional set of ideas to investigate.
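
    A minimal, purely illustrative sketch of that distinction (the dose-response data, slope, and noise level below are all invented): rather than only asking "does dose have *an* effect?", estimate how big the effect is and how precisely we know it.

```python
import random
import statistics

# Hypothetical dose-response data: 10 replicates at each of 5 doses.
random.seed(42)
doses = [0.0, 1.0, 2.0, 3.0, 4.0] * 10
true_slope, true_intercept, noise_sd = 2.5, 1.0, 1.0
responses = [true_intercept + true_slope * d + random.gauss(0, noise_sd)
             for d in doses]

# Closed-form ordinary least squares for a single predictor.
n = len(doses)
mx, my = statistics.fmean(doses), statistics.fmean(responses)
sxx = sum((x - mx) ** 2 for x in doses)
sxy = sum((x - mx) * (y - my) for x, y in zip(doses, responses))
slope = sxy / sxx
intercept = my - slope * mx

# Residual variance -> standard error of the slope estimate.
resid = [y - (intercept + slope * x) for x, y in zip(doses, responses)]
s2 = sum(r ** 2 for r in resid) / (n - 2)
se_slope = (s2 / sxx) ** 0.5

# Report the effect size with an approximate 95% interval,
# not just a yes/no significance verdict.
ci_lo, ci_hi = slope - 1.96 * se_slope, slope + 1.96 * se_slope
print(f"slope = {slope:.2f} (approx. 95% CI {ci_lo:.2f} to {ci_hi:.2f})")
```

    The point of the sketch: the interval answers "at what rate does it operate?", which a bare p-value never does.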

  • dr24hours says:

    I had a lecture at U Mich this summer wherein the lecturer categorically opposed statistical significance testing.

  • Spiny Norman says:

    ...and it's great to see FF here.

    He probably doesn't remember (or know who I am under the cranky pseudonym) but he once provided life-changing assistance when Spiny was a baby scientist. The epitome of a physician-scientist, Dr. Fang is one of my heroes.

  • Lady Day says:

    '"Research" is such a restrictive term. I feel I've opened up a whole new arena of experimentation.... '
