Striking a balance between negative and positive research directions

The trigger for this post was, I think, some discussion or other of Naomi Oreskes' book "Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming" [Amazon]. There's a video of the author up at Deltoid blog for those who are interested. The book's major thesis appears to be that a small group of scientists applied tactics of sowing doubt about scientific consensus to several socio-political topics. As you know, I don't handle the denialism thing around here. But what I got to thinking about was whether something related to "sowing doubt about scientific consensus" plays a role in our normal daily science lives, whether this role differs by scientific personality and how we decide what balance to strike in our own investigations.
When it comes to your publications and scientific directions, are you a science critic? Or do you devote your energies to finding new things that [might be] true and could not care any less about falsifying or criticizing the consensus of PubMed?


This is one of those dichotomies that is flawed but might serve for didactic purposes, so let's try to keep away from the trite dismissal, eh?
To expand my point a little bit, I see the negative or questioning type of research as being that which is reacting to an existing body of research. I don't mean mere commentary in the Introduction to a paper or even a review article. I mean experimental themes or tracks within your own research program that are motivated by a fundamental critique of other papers. Particularly when there is something approximating a consensus view, whether that be because multiple labs agree or because only a single dominant lab has bothered to work on the topic.
In contrast, much of what we investigate is essentially positive. It may build on prior work but it basically assumes that that prior work is more or less valid. The major goal is to discover new things, not to correct supposedly established findings that we suspect are false.
As you might expect, I see a role for both of these kinds of approaches to science. You might also suspect, quite correctly, that I have a strong tendency for the negative approach. It has almost constantly been a part of my research plans to react to a bit of literature that doesn't make sense to me and see if I can figure out a truthier truth. There have been times when I have actually been advised* by some BigCheez that being so "negative" was a, well, let us just say a bad idea. And make no mistake, it IS a bad idea to go around criticizing existing accepted findings.
First of all, you are going to be wrong or only trivially correct quite a bit of the time. It is much harder, of course, to publish me-too or negative data in that scenario versus an area in which there is no consensus or minimal publications. Second, you are up against some political factors which might make it quite difficult (and costly) to publish things that are viewed as corrections of a BigCheez's papers even if you are right in your initial skepticism. Third, if you want to take it to the programmatic level of investigation it may be difficult to acquire research funds. Arguing "that thing you think you know is actually quite wrong" is far more difficult than arguing "we know nothing about topic X" when it comes to getting grant reviewers on board with your proposals.
I think I've learned, over the years, to strike a balance. At least, that's how it appears in retrospect. I can't say that I've explicitly thought about this balance of negative vs. positive research directions until today.
__
*strongly

5 responses so far

  • zoubl says:

    I thought the point of developing a research program is to go after knowledge gaps. To focus strictly on the negative approach is counter-productive. If your data or interpretation of the data contrasts with the view of the field, I think it's necessary to come up with logical explanations for both views.

  • Beaker says:

    I see this topic as a version of the old "lumpers vs splitters" demarcation about approaches to research. I happen to be a lumper, but I appreciate there are lots of splitters doing good science (you can find some good stuff in their papers if you take the time to sift through all that data). Just as long as those pesky splitters steer clear of my beautiful global theories, which explain everything from 10,000 feet above.

  • Sunflower says:

    If you're going to pick a fight with consensus, you'd better have a reason. Someone Is Wrong On PubMed...not a reason. Or at least not worth it.
    But if your data are dragging you into the fight - not ideology, or self-preservation, or general contrariness - then you might be on to something. Having real data means you've got a system in which the wrongness, whatever it is, can be demonstrated. And as mentioned, having a positive research agenda makes it easier to sell your work to outsiders.
    Also, there's a difference between testing some assumption that people take for granted and wading into an established battle zone. The former may get ugly, if you push your luck, but there could be a real upside. The latter is a guarantee of ugly.

  • bsci says:

    I think there's a fine line between negative research and probing the null hypothesis. Finding gaps in the best models of the day is positive research, but can appear negative at times.
    Also, having not read the book in question, I think there's also a difference between doing negative research for the sake of weakening consensus (and policy relating to that topic) and doing negative research while respecting and understanding the reams of data that made the consensus idea the consensus in the first place. It's sort of like the climate researchers who criticize something in a model, but then find out they're being quote-mined by anti-global-warming people. When that happens, it must make for a really lousy research environment.

  • ex-hedgehog freak says:

    I support everyone's comments about the fine line here. While we all agree that there is value in designing an experiment to address what may NOT be the answer to any scientific question, it doesn't yield as much value to you as an investigator, and certainly not from a publication standpoint.
    What is a more valuable experimental direction, in my opinion, is the design of an experiment that tells you something new, regardless of which direction that answer may go (i.e., does it support my hypothesis or not). I agree these may not always be easy to design. But thinking from the perspective of grad students and postdocs trying to get publications out (a pressure you have addressed in some of your other posts), it is absolutely essential to their professional advancement as scientists that they are mentored in experimental design that yields useful answers, not just 'another dead end'.
    Although this goes against the notion of pure science, which is to work to bolster support for a hypothesis by attempting to provide evidence against it, this is the reality of the research world in which we live.