This originally went up August 13, 2007.
Science has published an elegant posthumous article by Daniel E. Koshland, Jr. entitled "The Cha-Cha-Cha Theory of Scientific Discovery" ... representing the three categories of discovery: Charge, Challenge, and Chance. In brief:
"'Charge' discoveries solve problems that are quite obvious ... 'Challenge' discoveries are a response to an accumulation of facts or concepts that are unexplained by or incongruous with scientific theories of the time ... 'Chance' discoveries are those that are often called serendipitous and which Louis Pasteur felt favored 'the prepared mind.'"
I want to go a little beyond writedit's point, so I'll quote more extensively from the article:
"Charge" discoveries solve problems that are quite obvious--cure heart disease, understand the movement of stars in the sky--but in which the way to solve the problem is not so clear. In these, the scientist is called on, as Nobel laureate Albert Szent-Györgyi put it, "to see what everyone else has seen and think what no one else has thought before." Thus, the movement of stars in the sky and the fall of an apple from a tree were apparent to everyone, but Isaac Newton came up with the concept of gravity to explain it all in one great theory.
This is the wheelhouse of the NIH grant review game. Most applications identify problems that are understandable and obviously important. The applicants then attempt to convince reviewers that they have a brilliant new way to solve the problem which is practically infallible. So far, so good, although we might debate the merits of requiring a "practically infallible" approach to receive a good score.
"Challenge" discoveries are a response to an accumulation of facts or concepts that are unexplained by or incongruous with scientific theories of the time. The discoverer perceives that a new concept or a new theory is required to pull all the phenomena into one coherent whole. Sometimes the discoverer sees the anomalies and also provides the solution. Sometimes many people perceive the anomalies, but they wait for the discoverer to provide a new concept. Those individuals, whom we might call "uncoverers," contribute greatly to science, but it is the individual who proposes the idea explaining all of the anomalies who deserves to be called a discoverer.
This one is a little trickier to grasp; the author identifies Watson and Crick's "base pairing in the DNA double helix" as one exemplar. These applications don't tend to do so well in grant review, mostly because when the applicants propose critical experiments to "pull the phenomena into one coherent whole," the skeptics go mad. First, the theoretical position is attacked. Second, the assumption that the critical experiments can pull the phenomena into a coherent whole is attacked. Finally, each reviewer comes up with endless different experiments that he or she believes will be better than the ones listed. So the "Challenge" grant tends to suffer. I think a related point relevant to grant review is that most times we only find out in the doing: empirical solutions to theoretical problems. Too much of the time, grant review focuses on predicting empirical outcomes ("not the right experiments to prove the point," "theory not right") rather than deciding first whether the phenomena are important, whether the application offers a good approach to an empirical solution (rather than a guarantee), whether the PI can conduct a reasonable empirical program (i.e., make flexible changes based on outcomes), etc.
"Chance" discoveries are those that are often called serendipitous and which Louis Pasteur felt favored "the prepared mind." In this category are the instances of a chance event that the ready mind recognizes as important and then explains to other scientists. This category not only would include Pasteur's discovery of optical activity (D and L isomers), but also W. C. Roentgen's x-rays and Roy Plunkett's Teflon. These scientists saw what no one else had seen or reported and were able to realize its importance.
The NIH grant process does not do at all well with this. The main problem is the obsession with "hypothesis testing" and the universal critique known as "it's a fishing expedition." True, science needs hypothesis testing, and much effort can be wasted if a plan is not focused. But somewhere along the way, scientific culture has forgotten that all science starts with observation. Did I emphasize that enough? OBSERVATION! As in "Hey, here's something cool about the natural world. Let me see if I can figure out if it is really true, how it works, and what that might mean for understanding other cool stuff." Really, isn't this a sufficient description of the "scientific process"? 😛 So sometimes an application proposes some "let's just kinda see what happens" experiments, which draw the ire of reviewers. "Where's the hypothesis?" "It's a fishing expedition!" they cry. I understand the point. I certainly see applications in which the prior published work of the PI and/or the sub-field suggests that such criticism is warranted and necessary, ones in which "let's just see what happens" seems to be the entire research program. However, I would submit that when the track record suggests the PI knows how to test hypotheses, then perhaps a little credit should be extended for the one or two experiments that seem like discovery efforts.
Finally, I note that the author fits the Ten Commandments, the Magna Carta, and the Bill of Rights into the Cha-Cha-Cha framework. So it must be a valid heuristic...