Maria Brumm has a very nice post up at Green Gabbro discussing some of the journalistic obligations that bloggers inherit when they discuss the primary scientific literature. I want to amplify a little on what she said, and then go off on a tangent about the distinction, inherent in the conduct of experimental science itself, between being "correct" and being "interesting".
I agree wholeheartedly with Maria that bloggers who adopt the journalistic voice of factual reporting are obligated to make a reasonable effort at getting their facts correct, and to correct themselves when their ostensibly factual reporting turns out to be incorrect or misleading. In fact, I have taken other ScienceBloggers to task for failures to fulfill this obligation (Hi, Greg!).
On the other hand, of course, expressions of opinion are not subject to this same obligation. And I believe strongly that there is a higher-level obligation for bloggers to clearly distinguish between their factual-reporting voice and their expressing-opinion voice, so as not to mislead their readers. (Among their grotesque stomach-churning Framers-turning-in-their-graves despicable fuckwittitude over the last decade or so, the mainstream media have egregiously failed to even come close to satisfying this obligation, although that is a topic for another day.)
To amplify on something else Maria said: the distinction between bloggers being "correct" and being "interesting" also applies to the conduct of science itself.
It is essential that one's experiments be "correct" in the sense that performing the same experiment in the same way leads to the same result no matter when the experiment is performed or who performs it. In other words, the data need to be valid.
But it is not at all important whether one's interpretation of the data--from the standpoint of posing a hypothesis that is consistent with the data--turns out to be correct. All that matters is that the hypothesis posed be "interesting", in the sense of pointing the way to further illuminating experiments.
I spend a lot of time with my trainees on this distinction, because some of them tend to be so afraid of being "wrong" in their interpretations that they effectively refuse to interpret their data at all, and their hypotheses are nothing more than restatements of the data themselves. This makes it easy to be "correct", but impossible to think creatively about where to go next.
Some tend in the opposite direction, going on flights of fancy that are so unmoored from the data as to result in hypotheses that are also useless in leading to further experiments with a reasonable likelihood of yielding interpretable results.
As an aside, it is absolutely pathetic to see scientists who are emotionally invested in their hypotheses being "proved correct", instead of treating them as tools whose utility lies in leading to further interesting experiments. This results in the embarrassing spectacle of a laboratory posing an interesting, somewhat speculative hypothesis that points clearly to definitive experiments capable of ruling it out, but then never performing those experiments out of fear that the hypothesis will, indeed, be ruled out.
Sometimes scientists dance around their hypothesis like this for years, even decades, never doing the definitive experiments. And other people start to talk: "Maybe they really did the experiment, didn't like the answer, and are suppressing it." It is sad to see scientists seduced into this kind of pernicious delusion and end up the subject of whispered derisive chatter.
Do not fall in love with your hypotheses. Ruling out your hypothesis with a definitive experiment is good!
(I hope that Janet will weigh in on this, as I'm sure she has some interesting thoughts on the topic. And maybe what I'm saying is demented fucking wackaloonery, in which case you'll get to see PhysioProf get smacked around!)