Pattern Recognition

Feb 10 2009. Published under Conduct of Science, Ethics, Scientific Misconduct

The Common Man has been posting on MLB's cheater-pants-o-the-week story and his effort is fanning a slow flame in Your Humble Narrator which was started by PhysioProf. You will recall that PP's post dealt with a seemingly run-of-the-mill paper retraction story in which the authors admitted that there were enough deficiencies in the published work that the only choice was to retract the entire article.
As the story develops I am becoming convinced that there is more to this little scenario than one random postdoc faking the odd control.


True enough, I am cynical when it comes to issues of scientific fraud; I am not some wide-eyed naif here saying "OMG! Can you believe it???". It must also be admitted that the type of science that always seems to get busted for data faking (whether it be fake subject entries in human clinical trials or bench lab shenanigans with blots and gels) is unlikely to really affect me. My areas of scientific interest are relatively cheat-exposé-scandal-retraction free (save that one little incident). So why would I be getting all red about this?
Well, remember my post on people failing to get their grants and facing lab closure? Do you hear the anguish of those who just want a chance to play and feel like that chance will never come?
Well, scientific cheaters profit, and the fraud is by no means a victimless crime (even when we are not talking clinical research and the findings will never cause a mistake that hurts patients).
Recall the Linda Buck retraction and the hapless blamed postdoc, Z. Zou, who it turned out had managed to win an Assistant Professor position? [Unfortunately dude was caught up in the mass layoffs following Hurricane Ike; see comment thread here.] Did you read that coverage writedit had of a fraudster who perpetuated his data fakery for a very long time indeed, moving from grad school to an independent posting?
Getting back to the case at hand, I'm not even sure who we should be looking at but something smells like more rain's a-coming. One commenter alluded to a discussion on "another bbs" which I can but assume is this one, which had been pounding this blog with referrals. That thread seems to be focused on Wen-Ming Chu, previously a postdoc in the Michael Karin (Research Crossroads Grant Profile) lab and now with his own appointment at Brown University (Research Crossroads Grant Profile).
The comment thread after PP's post was sprinkled with a few specific accusations: leads for anyone to go and check for themselves whether the Karin lab, Professor Chu, or random assorted trainees are the source of the stench. I was particularly struck by this comment:

Did you guys notice that in the previous issue of Cell, MK published an erratum for the Budanov and Karin paper for similar shady image processing irregularities. Why does the Budanov and Karin paper get to be corrected but the other paper retracted?

Let me note first that most of the comments have been spot on in tone and strategy. It is fantastic to refer us to the specific figures that one feels are fishy. Let those who are familiar with data of this kind read them for themselves. I like. Way better than just saying "Everyone in the field thinks they are fakers".
But here's the thing. As evidence of chronic data fakery from any one lab or any one scientist starts to accumulate, we must start to view their grants and even faculty appointments as having been won unfairly. You can say "Oh, it is just one (or three) retractions" if you like. Sure. But how many people without jobs have been hearing PhysioProf's pronouncements about what it takes to land the very top jobs? Have read my various carping bits about how competitive CNS publications are? You might also think about the very close and sometimes arbitrary calls in grant scoring and wonder how that one extra CNS paper might tip the balance in someone's favor.
It is not ridiculous to remember that every CNS pub that gets accepted undeservedly takes the place of another, deserving work. Every new hire who gets the offer on the strength of that cool finding or that extra CNS paper takes the place of someone else. Most obviously, every grant that gets awarded to a data faker is one more grant not going to the righteous. It should be obvious that there is no way to fix these shaftings years after the fact when the frauds come to light.
We need to start fixing this problem.
The first school of thought is that we need to do more in terms of ethical education of trainees. You know, that one-day snore-fest with the laughably obvious "scenarios" that you had to endure as a first- or second-year postdoc? What a crock. Don't get me wrong, I think we should have those basic introductions, but we, as a business, need to do much, much more.
The rationale behind the hypothesis that more training will fix the problem is that people somehow do not know what the expectations are with respect to the conduct of science. That individuals are somehow under the illusion that it is okay to simply make up data.
C'mon. That is nonsense.
What is much more likely is that there are systematic features of the environment that encourage some individuals to start down the slippery slope. The thing that I am heating up about is that I think it is high time that some data were collected. Now, as always, the go-to on this discussion is the Nath et al. (2006) paper on retractions. It has some limitations, primarily what I suspect might be a too-credulous acceptance of "unintentional mistakes", but it is most certainly a start.
Here are some other things I'd like to see tracked and analyzed with respect to retractions. Funding source, right down to the NIH Institute or Center if necessary. Funding mechanism (Big Mech versus R01). The journals in which it was published. The local Universities or Institutes. Age and gender and ethnicity of the PI. Scientific domain. Etc.
Soup. to. Nuts.
The reason is that it is all too easy to dismiss sketchy retractions and admissions of fraud on a paper-by-paper basis. It is harder to get a bead on, or a clear view of, systematic practices.
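To make that concrete, here is a minimal sketch of the kind of record such tracking would require; the field names are illustrative inventions, not any existing database.

```python
from dataclasses import dataclass

@dataclass
class RetractionRecord:
    journal: str            # where the paper appeared
    year: int
    notice_type: str        # "retraction", "erratum", or "corrigendum"
    stated_reason: str      # e.g. "image duplication", "unintentional error"
    funding_agency: str     # down to the NIH Institute or Center if possible
    funding_mechanism: str  # e.g. "R01" versus Big Mech
    institution: str
    scientific_domain: str
    pi_demographics: str    # age, gender, ethnicity of the PI, where obtainable

records: list[RetractionRecord] = []  # to be populated from journal notices, ORI findings, etc.
```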
My hypothesis, of course, is that obsessive pursuit of the hawtest and greatest, pursuit of CNS pubs for their own sake, huge labs operating in fast-moving areas where scooping is rampant, overwhelming pressure to be first instead of best, editors who press for rapid return of not-yet-accepted manuscripts while demanding huge amounts of extra work, perceptions that a CNS pub is the categorical difference between being considered for jobs or not (Hi, PP!), etc. all contribute systematic or systemic pressures which lead some people to the conclusion that faking a band or six is the right choice.
Ultimately the individual is to blame*, but come. on. These incidents are a product of more than just an individual with bad ethical grounding. They are a reflection of a certain dismal aspect of scientific culture.
I submit that viewing retraction, erratum and corrigendum patterns in a more comprehensive manner would help us to target the cultural problems and thereby reduce scientific cheating.
__
*But honestly. A GlamourMag submission for the lab? Reviewed and re-reviewed by the multiple authors in draft form even before being submitted? The hawtest and greatest thing that has ever happened to the juniormost one or two authors? And nobody notices duplicated gels and blots that even someone as unfamiliar as myself can see when it is pointed out where to look? Please.
__
Update 12/12/09: The Molecular Philosophy blog has an interesting take on another fraudster, Luk van Parijs, but much is relevant to this case.


After all how is scientific misconduct different from stealing???? The guy fabricates data, publishes it in an awesome journal, gets all the fame and glory of a great researcher. Then he lands a position at MIT, gets grant money from NIH, gets paid to travel to give talks abroad, and lives a pretty comfy life off of the taxpayers' money. All this while other people waste precious time and huge sums of NIH dollars trying to replicate his nonexistent results.


Maybe, just maybe, such super high-profile, super-competitive labs put too much stress on being productive and making hawt science at the expense of integrity. Maybe van Parijs never heard that all that counts is NOT publishing in Nature, but doing honest good science.

66 responses so far

  • an anonymous reader says:

    Obvious fraud should be caught by editors and reviewers. The trick is to be bold about it if need be. Here is how I learned it from an anonymous colleague: Recently, I reviewed a manuscript from a lab generally famous for imaging. Ultimately I thought the conclusions were poorly supported, but there were a few figures that looked vaguely fishy. I mentioned this in my review as a possible technical problem, since obviously one needs to be careful about accusations of fraud. But I was astonished and pleased when the editors sent the final reviews: my anonymous co-reviewer had not only said basically what I said, but had also totally gone to town with quantitative image analysis and built a strong case that the images had been inappropriately fiddled with. It honestly had never occurred to me that such in-depth examination was appropriate. But I was thrilled because it justified my suspicions. I wish I had also had the intellectual cojones to do what my co-reviewer did. I will henceforth. So should you. The paper was not just rejected; the editors said in fairly strong words that they never wanted to see anything similar submitted again.
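A minimal sketch of the kind of quantitative check described above (synthetic data standing in for a real figure, and not the co-reviewer's actual method): regions of an image that are supposed to be independent measurements should not correlate pixel-for-pixel near 1.0.

```python
import numpy as np

rng = np.random.default_rng(42)

def fake_band(shape=(40, 80)):
    """A noisy synthetic band, standing in for a cropped region of a published figure."""
    return rng.normal(loc=100, scale=20, size=shape)

def pixel_correlation(a, b):
    """Pearson correlation between two equally sized image regions."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

band_1 = fake_band()
band_2 = fake_band()                              # genuinely independent measurement
band_3 = band_1 + rng.normal(0, 1, band_1.shape)  # duplicated band, lightly tweaked

print(f"independent bands: r = {pixel_correlation(band_1, band_2):.3f}")  # near 0
print(f"duplicated bands:  r = {pixel_correlation(band_1, band_3):.3f}")  # near 1
# On a real figure the regions would be cropped from the published image file;
# near-perfect correlation between supposedly different lanes is worth flagging.
```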

  • InflammationWorker says:

    Not to be a rumor monger, but from what I hear, this had been all over the Chinese Student BBS for quite some time. It would have been nice had it been more widely discussed in a more open forum. Could have saved a lot of wasted experiments.

  • Mike_F says:

    Obvious fraud can be caught by reviewers and editors, but often it is not. One way to minimise the problem might be to place strong penalties within the system to make sure that offending P.I.'s pay the price. If retractions were automatically communicated to funding agencies, and retractions due to obvious fraud resulted in suspension of funding and a refusal to review grant submissions by the responsible P.I., that might help.

  • an anonymous reader says:

    I hear ya, Mike_F. But how do you separate the honest PIs with merely sloppy oversight from PIs who are in on the shady data manipulation (or pushing for certain results 'no matter what')?
    In either case, it can be argued that the PI shares some of the blame and that grant money might be better spent elsewhere. But with regard to the punishment deserved, it seems sort of like manslaughter vs murder.

  • Mike_F says:

    "...how do you separate the honest PIs with merely sloppy oversight from PIs who are in on the shady data manipulation..."
    Common sense and examining each case individually can go a long way. How about a scale of punishments, from warning/probation for a first or mild offense up to actual withdrawal of funding and refusal to review new applications for repeat or major offenses?

  • BugDoc says:

    It is extremely important that we recognize what it is about the scientific culture that pushes people beyond what they know is honest and acceptable data analysis. What worries me re post #4 is that, as a PI, it seems impossible to review every lane and every reaction for possible fraud or error, as much as I wish I could do so. For example, when running anything in relatively high-throughput format, i.e. 96-well plates or higher, there is no way realistically that PIs can look at all the raw data for all the projects going on in the lab. The fraud described in comment #1 can be (and thankfully was) detected after the fact by image analysis. Some kinds of fraud or carelessness can't be detected that way.

  • Mike_F says:

    Hi BugDoc,
    As a P.I. myself, I completely agree with you that it is impossible to review all the raw data. E.g., one of the projects currently being written up in my lab involves analyses of a few tens of thousands of peptide mass spectra. So in such cases I am a firm believer in both belt and suspenders: all central conclusions in a study should be tested by at least two independent approaches, and by two independent pairs of hands if at all possible. And all this not because I am paranoid about fraud, but simply to catch any potential honest errors or mistakes in-house before writing up a story.

  • Becca says:

    Mike_F-
    belts and suspenders are awesome. Do try not to wear stripes and plaids though (I hate it when people say "we got RT-PCR proving the mRNA is decreased under X condition" and "we got Westerns showing the same thing at the protein level" ... except they don't show the same thing, because RT-PCRs were done with interferon priming and 4 hour stimulation, and Westerns were done without priming and 2 hour stimulation).
    It's really easy for a non-micromanaging PI (i.e., the good kind) to run into this unless intralab communication is unusually great and people know which of their work is likely to be published with who else's work in which papers before the fact (not always the case).
    *grumble grumble*
    (why, yes, I am rereading a paper with exactly this sort of problem... why, yes, I have had to repeat work that represents nothing new and publishable for me just because the PI needed a verification of this sort)

  • Markk says:

    Mike_F - Speaking as a total outsider who has worked in budgets. Your point about duplicating work. Will that fly? Budget wise I mean. If I got 3 or 4 proposals in an area that were similar, but had enough to fund only one and had 20 more areas to fund with money for 19 unless I scrounge, wouldn't I at some point just say I trust a lab or PI and go with the cheap one? As long as government or big private money is funding you and you are aggregated, unfortunately the problem is that there is little incentive to do things in duplicate. In fact wouldn't it be better to fund other people to do similar things to get even more independent results, if that is what you want?

  • Alex says:

    1) The point about training is spot on: Sitting in a training session won't do squat, unless the training session actually focuses on genuinely hard problems germane to your research area. Those sorts of training sessions are going to be hard to put together. But the ones that insult your intelligence by telling you not to fake data will at least cover the asses of administrators when somebody does it anyway.
    2) To be fair to image processors, I suspect that a person who doesn't really understand image processing could go to town with some tools and persuade himself that he's actually highlighting information rather than altering or obscuring it. Doesn't mean he should do it, but some of it is on the fine line between deliberate fraud and stupid sloppiness.

  • an anonymous reader says:

    (I hate it when people say "we got RT-PCR proving the mRNA is decreased under X condition" and "we got Westerns showing the same thing at the protein level" ... except they don't show the same thing, because RT-PCRs were done with interferon priming and 4 hour stimulation, and Westerns were done without priming and 2 hour stimulation).

    I can out-hate you on this one, Becca. Forget about the priming and crap. A more fundamental problem with that comparison is that one method measures mRNA and the other method measures protein -- two completely different things that are not necessarily correlated. A related pet peeve is when people use westerns as controls/validation for in situ staining. They are completely different experiments! The only thing in common is the antibody.
    But, of course, none of this is fraud. Just ignorance. Thus our quandary: detecting PI fraud versus detecting PI ignorance.

  • S. Rivlin says:

    I believe that policing fraud in science at the publishing stage is already too late, even if such fraud is suspected and a red light has been turned on.
    The truth is that, unless a researcher works alone, students, postdocs, co-investigators and collaborators more frequently than not know about or suspect monkey business. Likewise, more frequently than not, these partners do not have the balls to expose a colleague for wrongdoing.
    Research institutions, besides providing the absolutely important and necessary training to all students and faculty, must also offer complete immunity and anonymity to whistleblowers. Unfortunately, more often than not, whistleblowers pay a much higher price for their courageous actions than the fraudster who benefits from her own fraud.
    I strongly believe that most scientists who knowingly commit scientific misconduct are predisposed to such activity and, in many cases, have done it in the past and gotten away with it. Moreover, the bigger the name of the fraudster, the more difficult it is to topple her, and the greater the price the whistleblower pays.
    I have described two such fraudsters in my book, but my experience over the years with many other cases has led me to the above belief.

  • Drugmonkey says:

    Sitting in a training session won't do squat, unless the training session actually focuses on genuinely hard problems germane to your research area.
    Even that isn't enough because the possible options are often bad, worse and unbelievable nightmare. How are those Wisconsin grad students who were profiled in Science a few years back doing, anyway?
    The bottom line for many of these cases is fear for career and the only option that comports with traditional ethical approaches is "Be willing to not ever have a career in this type of science or even any science at all. Be willing to not ever get your PhD."
    I lean toward S. Rivlin, who suggests we need to protect whistleblowing. We need to figure out ways to keep such individuals from paying the career price.

  • Alex says:

    DrugMonkey-
    In the case of misconduct by others, I completely agree with you: A training session won't do enough if the only options are likely to end your career.
    I do think there are some genuinely hard questions that people confront in their own work, though. For instance, what sorts of image processing steps, or noise-reduction steps, cross the line from teasing out subtle features to actually removing or altering information? Or, are there cases where a data point should be excluded because something weird happened in the experiment? Of course, I don't know that the non-scientists in the university's "Office of Compliance and Blah Blah" are the best people to address this.
    Actually, thinking about it further, the best scientific answer to any of these questions is to get the hell away from the gray area rather than figuring out whether you're in the good part of it or the bad part of it. If you aren't sure what your image processing steps are doing, then figure it the hell out rather than running with the stuff that looks good. If weird things happened with a few samples or trials, figure the hell out what happened in those trials, try to reproduce it, and if you can't reproduce it then at least start fresh and get a new, clean data run under conditions that you have carefully controlled and can vouch for.
    So, yeah, I guess that the answer to gray areas is still obvious. But a training session covers administrative asses in case somebody does something that is obviously wrong.

  • neurolover says:

    "Sitting in a training session won't do squat, unless the training session actually focuses on genuinely hard problems germane to your research area.
    Even that isn't enough because the possible options are often bad, worse and unbelievable nightmare."
    I'll go even further: I think we have to taint the team with the misconduct performed within it. That creates a punitive incentive to produce whistleblowing (assuming that we think there's a non-zero probability that the misconduct will be discovered). And the taint has to be imputed to the PI. Sloppiness, honestly, is not an excuse. The PI is responsible for having the appropriate error checking in place. If they do not, they are creating an environment in which misconduct grows (and will be rewarded with CNS papers).
    I don't think I'm wide-eyed about the possibility of misconduct, but I'd like to think it doesn't destroy the integrity of the science (especially as science becomes more and more complex and diffuse).
    Oh, and I have a p(misconduct) index. I haven't figured out the equation completely, but I know that the Reward (if successful) minus the Punishment (if it fails) is in an exponent (R > 0, P < 0).

  • neurolover says:

    "A training session won't do enough if the only options are likely to end your career."
    We're vastly underestimating the situations in which this becomes true. The foreign post-doc, in their 3rd year of their J1, running out of funding, and without the splashy work that might get them a job, or even the help of the PI to write up their work? They're at a decision point where one branch leads to the "end of their career" too.

  • JD says:

    In my field, part of the problem is that separating out bad study design from malice is very, very hard. All options are imperfect and strange results can arise due to bad luck or hidden variables that can't be easily measured. It's a massive problem when you can't do direct replication (is it fraud or a different population?).
    These are not easy questions.

  • jekka says:

    I realize that this is only n=2, but this is the second recent Cell retraction where the Big Name PI is the penultimate author, rather than last. I know it's common to do this in collaborations to support less-prominent PIs, but it does hint that the Big Guys are well aware of sketchy data, and the eventual fallout.
    http://cell.com/abstract/S0092-8674(08)00224-9
    Or maybe it's something in the water in San Diego.

  • S. Rivlin says:

    After almost two decades of confronting scientific misconduct, I'm still amazed how naive scientists are, believing that their peers are different from the rest of the population. The percentage of crooks among scientists is not different from that of the general population. As long as the majority of us continue to reject tougher policing of science and scientists, especially at times when resources are scarce and crooks are working overtime, nothing will change in the dark alleys of research labs.

  • an anonymous reader says:

    Several sensible people here demonstrate ways of thinking that avoid misconduct. The commonality to all these mindsets is that they focus on the goal of getting the shit right. Not publishing. Not getting grants.
    The fundamental problem, as I see it (and which has been alluded to already), is that getting your shit right is rewarded less than simply publishing like a mofo in C/N/S and bringing in craploads of grant money. If we stop rewarding the superficial aspects of science, and start focusing on ways to properly recognize and reward honest-to-God good old fashioned figurin' shit out, then we'll be way ahead with regard to misconduct too. Punishing people for being bad is too little too late. It's never gonna work as long as people are rewarded for bad stuff over good stuff.
    So:
    1) Protect whistleblowers. Better yet, as suggested above, create an environment where whistleblowers are the shiz.
    2) Reward good science over bullshit.
    Problem solved. Thank you. Good night. Send your donations care of DM.

  • Reward good science over bullshit.

    This makes perfect sense in principle. The problem is that distinguishing "good science" from "bullshit" is much too difficult and happens on a time-scale much too long to be a basis for "rewarding" scientists. This is why reward-allocating processes in science rely on heuristics like (1) who did you train with? (2) what journals are your papers in? (3) how much grant money do you have? (4) where have you been invited to talk? (5) what do your peers say about you?

  • whimple says:

    Several sensible people here demonstrate ways of thinking that avoid misconduct. The commonality to all these mindsets is that they focus on the goal of getting the shit right. Not publishing. Not getting grants.
    If you really believe this, you should spare yourself a lot of grief by leaving science right now. Success in science is measured in dollars, and that is NEVER going to change.

  • qaz says:

    Amen Anon#20! And while Our Comrade PP (#21) is of course right that distinguishing good science from bullshit is hard, I don't agree that it's as hard as he thinks. In my experience, the good scientists tend to outlive the bastards, and the simple (superficial) reward-allocating processes tend to diminish in importance over the long term (long meaning >10 years). When stuff doesn't replicate, people figure it out. Sure it takes time, but science tends to get the right answer in the end. (I guess that's the beauty of science: it's not money or power that determines correctness, it's whether you got it right or not.) While C/N/S (why is C in there? It's a limited journal, and many scientists [many neuroscientists for sure] don't read it) and grant money are important to survival, many of the most important papers in a number of fields have been published in much smaller journals. One can have a good career, make good progress, and even make world-changing breakthroughs without chasing C/N/S.
    Whimple #22 - if you're basing your beliefs in science success on getting grant dollars, you need to be in a field where money actually matters (like hedge funds). Grants are only a means to an end. When they become the end, we have all lost.
    I was always taught to double and triple check your results because (at least in my field of behavioral neurophysiology) even accidental misconduct lives with you forever. There are some labs that no one trusts data from. There are others that everyone trusts. I was taught to guard your lab's reputation with everything you've got because that's what makes or breaks you in the end. My belief is that you have to guard your own integrity. Make sure that the stuff you do isn't tainted and... well... outlive the bastards.

  • BugDoc says:

    Alex@#14: "For instance, what sorts of image processing steps, or noise-reduction steps, cross the line from teasing out subtle features to actually removing or altering information? Or, are there cases where a data point should be excluded because something weird happened in the experiment?"
    There are some pretty clear guidelines for addressing these sorts of questions, like what is acceptable image processing. A feature piece in JCB 2004 laid these out nicely: http://jcb.rupress.org/cgi/content/full/166/1/11. Similarly, there are statistical methods for determining which data points are outliers. Many PIs may assume that trainees will have the common sense and integrity not to inappropriately manipulate data, but it's probably a good policy to explicitly state to each person as they join the lab that each scientist is responsible for saving unmanipulated image files or raw data, and to clearly document how images or data are processed for publication. There should then be no excuse of ignorance for not having the raw data available if questions come up. This won't stop people who are determined to make up data, but it will hopefully deter people from "wishful data enhancement" because they think that it might be okay.
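A minimal sketch of one standard outlier screen of the sort alluded to above (the 1.5 × IQR rule, shown with made-up replicate values; the JCB piece itself does not prescribe a particular test):

```python
import numpy as np

values = np.array([4.1, 3.9, 4.3, 4.0, 4.2, 9.8, 3.8])   # hypothetical replicate measurements

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

flagged = values[(values < low) | (values > high)]
print(f"flagged as possible outliers: {flagged}")          # the 9.8 gets flagged
# Flagging is not excluding: the raw value stays in the saved data, and any
# exclusion should be documented along with the rule that was applied.
```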

  • Alex says:

    Bugdoc-
    Thanks for the link. Good article.
    Part of the reason why I regard some of these things as hard questions is because a big part of my current research is the development of new image processing algorithms to answer specific questions with new imaging techniques. For some of these new problems, it's not really clear which image processing approaches are actually going to yield valid information. So for me, a person who excludes everything over a certain threshold may or may not be doing the right thing, and until we know more about these problems we really can't say. (Trust me, this isn't easy. I just published a paper chock full of equations trying to sort out certain issues, and there's another paper in the works with even more equations and a bunch of simulation data.)
    So I'd say that if a person is using established techniques then there are indeed clear guidelines (as you linked to) for what is or isn't an acceptable image processing step. But if a person is working on new techniques, then there may not be clear guidelines, and the only honest approach is to analyze the data several different ways, see what comes out each way, and compare those results with other approaches to try to get to the bottom of it.

  • Anonymous says:

    "In my experience, the good scientists tend to outlive the bastards and the simple (superficial) reward-allocating processes tends to diminish in importance over the long-term (long meaning >10 years)."
    This assumes that the good scientists actually make it into the academy. Otherwise this could be a selection effect: if you are good enough to succeed on pure merit, that is a great long-term advantage. But do note that the "bastards", as you call them, actually have labs and research programs.
    What we really want to know is the effect on science of removing the honest scientist who was triaged (either by the large journals or the granting agency). After all, this person isn't present, so we can't actually observe their outcome.

  • An anonymous reader says:

    There seems to be some unnecessary handwringing here regarding data analysis. Alex: I'm sure you know this, but it doesn't matter whether you are using old techniques or new techniques. You can take data, turn it leftside out, shove it up your butt, spit it sideways, and divide by the square root of pi. It doesn't matter. You can transform data any way you want as many times as you want, as long as you do the exact same transformations for both control and test data sets. Good science is really pretty straightforward. As I said above, the problem is that people are taught less how to do good science these days, and more how to cajole editors and tweak the grant-getting process (cf. this blog).
    Science 101:
    1) Frack with your data all you want, but do the same exact fracking for both control and test data sets. If you don't know what I mean by 'control' and 'test', go back to junior high science class.
    2) Always have a negative control.
    3) Always have a positive control.
    4) Remember that you can only disprove a hypothesis or fail to disprove a hypothesis; your data do not ever 'support' a hypothesis (despite what you read in C/N/S).
    That's pretty much it. Follow those rules and it's tough to go wrong. Some of you may be worried that the rules don't seem to apply to the 'science' you do. In which case you probably aren't really doing science. Descriptive stuff and technology development and processing are all very worthwhile endeavors that I applaud. But they are not necessarily science. Don't get confused. Don't let anyone confuse you. If you are doing science, the rules above should be applicable.
    Again, sorry to be a pedant, but it never hurts to remember the basics. Feel free to have them tattooed on your forearm.
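A toy sketch of rule 1 above (illustrative numbers only): the simplest way to guarantee identical treatment of control and test is to push both through the same function.

```python
import numpy as np

def preprocess(raw, background):
    """One shared pipeline: background subtraction, normalization, log transform."""
    corrected = raw - background
    normalized = corrected / corrected.mean()
    return np.log2(normalized)

background = 50.0                                   # same background estimate for every group
control = np.array([120.0, 135.0, 128.0, 122.0])
test    = np.array([240.0, 255.0, 231.0, 260.0])

control_t = preprocess(control, background)         # identical treatment for control...
test_t    = preprocess(test, background)            # ...and test, by construction
print(control_t.round(2), test_t.round(2))
```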

  • CC says:

    That's pretty much it. Follow those rules and it's tough to go wrong.
    For starters, you forgot to correct for the multiple testing burden from all the "fracking" of data before a p < 0.05 was found...
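A small sketch of CC's point with simulated p-values (Bonferroni shown as the simplest correction; Holm or Benjamini-Hochberg are the usual less conservative alternatives):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 20                            # twenty different ways of "fracking" the data
p_values = rng.uniform(0, 1, m)   # pure noise: no real effect anywhere
p_values[0] = 0.03                # pretend one slice came out nominally 'significant'

alpha = 0.05
print("uncorrected 'discoveries':", int(np.sum(p_values < alpha)))      # at least 1
print("Bonferroni 'discoveries': ", int(np.sum(p_values < alpha / m)))  # almost certainly 0
```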

  • becca says:

    "The problem is that distinguishing "good science" from "bullshit" is much too difficult"
    Spoken as a man fully convinced that his bullshitting is the state of the art.
    *ducks*
    Actual question- Obviously, intent matters to our sense of justice and fairness.
    Obviously, people who manipulate data in ways that present a false picture are bastards.
    But ultimately, given that there are a lot of things one can do with some types of data that are in a grey zone, and given that you can't read intent from a blot band... for the science, for the good of human knowledge, does it really even matter if misconduct is committed? Because even without intent, mistakes will always be made.
    The only solution is to value data that are shown to be replicable (or, at least, explainable) if something is to be trusted. Everything must be verified. Isn't that the point of attempting empirical study of the natural world?

  • DrugMonkey says:

    Well, well, well. Lookee here.

    The Project for Scholarly Integrity, an initiative of the Council of Graduate Schools (CGS), seeks to advance the scope and quality of graduate education in the ethical and responsible conduct of research. Supported by the Office of Research Integrity (ORI), CGS has made awards to seven institutions participating in five projects, each of which is developing and assessing educational models that promote responsible scholarly conduct. This site serves as a tool for sharing ideas developed in these projects and as a clearinghouse of resources relevant to graduate deans and other university administrators, faculty, researchers, and graduate students. The resources on this site address curricular needs across a wide range of topics typically covered in responsible conduct of research (RCR) education and training. The site also addresses broad ethical issues, such as the ethical obligations of universities, as well as strategies for institutionalizing changes in the research environment.

  • Alex says:

    Anonymous Reader (#27)-
    You're absolutely right. However, even if you do all those things, there are still plenty of dangers. A sophisticated data analysis tool in the hands of a poorly trained user can do all sorts of weird things with weak signals, artifacts, and noise. If there's any difference at all (even a very small one) between the control and test data sets, a poorly applied mathematical tool can produce all sorts of results that may not mean a damn thing.
    I had my student generate some simulated data and do something that is basically fancy curve fitting, to test an algorithm that we're developing. When you fit to background noise (negative control), you get nothing (as expected). When you fit to something with a very high signal/noise ratio (positive control) you get the expected result. When you fit to an intermediate signal (matching the likely experimental conditions) you get one parameter value dead-on, and the other one is way off. An unintelligent user could compare the 3 cases, see the expected results in the negative and positive controls, and then put a totally wrong interpretation on the test case.
    The algorithm has its uses, but it also has certain drawbacks. We think that it might be useful in certain cases, but we're busy quantifying its strengths and weaknesses, and getting hard numbers on what does and doesn't work.
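A stripped-down sketch of the kind of simulation Alex describes (a generic two-parameter decay fit, not his actual algorithm): recover known parameters from simulated data at increasing noise levels and watch the estimates degrade.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, amplitude, tau):
    return amplitude * np.exp(-t / tau)

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 200)
true_amplitude, true_tau = 1.0, 1.5

for noise_sd in (0.02, 0.2, 0.5):        # low, moderate, and heavy noise
    y = model(t, true_amplitude, true_tau) + rng.normal(0, noise_sd, t.size)
    popt, pcov = curve_fit(model, t, y, p0=(0.5, 1.0))
    perr = np.sqrt(np.diag(pcov))        # 1-sigma uncertainties from the fit
    print(f"noise={noise_sd}: amplitude={popt[0]:.2f}+/-{perr[0]:.2f}, "
          f"tau={popt[1]:.2f}+/-{perr[1]:.2f}  (truth: 1.00, 1.50)")
# The fit always returns numbers; only comparison against known truth (or honest
# uncertainty estimates) reveals when they should not be believed.
```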

  • an anonymous reader says:

    CC (#28): Too many students in science these days are not mathematically or (especially) statistically trained. The first semester of statistics typically teaches nothing. It's really not until the second year of statistics, or the graduate level, that experimental design and data transformations are properly taught. I am not a statistician, and in fact wholeheartedly subscribe to the philosophy: 'If you need more than minimal statistics to understand your data, then your experimental design sucks.'
    That said, CC, my rules still hold. Frack away with your data all you want. It's OK. If you type 'data transformation' into Google, you'll have lots to read, including this page: http://udel.edu/~mcdonald/stattransform.html
    Now, if you are talking about violating the assumptions of a particular statistical test to get a P value less than 0.05, then go back up to my post above where I talk about the difference between figuring shit out and simply trying to get published.
    Alex (#31): It is interesting that you bring up curve-fitting. My rules still apply. With regard to curve-fitting, however, keep in mind what people usually do:
    1) They painstakingly collect data.
    2) They fit some curve to that data, which may or may not exactly fit.
    3) They base their conclusions on the curve, not the data.
    This is the equivalent of seeing four dollars in your wallet, saying 'Well, that's pretty close to five dollars', and then expecting to buy something that costs $4.99. The problem here is not data manipulations or curve fitting; the problem is that somewhere along the line scientists become convinced that it's OK to substitute imaginary data for the real data.
    Curve fitting can obviously be a useful tool for data simplification or trend extrapolation. But don't be fooled: Your fit is not your data.
    Most often, this sort of curve-fitting bogosity shows up with everyone's favorite fit: the exponential fit. Except true exponential relationships are not as common as people think. But that doesn't stop people from trying to jam their data into the equation, getting some semi-fitty fit, and then launching into all sorts of shenanigans with the exponents.
    So, with regard to this thread, I am more and more convinced that the problem is not so much dishonesty, but rather ignorance. Thanks to this thread, I am going to start working on a book for students that will be called something like 'How to do science: A beginner's guide' or something. There are some publishers bugging me for something lately anyway, and I know one prominent publisher that loves shit like this. Feel free to suggest better titles, and chapters.
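A quick illustration of "your fit is not your data" (invented numbers): data generated from a power law, fit with everyone's favorite exponential, looks passable over the measured range and then extrapolates badly.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(x, a, k):
    return a * np.exp(k * x)

x = np.linspace(1, 5, 30)
y = 2.0 * x**1.5                 # the true relationship is a power law, not an exponential

popt, _ = curve_fit(exponential, x, y, p0=(1.0, 0.5))
residual_sd = np.std(y - exponential(x, *popt))
print(f"residual SD over the measured range: {residual_sd:.2f}  (a 'semi-fitty' fit)")

print(f"true value at x = 10:              {2.0 * 10**1.5:.0f}")
print(f"exponential fit's value at x = 10: {exponential(10, *popt):.0f}")  # wildly off
```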

  • Science 101:
    1) Frack with your data all you want, but do the same exact fracking for both control and test data sets. If you don't know what I mean by 'control' and 'test', go back to junior high science class.
    2) Always have a negative control.
    3) Always have a positive control.
    4) Remember that you can only disprove a hypothesis or fail to disprove a hypothesis; your data do not ever 'support' a hypothesis (despite what you read in C/N/S).
    That's pretty much it. Follow those rules and it's tough to go wrong.

    That's all good, and definitely helps avoid a lot of trouble. But what it ignores is the very thorny issue of excluding from analysis "bad data" and "failed experiments", which frequently require judgment calls and cannot be decided on the basis of mechanical application of explicit rules. This is frequently the case in physiological measurements that have to be performed one at a time, thus often precluding internal negative and positive controls.

  • DSKS says:

    "This is frequently the case in physiological measurements that have to be performed one at a time, thus often precluding internal negative and positive controls."
    True dat. We're currently wrestling with the problem of an appropriate experimental design (or should I say a least inappropriate one?) for precisely these reasons.
    That said, in spirit, Ze Rulez laid out by AnAnonymousReader are sound as a pound.

  • neurolover says:

    "In my field, part of the problem is that separating out bad study design from malice is very, very hard. All options are imperfect and strange results can arise due to bad luck or hidden variables that can't be easily measured. It's a massive problem when you can't do direct replication (is it fraud or a different population?)."
    Me too, JD. And that realization has gotten a fair number of people to believe that those who publish in the high-profile journals are the ones who don't obsess about the gray areas, so that there's an active selection in favor not of the dishonest, but of those willing to skate on thin ice and make strong claims while ignoring potential caveats and confounds. Yeah, replication is supposed to resolve the problems, but replication is rare.

  • Alex says:

    Anonymous Reader-
    Again, I completely agree that you can't just do a curve fit and run with it. However, that doesn't change the fact that there are hard questions, and it isn't always easy to figure out what analysis technique is the best. Applying the wrong technique, or applying the correct technique but interpreting it incorrectly, can lead to all sorts of erroneous conclusions. This sort of fracking with the data isn't really deliberate fraud, but it is bad science.
    And the mathematical issues in my technique are turning out to be non-trivial.

  • an anonymous reader says:

    Comrade (#33): That's a good point for which I don't have a good answer. I definitely need to give that some thought for the book. If you have suggestions, or anyone else wants to chime in with regard to this issue, I'd love to hear the ideas. My gut reaction is that data can be thrown out for any reason unrelated to the value of the data itself. For example, you can throw out data if the subject died, or you have independent evidence that a reagent was questionable, or a piece of equipment malfunctioned. But you can't throw out a data point simply because it was 'weird', even if you assume it is weird because the subject died or a reagent might be questionable or the equipment might have malfunctioned. And you have to stick with your rules. If you think a reagent was bad enough to screw up one data point, then you must assume it was bad enough to screw up every data point with which it is associated, whether you like those data or not. (See the sketch at the end of this comment.) Regardless, I agree that in practice these can be tough calls. But I am unwilling to accept that we can't come up with some helpful guidelines.
    On another note related to the original thread topic, I noticed that my emails from Health & Human Services' Office of Research Integrity (ORI) have started including the names of guilty people right in the email. Previously, you had to click the link to read the names and sordid details. I think it's a good move by ORI to more publicly shame these people. If you get these emails, and also like this new policy, reply to their email* and let them know. I did.
    I also included in my email a suggestion that whistleblowers be featured as heroes. You may also want to include a suggestion along these lines in your email. As I said above, I think this will also help promote a culture where misconduct is not acceptable.
    *Depending on your email program and server, simply hitting 'reply' may stick in a bogus address. To be safe, reply to: AskORI@hhs.gov If you don't get emails from ORI, and want to, or if you want the info without getting on an email list, go to their website: http://ori.hhs.gov/
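The exclusion rule suggested at the top of this comment, sketched as code (hypothetical samples; the point is that exclusion depends only on documented problems, never on the measured value itself):

```python
samples = [
    {"id": "s1", "reagent_lot": "lotA", "note": "",             "value": 1.2},
    {"id": "s2", "reagent_lot": "lotA", "note": "subject died", "value": 9.7},
    {"id": "s3", "reagent_lot": "lotB", "note": "",             "value": 1.1},
    {"id": "s4", "reagent_lot": "lotB", "note": "",             "value": 5.4},  # looks 'weird'
    {"id": "s5", "reagent_lot": "lotB", "note": "",             "value": 1.3},
]

flagged_lots = {"lotA"}   # independent evidence that this reagent lot was compromised

def excluded(sample):
    """Exclusion depends only on documented problems, never on the measured value."""
    return sample["reagent_lot"] in flagged_lots or sample["note"] != ""

kept = [s for s in samples if not excluded(s)]
print([s["id"] for s in kept])   # ['s3', 's4', 's5']: s4 stays despite its odd value,
                                 # and every lotA sample goes, liked or not
```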

  • DSKS says:

    "Yeah, replication is supposed to resolve the problems, but replication is rare."
    That's a topic worthy of a discussion all of its own, right there.

  • an anonymous reader says:

    Neurolover (#35) and DSKS (#38): Replication. Hmmmm. Yea. Maybe I need another Science 101 rule:
    5) Assume that some factors affecting whatever you want to measure are out of your control. Measure the variability that arises because of this, and then control for it.
    There are, of course, two ways to control for variation: a) better experimental design, and/or b) statistics. But there is only one way to measure variation, and that is through replication.
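A small sketch of rule 5 with made-up replicate measurements: repeat the measurement, then quantify the spread before judging any difference between conditions against it.

```python
import numpy as np

# Hypothetical repeat measurements of the same quantity on different days.
replicates = np.array([0.82, 0.91, 0.78, 1.05, 0.88, 0.95])

mean = replicates.mean()
sd = replicates.std(ddof=1)            # sample standard deviation
sem = sd / np.sqrt(replicates.size)    # standard error of the mean

print(f"mean = {mean:.2f}, SD = {sd:.2f}, SEM = {sem:.2f}, n = {replicates.size}")
# Only once this spread is known can a difference between conditions be judged
# against it, whether by better design, by statistics, or both.
```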

  • neurolover says:

    So, an anonymous reader, design an experiment for us based on all the rules, to study the question of whether prisoner releases result in additional crime, the subject of a recent freakonomics column.
    http://freakonomics.blogs.nytimes.com/2009/02/10/the-great-california-prison-experiment/

  • JD says:

    RE: #39 and #40: One thing to consider is that there is a massive divide between "bench science" and "human subjects research". Rules of ethics and practical considerations often make this type of research extremely complicated to do. But, on the other hand, proving that results from bench science are applicable to actual people is key.
    Now you could try and draw a distinction between types of science here, but these debates tend to result in everybody being a physicist. 🙂 I've been in physics, and even in the cleanest of fields errors can be hard to find. We once had a result that looked like we had found something really cool, and it was the subject of hot debate. Fortunately, we checked long enough that an error was discovered in the coding of the analysis software before anything was published. More annoyingly, several control materials showed correct results despite this bug. So even bench science can have these sorts of strange results, and, when you are in a competitive race, it strains nerves on all fronts to start the process of questioning the tools.
    But in human subjects you have different people each time and often quite distinct populations. You can't breed "people lines" the way you can "mouse lines". It is a very complicated problem and means that it can be very hard to interpret discordant results between careful experiments.

  • qaz says:

    Anon #37 - the problem with throwing out the data for any reason is that (at least in most experimental science) there are lots of reasons something didn't go perfectly. The problem is determining when to throw something out. Not why you threw it out. The danger is that when the data doesn't fit one's preconceived notions, one often starts looking for "why not". But if it does fit the preconceived notions, one often says "ok". I'm not suggesting any fraud here. I'm just saying that it's not so clear cut.
    A great example is the charge of the electron. (See the quote from Feynman on the wiki page on the oil-drop experiment.)

  • an anonymous reader says:

    Freakonomics is cool, Neurolover, but it is not necessarily science. Read the second-to-last paragraph of my post #27 again.
    But let's say you actually want to know 'whether prisoner releases result in additional crime', and are some sort of crazy governor with the wherewithal to do the experiment. Easy peasy. Just like seeing whether MDMA causes an increase in striatal neuron death, or whatever.
    1) Define 'additional crime'. Presumably, this means any amount of crime significantly above the 'normal' level of crime. So you measure 'normal' amount of crime, including variation. That's what we're going to compare the levels of crime in each experimental condition to. Note that you specified that we are looking for additional crime, so we can use a one-tailed t-test (assuming a t-test applies; it may not). If there is a drop in crime, you are hosed and won't 'officially' detect it. But you came up with the hypothesis, not me.
    2) Now we need to come up with our experimental groups. Ideally, we have several identical regions of interest into which we can release our prisoners. But that's obviously not going to happen. So we are going to have to do things sequentially. This means that our 'normal' amount of crime may be drifting. But that's OK, we can measure and account for that.
    Anyway, here are the experimental groups:
    1) No prisoner release. This is our negative control.
    2) Go out and commit crimes or hire nonprisoners to do so. This is our positive control to make sure, among other things, that an increase in crime would actually be detected.
    3) The prisoner release. This is our test.
    Then you measure crime rate and see if conditions #2 and #3 were significantly above 'normal' crime rate while condition #1 was not. Per my rule #1, above, you can measure crime rate any way you want but you have to do it exactly the same way for all three groups. And per my rule #4, the null hypothesis is that prisoner release makes crime rates stay the same. Keep that in mind. If you want it the other way around you need to have some sort of a priori hypothesis about the levels to which crime levels might rise. And per my new rule #5, you need to do this all over again in a bunch of states or cities, at different times of year, etc to be sure.
    Make sense?
    Now I'm gonna go read the freakonomics article you cite and see what Steve says. I love his stuff, even if it isn't science.
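A toy version of the comparison described above (invented monthly crime counts, not real data), using the one-tailed test the design calls for:

```python
import numpy as np
from scipy import stats

baseline = np.array([102, 97, 110, 105, 99, 108, 101, 104])    # 'normal' monthly crime counts
release  = np.array([111, 118, 109, 123, 115, 120, 117, 112])  # months after the release

# One-tailed: is crime during the release period greater than baseline?
t_stat, p = stats.ttest_ind(release, baseline, alternative="greater")
print(f"t = {t_stat:.2f}, one-tailed p = {p:.4f}")
# A drop in crime would not be 'officially' detected by this one-tailed design,
# exactly as noted above. (The alternative= argument needs scipy >= 1.6.)
```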

  • JD says:

    "Freakonomics is cool, Neurolover, but it is not necessarily science. Read the second-to-last paragraph of my post #27 again."
    This helps for some areas of biomedical research, but it fixes the problem by defining the areas of study that have these issues as being outside the field of science.
    I like Karl Popper's definition:
    "One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability."
    There are many areas of biomedical research that fit this definition of science but for which your formulation has issues. For example, in post #43, your positive controls would never pass ethical muster (and if you can't pass ethics, you can't be in a journal either). But that doesn't mean that you can't test this hypothesis and try to disprove it.
    Now, I agree, that in areas where it is possible to do tight experimental control it should be utterly mandatory. Where it is not the weaknesses need to be highlighted and we need to do our best. In the end, I want to apply biomedical results to improving health and the last step involves people.
    But hard doesn't mean impossible.
    On the other hand, I sometimes think we should have color codes on human subjects research (like we do on chemicals) indicating which elements of good practice could not be applied to that area of research due to underlying limitations.
    🙂

  • an anonymous reader says:

    ""One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability."
    Exactly. If my rules don't apply, then these attributes don't apply. It ain't science.
    You can't just say something is science because you want it to be. That's the problem here -- too many people doing intentional or unintentional crap they call science but which isn't really science.
    Arguing that my Science 101 rules don't apply is like trying to call a truck a kangaroo and then saying I don't know what a truck is because none of the trucks I point to are hopping.

  • DrugMonkey says:

    Why do I suspect that people who are constantly on about what "is science", who insist that it comport with some high-minded ideals, and who throw around sneers at "descriptive" work and the like, have trouble convincing anyone outside their bunny hopper circle that their preferred work is actually interesting or relevant to anything?

  • Andre says:

    You can't just say something is science because you want it to be. That's the problem here -- too many people doing intentional or unintentional crap they call science but which isn't really science.

    I would say the opposite is equally likely to apply: you need to be careful what you exclude from capital-S Science because it doesn't fit the particular standards of your field. That list excludes some things I think most scientists would want to include. What about astrophysics? We can't (yet) manipulate the stars, but I still want to include studying them systematically as part of science. What was the appropriate negative control when Newton was working out universal gravitation? Drop a non-gravitating apple to confirm it doesn't fall to Earth? Or in biology, were Watson and Crick doing science? I think it's fair to say that Franklin's X-ray data "supported" their hypothesis* that DNA forms a double helix.
    *If you want to call it a hypothesis. Perhaps their research was "merely descriptive?"

  • Andre says:

    DM in #46, right on!

  • whimple says:

    Yeah, replication is supposed to resolve the problems, but replication is rare.
    Replication happens all the time; it is the reporting of the replication that is rare.
    Case study: we recently got two cell lines from another lab. The lines are: 1) a line that is well known to be innately missing protein X, but stably complemented with an expression vector for protein X; 2) the negative control: the same line stably transfected with the empty expression vector. There is a complicated-to-do functional assay to test the complementation, or you can do a western blot and look for expression of protein X in the complemented line but not the vector control. We have tried very hard to detect protein X in the complemented line, but have never seen it there. Sounds bad, right? We were worried, so we went and did the complicated functional assay. Lo and behold, the complemented line in which we can't detect the protein by western is in fact functionally complemented. We are also certain that the two lines are in fact derived from the same parental line. So it's all good. We're using an experiment done with these lines in a paper we're submitting. We're not going to come out and say, "Check it out! We (mostly) replicated this data!", but we did nonetheless.
    A more complicated question that I'm glad we don't have to answer would be what would we have done when we replicated the experiment if we had NOT been able to replicate the functional result. I'm not sure you would have seen that formally reported either, although you probably would have heard about it over a few brewskis at the next meeting.

  • an ex-anonymous reader says:

    I really should not be rising to the bait, but...
    DM (#46). I am not sneering at descriptive anything. Descriptive stuff is awesome. In fact, I'd go so far as to argue that if we had to make a choice between description and [what I define as] science, we should choose description. I never said experimental science was the only truly worthwhile technical or philosophical human endeavor. Don't put words in my mouth. Or typing under my fingers, as it were.
    Saying science is hard and fuzzy does not generate advice on how to better figure shit out. Which, remember, is the whole point. It also doesn't help define what is and is not acceptable scientific practice. Which is what this thread began by discussing. In short, populist whiny appeals for inclusion do nothing very useful at all. So stop it. You can do better. You usually do.
    Whimple (#49): Maybe I am misinterpreting what you are describing, but I'm not sure I would call what you did an example of replication. Rather, it sounds like you were simply trying to verify that your experimental conditions were what you thought. The only 'experiment' you seem to have described is the "complicated functional assay", presumably designed to test whether or not your expression vector expressed. It sounds like it did, N=1. Now do it again to make sure.
    Or are you saying that your data are consistent with the other lab's conclusion that the expression vector expresses? I'm still not sure that's replication, unless you are basically copying what they did.
    The way I am using 'replication', I am talking about measuring variation. I think you might be using 'replication' in a different way. That's fine, but it speaks to the problem that there is very little consistent training in science 101, even among practicing scientists. We should be able to talk about this stuff without disagreements about definitions (what is science? what is replication?) or confusion.

  • DSKS says:

    "I think it's fair to say that Franklin's X-ray data "supported" their hypothesis* that DNA forms a double helix."
    You could say that, but it wouldn't be helpful, because every cause-effect relationship can be described by a multitude of models, all but one of which will be incorrect with respect to the actual mechanism (some can be outrageously incorrect and still make accurate predictions). What was helpful about Franklin's data was not that it "supported" the double helix hypothesis, but that it falsified a whole slew of alternative hypotheses.

  • Alex says:

    There are a lot of very strict statements made by all sorts of people about what is or isn't the One Right Way to Do Science. However, not every question that can be asked about nature is amenable to the most direct application of a very strict set of rules. Now, that's not to say that less rigorous investigations should be given the exact same weight as, say, a very direct observation. Plenty of things are only amenable to difficult, indirect observations, and some observations will rule out fewer explanations than others. That's why all scientific knowledge is provisional, subject to constant testing. That's why all statements about nature have lots of caveats tucked in somewhere. This is one reason why in physics you will see even long-standing theories subject to new experimental tests in new situations or by more direct methods, and those tests will be published in Physical Review Letters (our version of Cell, basically).
    Take astrophysics: You can't do a controlled experiment there. You can only observe, and compare those observations with the predictions of different theories. Or take a lot of sciences with historical components, like geology, paleontology, cosmology, etc. Again, you can only do observations, many of those observations are highly indirect, and everything has to be compared with a model.
    Now, I'd like to test a hypothesis of my own. I have observed that in many (but certainly not all) cases of people expounding on very strict sets of rules for what can or can't be called science (especially if those rules are so strict that a lot of investigations being done aren't really science), the people making these statements have all sorts of interesting things to say about one or more of the following:
    1) climate science
    2) evolutionary biology
    3) cosmology
    4) geology
    Does the anonymous reader have any thoughts to offer on those areas of science?

  • DrugMonkey says:

    5. Psychology
    6. Sociology

  • Andre says:

    DSKS, I would agree with you in a lot of cases, but the reason I chose X-ray crystallography was that there's a correspondence between the data and the answer (although perhaps I chose badly since I suppose that wasn't as much the case in Franklin's time). It's reasonable to think of X-ray data as supporting a particular structure. If I open a box and see an egg, that supports the conclusion that there's an egg in the box. I would say that's a useful way to phrase it as opposed to speaking of falsifying a gazillion hypotheses about what else could have been in the box. I didn't (and don't) mean this in the strictest possible philosophy-of-science sense, but if I read a paper that talks about supporting a hypothesis with certain kinds of data I don't immediately think "that person doesn't understand science 101." (although, admittedly, I tend to favour the softer statement "data X is consistent with model Y.")

  • Modern macromolecular crystallography is nothing like Watson-Crick-Franklin. Protein crystallographers don't "exclude alternative structural hypotheses". Rather, they seek a structural model that best fits the primary sequence of the protein, the crystallographic diffraction pattern, and any other structural constraints available from other experiments, such as symmetry, multimerism, etc.
    In relation to "what is science", damn what a boring useless discussion.

  • S. Rivlin says:

    If I may: as I read the different comments here, my feeling is that several commenters are trying to use uncertainty in the reading and interpretation of data as a justification for what looks like scientific misconduct. Unfortunately, this is exactly what the fraudsters among us rely on when they knowingly plan and then commit their unethical acts.
    Moreover, again from my own experience, these fraudsters will pretend to be the real soldiers in the fight against scientific misconduct. For instance, the chairman who committed plagiarism as described in my book sat in the front row of a presentation about ways to battle scientific misconduct among graduate students, a presentation given at an annual conference of chairs of life sciences departments. In the Q & A session afterwards, he specifically made a comment implying that, in his department, efforts to combat the problem are not always successful due to the lack of university-wide policies on the matter. The bastard knew that the proceedings of the conference, including the presentation and the Q & A, would be published and that his name and his comment would appear there.

  • an anonymous reader says:

    Regarding #52 and #53:
    I have all sorts of interesting things to say about the scientific method. (At least, I hope you're finding them interesting. Book sales will suck if this isn't interesting -- or at least useful).
    If you think you're going to trick me into spouting idiotically about the current state of knowledge in fields not my own, then you're wrong. Science can be applied to many sorts of questions in many different areas of interest. It's a method, not a subject matter.
    If subject matter defined fields, we'd all still be wondering how animus worked instead of using biochemistry.

  • Alex says:

    Fair enough, anonymous. I misread you. Most of the online lectures I get regarding the scientific method are from creationists and global warming denialists. I jumped to unsupported conclusions.

  • [quote]It's a method, not a subject matter.[/quote]
    Actually, I am convinced it's neither. It is an attitude.

  • qaz says:

    Anonymous Reader #57, please make sure you get a chance to read "Just a Theory" by Moti Ben-Ari before you set out to write your book. [I have no relation to the book. I just think it's a great book on this topic.] I think that simple hypothesis-disproof Popperism is a nice goal, but it has very little to do with actual scientific progress. While it is true that one can only disprove theories, in practice the acceptance or rejection of theories by the scientific community ends up having very little to do with disproof and much more to do with explainability. But now we're talking about the Sociology of Science, which is a very different beast from where we started. I agree with S. Rivlin #56. We need to separate arguments about proper scientific method, and the difficulty of ensuring there are no mistakes in our own data, from outright fraud.
    The real thing that ticks me off is that these frauds are so obvious. When I was teaching computer science, I would occasionally see someone turn in someone else's program with the other person's name still in the comments. I mean, if you're going to cheat, at least show some intelligence about it!

  • S. Rivlin says:

    qaz,
    Don't fool yourself into believing that the fraudsters are not intelligent and smart. Those who get away with their deeds are smarter than the majority of us. These are the real criminals of science, and their sophistication is ahead of those who try to police scientific misconduct. For now, the only information about scientific fraud comes from the field, from the scientists who are willing to take the risk and become informants. Unfortunately, those who are supposed to act on this information, i.e., colleagues, administrations, the ORI, all act very slowly, if at all.

  • niewiap says:

    @ S.Rivlin #61
    Funny you compare scientific misconduct to criminal activity. I have very recently posted about scientific misconduct as a crime on my blog, and so did writedit in relation to the recently discovered case of misconduct by ex-MIT professor Luk van Parijs. It is absolutely true that a lot of misconduct is really hard to discover and can only be revealed when other labs fail to confirm the results. Of course, these labs are at an obvious disadvantage, especially if the original bs was published in a C/N/S journal, and so it often takes years and hundreds of thousands of dollars to reject the falsified results.

  • S. Rivlin says:

    niewiap,
    Read your post and agree with you completely. I remember Baltimore's case very well and believe that he should bear some responsibility for that fiasco.

  • neurlover says:

    "[quote]It's a method, not a subject matter.[/quote]
    Actually, I am convinced it's neither. It is an attitude."
    Nice paragraph on this from Olivia Judson's "Wild Side" blog/column at the New York Times:
    "Science is . . . an attitude, a stance towards measuring, evaluating and describing the world that is based on skepticism, investigation and evidence. The hallmark is curiosity; the aim, to see the world as it is. "
    http://judson.blogs.nytimes.com/
    Her guest columnist right now is Stephen Quake, writing about the "special" grants that freed him from the grant grind and allowed him to think out of the box.

  • DSKS says:

    Re #54, Andre "I didn't (and don't) mean this in the strictest possible philosophy-of-science sense, but if I read a paper that talks about supporting a hypothesis with certain kinds of data I don't immediately think "that person doesn't understand science 101." (although, admittedly, I tend to favour the softer statement "data X is consistent with model Y.")"
    On the issue of surrendering to pragmatism inre communication in some instances, I agree with you.
    My own motive for tending to champion the "OMG teh scientific method!" POV is only in small part due to the occurrence of bad practice in professional research (which I like to hope represents a small portion of the research conducted day to day). My main concern is with the current vulnerability of the public to quackery posing as science. Quackery, pseudoscience - call it what you will - thrives on pseudo-profundity and touchy-feely definitions of material science ("attitude" would be a term that v. much appeals to the pseudoscientist, who desires to substitute objectivity with subjectivity). A key strategy of the current incarnation of anti-materialism is to undermine the definition of science and its associated terminology, render it vague and obfuscatory, and essentially turn it back into the a priori inductive enterprise it once was during the Dark Ages.
    I suspect that individuals who resist the vaguest suggestion of dogma inre Teh Method nevertheless express significant antipathy towards, say, proponents of Intelligent Design who like to play fast and loose with scientific definitions such as "theory" (which is butchered enough as it is by professional scientists who persistently confuse the term with "hypothesis"). Is it pedantry to object to science falling for such public and easily exploited double standards? Perhaps, but I'd argue that the current conflicts in school boards across the nation, the considerable profits of the "Alternative Medicine" industry, and the fact that a sizable section of the population still believes in astrology justify the need for some tangible standards.
    "Tangible", but not necessarily set in stone. The idea that appealing to a set of basic rules is tantamount to declaring cosmology pseudoscience is taking a stick to a straw man, IMHO. As it is, every one of the following,
    "1) climate science
    2) evolutionary biology
    3) cosmology
    4) geology"

    is amenable to, and whenever feasible adheres to, the principles of the hypothetico-deductive model. Yes, controls are sometimes unnecessary or impractical, and I think AnAnonymousReader's Rulez were perhaps overly biased towards the life sciences in that respect (although, as Physioprof explained, the manner in which they are implemented in the life sciences is not necessarily straightforward either). But in all of these areas of science, the limitations are (or should be) understood by the investigators and taken into account when deriving their conclusions from their observations.
    Regarding the philosophy underlying appropriate scientific method, and the legitimacy of an emphasis on falsifiability, it's worth following the emerging furor over superstring "theory".

  • an anonymous reader says:

    Nicely expounded in #67, DSKS.
    Tooting the glories of 'good scientific attitude' is all well and good, but it isn't very practical.
    The AAR Rulez are designed to be useful to beginning scientists, nonscientists, practicing scientists who need a little reminder now and then, and reviewers & editors confronted with a slickly-written piece of confusing biological propaganda. I recognize that it may not always be easy to follow them. Tough. No one ever promised that science would be easy. The rigid difficulty of it is what leads to such gloriously enduring insights.
