Responsible Conduct of Research: Behaviorism to the Rescue (at last)

The Department of Health and Human Services (within which sits the NIH, if you didn't know) Office of Research Integrity blog has an entry of considerable interest to YHN. In musing on the integration of Responsible Conduct of Research (RCR) education with bioethics education, the author arrives at the crux:

...researchers may be faced with everyday ethical decisions that can create personal inner conflict. For instance, as competition for resources increases, a researcher might feel he or she has to make a decision that weighs the good that comes from objectivity in research against breaking a rule, for instance exaggerating preliminary data in a grant application, for the good of possibly sustaining his or her research program and staff (but also at the risk of losing everything).


The entry then ends with a query, and I encourage you to go over there and comment if you have a response.

What educational programs, what research environment can be fostered, at our institutions to help researchers with such decisions, or better yet, to help them never have to make such decisions? Perhaps there is something that could be gained through greater integration of RCR education and bioethics education? What are your thoughts?

My thoughts have been expressed before, generally as commentary over at Janet's place. But to recap, I believe in some simple truths:
-Scientists do not get into this business because they want to lie, cheat and steal.
-They are perfectly aware of RCR and ResearchEthics 101 material.
-The typical instructional examples, not to mention the average ORI findings of misconduct cases, are laughably obvious.
-They start down a slippery slope of dubious scientific behavior because of the contingencies that are put in place within the business of science.
-If we want to minimize research misconduct, changing those systematic contingencies is going to have far more impact than repeated, duplicative and trivial (and occasionally ridiculous) education of individual scientists.
Soo...where could we get the most bang for the buck? Well, writedit points to the dismantling of soft-money employment (where a person of professorial rank at an institution depends entirely on the grants he or she obtains for salary) as one target. I agree that this would help. It would put entire research institutes and quite a number of University Medical School departments out of business, true... but it would help.
Doing away with GlamourMag science. The obsession with first-over-all-else and all of the related baggage (i.e., fear of scoopage) is a major driver of incremental, pedestrian misconduct. This is relatively easy to do. As a start, take every place where promotion and tenure committees explicitly look at GlamourMag metrics and get rid of them. I suggest field-specific (and time-amortized) actual citation measures as a quick replacement if so-called 'objective' checkoffs are required (a rough sketch of one such measure follows below). Next, we can start prioritizing regular output, first authorships for all trainees and a couple of other things that are endemic to the better real labs and antithetical to GlamourLabs.
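To be concrete about what I mean by a time-amortized, field-specific citation measure, here is a minimal sketch. The subfield baselines, the Paper fields and the scaling are placeholders I'm inventing purely for illustration; this is not a validated bibliometric, just the shape of a checkoff number that never asks where the paper appeared.

```python
# Rough sketch of a field-specific, time-amortized citation measure.
# All numbers and field baselines below are made-up placeholders for
# illustration only -- not a validated bibliometric.

from dataclasses import dataclass

@dataclass
class Paper:
    citations: int      # citations accrued to date
    years_old: float    # years since publication
    field: str          # subfield label used for normalization

# Hypothetical per-year citation baselines by subfield (placeholder values).
FIELD_BASELINE = {
    "behavioral pharmacology": 2.5,
    "molecular neuroscience": 6.0,
}

def amortized_score(paper: Paper, min_years: float = 1.0) -> float:
    """Citations per year, scaled by the subfield's typical citation rate."""
    years = max(paper.years_old, min_years)          # avoid dividing by ~0 for new papers
    per_year = paper.citations / years               # time-amortized citation rate
    baseline = FIELD_BASELINE.get(paper.field, 1.0)  # field-specific expectation
    return per_year / baseline                       # >1.0 means above the field's norm

if __name__ == "__main__":
    papers = [
        Paper(citations=40, years_old=8, field="behavioral pharmacology"),
        Paper(citations=60, years_old=3, field="molecular neuroscience"),
    ]
    # A committee's "checkoff" number could simply be the mean of these scores,
    # with no reference whatsoever to where the papers appeared.
    scores = [amortized_score(p) for p in papers]
    print([round(s, 2) for s in scores], round(sum(scores) / len(scores), 2))
```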
Crediting failure. As SciCurious recently lamented, negative data are essentially unpublishable. PhysioProf can frequently be found encouraging people with his observation that the process of science is dominated by failure and screwups as an essential feature of progress toward successes. The trouble is that failure can result from incompetence and laziness, so we tend to take the short cut of only paying attention to (and crediting) successes. Don't get me wrong. Nobody wants to read your bloody lab book in all its excruciating detail of failed experiments. But we need to move back toward a point where it is acceptable to put more of the screwups into the system. Perhaps by getting rid of this antiquated space-cramping and allowing longer methods sections. Perhaps by making methods articles more common and respected. This also has the benefit of helping other scientists avoid your mistakes or profit from effort already expended, I'll note.
Accepting that pretty data are not better data. I have had situations in which a paper reviewer was unable to grasp that eyeballing the error bars is far inferior to the appropriate inferential statistics. Furthermore, it is, IMNSHO, actually unethical when it comes to animal studies to run additional subjects solely to make the error bars smaller (or, for that matter, the p-value smaller) so that the graph will look "better". I also have this argument occasionally with people whose stock in trade is the image-based data point of blots and gels and whatnot. There is most certainly pressure (to the point of making a manuscript publishable or not) for the figures to look sufficiently "pretty", regardless of the legitimate scientific interpretive value. Even a layperson can occasionally see this at work by viewing the comparatively crappy-looking "corrected" panels replacing a supposedly "inadvertent placeholder figure" that somehow made it through three rounds of review and revision at a very high-profile GlamourMag...
Grad student (and post-doc) whistle-blower protection. You bust the PI for misconduct or data fakery and you are out of the business. That's the current perception and the current reality. In the infamous Wisconsin case, it was noted that the whistle-blowing trainees were left essentially career-less, despite the occasional refutation of any bad impression of their own work. I haven't seen any followup, but the initial report really made it sound as if these trainees were screwed, future-wise. What are current trainees to think about this case if they themselves are faced with a whistleblowing situation? Keep your mouth shut and get the hell out. Not a good message if you want to change the systematic contingencies, is it?
I return to my essential point. Scientists know what making up data is and that they should not do it. Education on the most obvious cases is not the answer. Scientists slide into data fakery and the like because of the contingencies of their careers.
Obviously, this is not to take all blame away from the individual. After all, there are plenty of scientists laboring under the same contingencies who do not fake data and do not tolerate* those who do. But if we are talking about science in general, or about our scientific subfields, personal culpability is not the point. We want to create a structure for our industry that leads to more confidence in the output. Witch-hunting individuals is just not going to have as much impact as changing the contingencies.
This is behaviorism: the OldeSkoole approach in experimental psychology that was essentially uninterested in the fuzzy psychodynamic concepts of behavioral motivators that were popular at the turn of the last century. Behaviorism focused on the knowable environmental contingencies affecting an individual that either increase or decrease the probability or rate of future behaviors. Hypothesize a contingency of interest, systematically change it, and observe the effect on behavior. If the contingency doesn't seem to change the behavior of interest, move on to something else. Simple.
Well, most every academic trainee gets exposed to at least one session of formal RCR training, and I've yet to see the analysis suggesting that all confirmed fakers were somehow the exceptions. Let's move on to something else, eh?
__
*yeaaah. ever been around a large group in which someone is whispered to be a known faker because they've never had an experiment go wrong and they always come up with support for the hypothesis the PI threw out randomly two months ago at lab meeting?
[h/t: writedit]

3 responses so far

  • hibob says:

    After this study came out:
    http://jama.ama-assn.org/cgi/content/short/302/9/977?home

    Of the 323 included trials, 147 (45.5%) were adequately registered (ie, registered before the end of the trial, with the primary outcome clearly specified). Trial registration was lacking for 89 published reports (27.6%), 45 trials (13.9%) were registered after the completion of the study, 39 (12%) were registered with no or an unclear description of the primary outcome, and 3 (0.9%) were registered after the completion of the study and had an unclear description of the primary outcome. Among articles with trials adequately registered, 31% (46 of 147) showed some evidence of discrepancies between the outcomes registered and the outcomes published. The influence of these discrepancies could be assessed in only half of them and in these statistically significant results were favored in 82.6% (19 of 23).

    I'd like to see some teeth added to the rule that trials need to be registered before they are completed. How about:
    -primary goals must be decided on and put in the registration before any data are unblinded. Those primary goals have to be discussed in the eventual publication; if any outcomes are added after data start coming in, the paper needs to discuss how that timing affects the statistical power of those outcomes as well. If they can't do that, principal investigators need to be suspended from doing clinical trials.
    -IRBs need to check that the trials they approve are properly registered, and that the eventual publications meet the requirements outlined by the registrations (a rough sketch of such a check is below). If their IRBs can't do that, institutions need to be suspended from conducting clinical trials.
    -Journals need to retract articles that aren't properly registered. Not sure how to force Elsevier to do anything other than through public shaming, so that will have to do.
    These are people getting exposed to experimental drugs and medical techniques for the sake of science, folks. If science isn't actually getting done, the consequences need to be severe.
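    To be clear about what "check" means here, something as dumb as the following would catch most of the categories in that JAMA quote. The field names and rules are my own paraphrase for illustration, not any registry's actual schema:

    ```python
    # Rough sketch of the kind of automated registration check an IRB or journal
    # could run. Field names and adequacy rules paraphrase the categories quoted
    # above; they are illustrative placeholders, not a real registry schema.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class TrialRecord:
        registered_on: Optional[date]        # None if the trial was never registered
        completed_on: date                   # trial completion date
        primary_outcome_specified: bool      # registry entry clearly names the primary outcome
        published_outcomes_match: bool       # publication's primary outcomes match the registration

    def registration_status(trial: TrialRecord) -> str:
        """Classify a trial roughly along the lines of the categories quoted above."""
        if trial.registered_on is None:
            return "not registered"
        if trial.registered_on > trial.completed_on:
            return "registered after completion"
        if not trial.primary_outcome_specified:
            return "primary outcome unclear"
        if not trial.published_outcomes_match:
            return "adequately registered, but outcomes discrepant"
        return "adequately registered"

    if __name__ == "__main__":
        example = TrialRecord(
            registered_on=date(2007, 3, 1),
            completed_on=date(2006, 11, 30),
            primary_outcome_specified=True,
            published_outcomes_match=True,
        )
        print(registration_status(example))   # -> "registered after completion"
    ```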

  • Your main point is obviously correct, but your suggestion that changing the culture of allocation of credit would be "relatively easy to do" by just "get[ting] rid of" aspects of the current culture that encourage misbehavior is fucking absurd. In fact, your suggestion violates the whole point of your post, which is that telling people what to do and what not to do has absolutely no effect on their behavior.
    The only thing that is going to "get rid of" the aspects of the current culture that encourage misbehavior is establishment of a new set of incentives that punish this culture and reward adoption of a new one.

  • DrugMonkey says:

    which is that telling people what to do and what not to do has absolutely no effect on their behavior.
    Well I wouldn't be bloggin' if I adhered to an extreme formulation of this for everything, would I?
    The ethics is a no-brainer. Everybody is exposed to some minimal level of the "don't fake data" message. I would argue a sufficient level that more of the same has no effect.
    When it comes to taking the mickey out of GlamourMag and GlamourSci behavior, well, that's an area where my particular viewpoint is FAR from common... I would argue just saying it has a greater chance of making a discernible impact.
