The Department of Health and Human Services (within which sits the NIH, if you didn't know) Office of Research Integrity blog has an entry of considerable interest to YHN. In musing on the integration of Responsible Conduct of Research (RCR) education with Bioethics Education, the author arrives at the crux:
...researchers may be faced with everyday ethical decisions that can create personal inner conflict. For instance, as competition for resources increases, a researcher might feel he or she has to make a decision that weighs the good that comes from objectivity in research against breaking a rule, for instance exaggerating preliminary data in a grant application, for the good of possibly sustaining his or her research program and staff (but also at the risk of losing everything).
The entry then ends with a query and I encourage you to go over there and comment if you have a response.
What educational programs, what research environment can be fostered, at our institutions to help researchers with such decisions, or better yet, to help them never have to make such decisions? Perhaps there is something that could be gained through greater integration of RCR education and bioethics education? What are your thoughts?
My thoughts have been expressed before, generally as commentary over at Janet's place. But to recap, I believe some simple truths.
-Scientists do not get into this business because they want to lie, cheat and steal.
-They are perfectly aware of RCR and ResearchEthics 101 material.
-The typical instructional examples, not to mention the average ORI findings of misconduct cases, are laughably obvious.
-They start down a slippery slope of dubious scientific behavior because of the contingencies that are put in place within the business of science.
-If we want to minimize research misconduct, changing those systematic contingencies is going to have far more impact than is repeated, duplicative and trivial (and occasionally ridiculous) education of individual scientists.
Soo...where could we get the most bang for the buck? Well writedit refers to the dismantling of soft-money employment (where a person of professorial rank at an institution depends for his/her salary entirely on the grants that are obtained) as one target. I agree that this would help. It would put entire research institutes and quite a number of University Medical School departments out of business, true... but it would help.
Doing away with GlamourMag science. The obsession with first-over-all-else and all of the related baggage (i.e. fear of scoopage) is a major driver of incremental pedestrian misconduct. This is relatively easy to do. As a start, take every place where promotions and tenure committees explicitly look at GlamourMag metrics and get rid of them. I suggest field-specific (and time-amortized) actual citation measures as a quick replacement if so-called 'objective' checkoffs are required. Next, we can start prioritizing regular output, first authorships for all trainees and a couple of other things that are endemic to the better real labs and antithetical to GlamourLabs.
Crediting failure. As SciCurious recently lamented, negative data are essentially unpublishable. PhysioProf can frequently be found encouraging people with his observation that the process of science is dominated by failure and screwups as an essential feature of progress toward success. The trouble is that failure can stem from incompetence and laziness, so we tend to take the shortcut of paying attention only to (and crediting only) successes. Don't get me wrong. Nobody wants to read your bloody lab book in all its excruciating detail of failed experiments. But we need to move back toward a point where it is acceptable to put more of the screwups into the system. Perhaps by getting rid of antiquated space-cramping page limits and allowing longer methods sections. Perhaps by making methods articles more common and respected. This also has the benefit of helping other scientists avoid your mistakes or profit from effort already expended, I'll note.
Accepting that pretty data are not better data. I have had situations in which a paper reviewer was unable to grasp that eyeballing the error bars is far inferior to the appropriate inferential statistics. Furthermore, it is, IMNSHO, actually unethical when it comes to animal studies to run additional subjects solely to make the error bars (or, for that matter, the p-value) smaller so that the graph will look "better". I also have this argument occasionally with people whose stock in trade is the image-based data point of blots and gels and whatnot. There is most certainly pressure (to the point of making a manuscript publishable or not) for the figures to look sufficiently "pretty", regardless of the legitimate scientific interpretive value. Even a layperson can occasionally see this at work by viewing the comparatively crappy-looking "corrected" panels replacing a supposedly "inadvertent placeholder figure" that somehow made it through three rounds of review and revision at a very high profile GlamourMag...
Grad student (and post-doc) whistle-blower protection. You bust the PI for misconduct or data fakery and you are out of the business. That's the current perception and the current reality. In the infamous Wisconsin case it was noted that the whistle-blowing trainees were left essentially career-less, despite the occasional refutation of any bad impression of their own work. I haven't seen any followup but the initial report really made it sound as if these trainees were screwed, futurewise. What are current trainees to think about this case if they themselves are faced with a whistleblowing situation? Keep your mouth shut and get the hell out. Not a good message if you want to change the systematic contingencies, is it?
I return to my essential point. Scientists know what making up data is and that they should not do it. Education on the most obvious cases is not the answer. Scientists slide into data fakery and the like because of the contingencies of their careers.
Obviously, this is not to take all blame away from the individual. After all, there are plenty of scientists laboring under the same contingencies who do not fake data and do not tolerate* those who do. But if we are talking about science in general or our scientific subfields....personal culpability is not the point. We want to create a structure of our industry that leads to more confidence in the output. Witch-hunting individuals is just not going to have as much impact as changing the contingencies.
This is behaviorism: the OldeSkoole approach in experimental psychology that was essentially uninterested in the fuzzy psychodynamic concepts of behavioral motivation popular at the turn of the last century. Behaviorism focused on the knowable environmental contingencies affecting an individual that either increase or decrease the probability or rate of future behaviors. Hypothesize a contingency of interest, systematically change it and observe the effect on behavior. If the contingency doesn't seem to change the behavior of interest, move on to something else. Simple.
Well, most every academic trainee gets exposed to at least one session of formal RCR training and I've yet to see the analysis that suggests that all confirmed fakers were somehow the exceptions. Let's move on to something else, eh?
*yeaaah. ever been around a large group in which someone is whispered to be a known faker because they've never had an experiment go wrong and they always come up with support for the hypothesis the PI threw out randomly two months ago at lab meeting?