Search Results for "iacuc"

Mar 25 2010

IACUC 101: Satisfying the erroneous inference by eyeball technique

I stumbled back onto something I've been meaning to get to. It touches on the ethical use of animals in research, the oversight process for animal research and the way we think about scientific inference.

 

Now, as has been discussed here and there in the animal use discussions, one of the central tenets of the review process is that scientists attempt to reduce the number of animals wherever possible. Meaning without compromising the scientific outcome, the minimum number of subjects required should be used. No more.

[Image: physioprofitinErrBars-1.jpg]

run more subjects...

We accept as more or less bedrock that a result is real if it meets the appropriate statistical test at the standard of p < 0.05. Meaning that if you drew samples like yours from the same underlying population 100 times, fewer than five of those draws would produce a result like yours by chance alone. From which you conclude it is likely that the populations are in fact different.
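To make that 5% concrete, here is a minimal sketch in Python (standard library only; the helper name and the normal z-approximation in place of a proper t-test are my choices, not anything from the post): draw two groups from the very same population over and over, and count how often the test comes up "significant" anyway.

```python
import math
import random

random.seed(1)

def two_sample_p(x, y):
    """Two-sided p-value for a difference in means (normal z-approximation)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= |z|) under the null

# Both groups come from the SAME population, so every "significant"
# result below is a false positive by construction.
trials = 2000
false_positives = sum(
    two_sample_p([random.gauss(0, 1) for _ in range(50)],
                 [random.gauss(0, 1) for _ in range(50)]) < 0.05
    for _ in range(trials)
)
print(false_positives / trials)  # hovers near 0.05
```

Run it with different seeds and the false-positive rate wobbles around 0.05, which is exactly what the threshold promises: no more, no less.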

 

There is an unfortunate tendency in science, however, to believe that if your statistical test returns p < 0.01, this result is better. Somehow more significant, more reliable or more... real. On the part of the experimenter, on the part of his supervising lab head, on the part of paper reviewers and on the part of readers. Particularly the journal club variety.

False.
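One way to see why: hold the true effect and the design fixed and simply replicate the experiment. The sketch below (standard-library Python; the half-standard-deviation effect, the group size of 30, and the z-approximation are illustrative assumptions of mine) shows how wildly the p-value bounces around between identical experiments.

```python
import math
import random

random.seed(2)

def two_sample_p(x, y):
    """Two-sided p-value for a difference in means (normal z-approximation)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(z) / math.sqrt(2))

# Same TRUE effect every time: group means differ by half a standard
# deviation, n = 30 per group. Only the sampling noise changes.
pvals = sorted(
    two_sample_p([random.gauss(0.5, 1) for _ in range(30)],
                 [random.gauss(0.0, 1) for _ in range(30)])
    for _ in range(1000)
)

# Identical experiments span everything from p < 0.001 up past p = 0.5:
print(pvals[50], pvals[500], pvals[950])  # 5th / 50th / 95th percentiles
```

The p-value from one run tells you very little about what an exact replication will return; landing on p < 0.01 once does not make a result more "real" than landing on p = 0.04.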

Continue Reading »

48 responses so far

Apr 21 2009

Virtual IACUC: Reduction vs. Refinement

Published by under Animals in Research

A comment on a previous post alleged that the scientific enterprise has not taken the 3Rs (Reduction, Replacement and Refinement) seriously, leading to a failure to reduce the number of animals used in research. Subsequent comments from Paul Browne and Luigi provided links to actual data which refute this claim; however, it remains an interesting question to explore.
One of the thornier problems in thinking about the justification of using animals is when two or more laudable goals call for opposing solutions. For today's edition of virtual IACUC we will consider what to do when Refinement calls for the use of more animals, in obvious conflict with Reduction.

Continue Reading »

11 responses so far

Aug 10 2008

Animals in Research: IACUC Oversight

In the last post I introduced the concept that the use of animals for research purposes in the US is a highly regulated activity involving both Federal law (the Animal Welfare Act; AWA) and Federal regulation (via the Animal and Plant Health Inspection Service of the US Department of Agriculture; APHIS/USDA). I also introduced the local research institution's responsibility to convene an Institutional Animal Care and Use Committee (IACUC), and the AWA-mandated inclusion of a veterinarian and a member of the public. This post will focus on some of the duties of the IACUC.
[Update 8/11/08: Isis the Scientist on the ongoing process of animal-use oversight]

Continue Reading »

11 responses so far

Sep 12 2016

Does it matter how the data are collected?

Commenter jmz4 made a fascinating comment on a prior post:


It is not the journals responsibility to mete out retractions as a form of punishment(&). Only someone that buys into papers as career accolades would accept that. The journal is there to disseminate accurate scientific information. If the journal has evidence that, despite the complaint, this information is accurate,(%) then it *absolutely* should take that into account when deciding to keep a paper out there.

(&) Otherwise we would retract papers from leches and embezzlers. We don't.

That prior post was focused on data fraud, but this set of comments suggests something a little broader.

I.e., that facts are facts and it doesn't matter how we have obtained them.

This, of course, brings up the little nagging matter of the treatment of research subjects. As you are mostly aware, Dear Readers, the conduct of biomedical experimentation that involves human or nonhuman animal subjects requires an approval process. Boards of people external to the immediate interests of the laboratory in question must review research protocols in advance and approve the use of human (Institutional Review Board; IRB) or nonhuman animal (Institutional Animal Care and Use Committee; IACUC) subjects.

The vast majority (ok, all) journals of my acquaintance require authors to assert that they have indeed conducted their research under approvals provided by IRB or IACUC as appropriate.

So what happens when and if it is determined that experiments have been conducted outside of IRB or IACUC approval?

The position expressed by jmz4 is that it shouldn't matter. The facts are as they are, the data have been collected so too bad, nothing to be done here. We may tut-tut quietly but the papers should not be retracted.

I say this is outrageous and nonsense. Of course we should apply punitive sanctions, including retracting the paper in question, if anyone is caught trying to publish research that was not collected under proper ethical approvals and procedures.

In making this decision, the evidence for whether the conclusions are likely to be correct or incorrect plays no role. The journal should retract the paper to remove the rewards and motivations for operating outside of the rules. Absolutely. Publishers are an integral part of the integrity of science.

The idea that journals are just there to report the facts as they become known is dangerous and wrong.

__
Additional Reading: The whole board of Sweden's top-ranked university was just sacked because of the Macchiarini scandal

13 responses so far

Jan 04 2016

Dissection of sleazy, dishonest AR shillery posing as journalism

Notice what I did there? Setting a bias right from the start with click-bait headlining?

Well, that is just how a buzzfeed piece entitled "The Silent Monkey Victims Of The War On Terror" starts.

"Victims".

I called this piece out for being sleazy and dishonest in a tweet and the author, one Peter Aldhous, Buzzfeed News Reporter, took exception. He emailed me asking how I could possibly accuse him of being a shill for the AR agenda, asserting he has no allegiance whatsoever to animal rights and complaining about how someone as allegedly influential as me could damage his professional reputation.

So I felt I owed him an explanation.

First, I make no apology for my distaste for AR adherents. They are terrorists, yes, terrorists, and they inhabit a nihilist, anti-social ideology. Of terrorism.

Second, I've written a few posts about the use of animals in research (see below for Additional Reading). There is a pretty good dose of information at Speaking of Research as well. I mention this not so much to draw specifics as to show that there is information available on the web, readily searchable, for a journalist to quickly find for an alternative viewpoint to the AR nonsense. That is, if they are interested in researching a story. I'll also point out that the Science Editor at Buzzfeed is someone who spent years pounding the floors at the annual meeting of the Society for Neuroscience and has even written on the use of nonhuman primates in autism models. Again, the point is that this journalist has a route to further education and balance, if he had only chosen to use it. The piece does not reflect any such background, in my reading of it.

What I want to dissect today, however, is the way this piece by Aldhous is carefully crafted to attack nonhuman primate research, as opposed to providing a reasonable discussion of the use of animals in specific research.

The article starts with "victims" and has chosen to describe these as resulting from "The War on Terror". Right away, we see a sleazy link between something that many Americans oppose (the Bush agenda itself, and the description of that agenda as a war on terror) and the use of animals in research. It is a typical tactic of the AR position. If you can establish that one area of research is unneeded in the eyes of your audience, then you are three quarters of the way home.

And this is AR thinking, make no bones about it. Why? Follow the logic. There are sizable swaths of Americans who disagree that we should spend public money investigating any number of health conditions. From infectious diseases like HIV (although see this) to obesity to diabetes to depression to substance abuse. Simply because they do not agree that these are topics worthy of investigation. Anthrax, botulism and nerve gas are no different in this respect. Some people feel that the war on terror is overblown, that the risks of a bio or chemo weapon attack are small and that we should not put any public money into this topic whatsoever, from research to law enforcement.

So if you argue that your particular agenda should rule the day when it comes to research, you are conceding that everyone else's agenda in a pluralistic democratic society should rule the day too. That leaves us with very little science conducted and certainly no animal science. This is why I call this a bit of AR shillery. The logic leads to no animal research on any health topic.

Note, it is fine to hold that belief in a pluralistic democratic society, but let us be honest about what you are about, eh? And sure, I can see that there could be some agenda so narrowly focused, so out of the mainstream, that we cannot reasonably credit it as being a legitimate concern of the American people. It should be self-evident from the support for the Bush administration's war on terror (and our public discussion over bioterror) that this is not the case here.

Ok. But what about the converse? Is just any use of animals in research okay then? No, no it is not. Certainly, we have a cascade of federal law, federal regulation and widely adopted guidelines of behavior. We have rules against unnecessary duplication. We wrangle, sometimes at long length, over reduction and refinement of the research that uses animals. Even an apparent exemption from the full weight of the Animal Welfare Act for certain experimental species doesn't really exempt them from oversight.

Getting back to the article, it next pursues two themes including the idea that there are a "lot" of monkeys being used and that they are all "suffering" and in pain. The article includes this pull quote:

“Wow, that’s a lot of monkeys,” said Joanne Zurlo of the Johns Hopkins Bloomberg School of Public Health, who studies alternatives to animal experimentation. “It’s quite disturbing.”

It is? How do we know this? How are we to evaluate this with any sort of context? How is it "disturbing" unless we have already decided we are against the use of monkeys in this, or for that matter any, research?

The piece brags about some exclusive review Buzzfeed has conducted to examine the publicly-available documents showing about 800 nonhuman primates used in "Column E" (the most painful/distressful category) US research in the 1999-2006 interval and a jump to about 1400 in the 2009-2014 interval.

Speaking of Research maintains an animal-use statistics page. The US page shows that of the species not exempted from tracking by the Helms Amendment, non-human primates account for 7% of the total (all categories of research use) in 2014. This is 57,735 individuals; note that, given that non-human primates can be used for years if not decades in some kinds of research, this does not equate to a per-year number the way it would for a species that only lives two years, like a rat. But at any rate, the 600 extra [ETA: "E" category monkeys] that the Buzzfeed piece seems to be charging to the war on terror amount to only about a 1% increase against the total annual use of non-human primates.
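The arithmetic behind that "1%" is simple enough to check. The sketch below assumes (my reading; the denominator is not stated explicitly) that the ~600 extra "Column E" monkeys are being compared against the 57,735 total annual figure from Speaking of Research:

```python
# Numbers from the text above; the choice of denominator is my assumption.
extra_column_e = 1400 - 800        # jump between the two reported intervals
total_nhp_use_2014 = 57_735        # Speaking of Research, all use categories

increase = extra_column_e / total_nhp_use_2014
print(f"{increase:.1%}")  # → 1.0%
```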

This is "disturbing"? Again, I think this alone shows how disingenuous the piece really was. A "one percent increase in the use of monkeys for bioweapon research" doesn't really have the same punch, does it?

What about other frames of reference? From the Speaking of Research page:

Scientists in the US use approximately 12-25 million animals in research, of which only less than 1 million are not rats, mice, birds or fish. We use fewer animals in research than the number of ducks eaten per year in this country. We consume over 1800 times the number of pigs than the number used in research. We eat over 340 chickens for each animal used in a research facility, and almost 9,000 chickens for every animal used in research covered by the Animal Welfare Act. For every animal used in research, it is estimated that 14 more are killed on our roads.

Or what about the fact that Malaysia culled 97,119 macaque monkeys (long-tailed, i.e. M. fascicularis and pig-tailed, i.e. M. nemestrina; common research lab species) in 2013. Culled. That means killed, by rough means (by the reporting) without any humane control of pain or suffering. No use for them, no scientific advances, no increase in knowledge...probably not even used for food. Just.....killed. 167 times the number scored as used in bioweapons research were just eliminated in a single year in a single country.

Failing to provide these contexts, and writing a piece that is majorly focused on the number of research monkeys used for bioweapons studies is dishonest, in my view.

Okay, so what about the pain and suffering part of the piece? Well, Aldhous writes:

BuzzFeed News has calculated the number of primates used each year for what the USDA calls “Column E” experiments, in which animals experience pain or distress that is not fully alleviated with painkillers, tranquilizers, or other drugs. Because monkeys are emotionally complex creatures that are thought to experience suffering similarly to how we do, such experiments are especially controversial.

The number of primates used in these ethically fraught experiments

Notice the slant? First of all, human introspection about the "pain and suffering" of nonhumans is suspect, to say the least. Yes, including monkeys, dolphins or whathaveyou. The statement about monkeys being "emotionally complex creatures" is pure AR theology. The idea that nonhuman suffering is identical to human suffering is entirely unproven and there are large numbers of people who disagree with this characterization (see the Malaysian culling, above, for an example). If you try to get people to define terms and provide evidence you devolve into really bad eye-of-the-beholder anecdata on the one hand up against a profound lack of evidence on the other. Humans are demonstrably different from all other species we know to date. And efforts to view nonhumans as "like us" invariably involve some very convenient definitions, goal post moving, blindness to the quality or universality or ease of the human trait, etc.

Calling it "especially controversial" and "ethically fraught" is hardly even-handed journalism. Where is the balance here? The people who shout loudest about the use of monkeys being "controversial" don't believe in any animal research. Seriously, probe them. What use of animals isn't "ethically fraught"? Hammering this idea over and over throughout the piece is poisoning the well. It acts as if this were established fact that everyone agrees with. Not so. And the slant of these terms is certainly on the side of "this research is bad". You use other terms when you want to describe a neutral disagreement between sides.

One very important point is the lie of the truncated distribution. We know perfectly well that there is a big part of the American distribution that is essentially unconcerned about animal use and animal suffering. If you know anyone who uses sticky traps to deal with unwanted household rodents...they are doing Category E research. Catch and release fishing? Ditto. People who own large dogs in city apartments and walk them just twice a day....well it isn't Category E but it sure doesn't sound humane to me. The point is that research and researchers do not operate in this part of the distribution. They operate in the well-regulated part of the distribution that is explicitly concerned with the welfare of animal subjects in research. Notice all the pull quotes he included from researchers seem to express caution? Obviously I can't know how selected and cherry picked those comments were (I suspect very) but they do testify to the type of caution expressed by most, if not all, animal researchers. We are always looking to reduce and refine. And look, individual scientists may view different research priorities differently...but it is hardly fair to only present the skeptics. Where are the full throated defenders of the bioweapons research in this article? Well, they wouldn't talk on record* due in very large part, I assert, to a well-informed skepticism that journalists ever care to be balanced on these topics.

The Aldhous piece goes on to a very sleazy sleight of hand by mentioning a violation report in which an animal research facility was cited for failing to follow care protocols. He picks out three institutions:

three institutes have dominated the most ethically contentious primate experiments: the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) at Fort Detrick, Maryland, the Lovelace Respiratory Research Institute in Albuquerque, New Mexico, and the Battelle Memorial Institute in Columbus, Ohio.

Since 2002, these three institutions have collectively used more than 6,400 Column E primates. In 2014, they accounted for almost two-thirds of the monkeys used in these experiments.

Again with the "most ethically contentious" charge. Nice. But the point of this is...what? Many bioweapons pathogens can only be studied at very high cost isolation facilities. It is good and right that there are not many of them and that they account for the majority of the animal use. It is also good and right that they are subject to regulatory oversight in case any slip ups need to be corrected, yes?

After a routine inspection in March, Lovelace was cited for failing to provide monkeys with the care that was supposed to be delivered — including intravenous fluids, Tylenol for fever, and antidiarrheal drugs.

The report shows that three animals did not receive Tylenol when they should have, and three did not receive anti-diarrheals when they should have, for 2-4 days of symptoms. There were 57 animals in a cohort that did not receive injections of fluids, but there is no indication that this resulted in any additional pain or distress, and we can't even tell from this brief protocol language whether this was supposed to be as-needed per veterinarian recommendation or not. There are two additional cohorts mentioned for which it is noted the animals were treated according to protocol, and the table in the Aldhous piece lists 431 animals used at Lovelace in 2014, probably the year to which the above citation refers. Naturally, Aldhous fails to mention these citation numbers, leaving the reader free to assume the worst. This is classic misdirection and smearing at work. Which is why I call it dishonest. "Loose stools or fever for 2 to 4 days in less than 1% of individuals" sounds more like an over-the-counter medication warning, or a threshold for when to finally call the doctor, to the average ear.

Aldhous next diverts into a fairly decent discussion of how animal models may or may not fully predict human outcome but I think that in the context, and with his shading, it falls short of the mark. I'm not going to step through all of his examples because there are certain fundamental truths about research.

1) If we knew the result in advance, the experiments wouldn't be necessary. So if we sometimes find out that animal models are limited, we only come to this conclusion in the doing. There is no way to short circuit this process.

2) We use animals, even monkeys, as proxies and models. Sometimes, they are going to come up short of full prediction for human health. This does not mean they are not valuable to use as models. Again, see 1. We only find this out in the doing and most research is novel territory.

3) The overlap between animal testing and research is fuzzy in this discussion. If you want to evaluate medications, your research may not be dedicated to, or idealized for, novel discovery about the disease process itself. This doesn't make it less valuable. Both have purposes.

4) It is dishonest to point to places where animal research failed to predict some adverse outcome of a medication in humans without discussing the many-X more potential medications that were screened out with animal models. Protection from harm is just as important as, and maybe more important than, identification of a helpful medication, is it not?

So as you can see, I think this piece in Buzzfeed is written from start to finish to advance the AR agenda. It is not by any means fair or balanced. This is relatively common with journalism but that is no excuse. It is sleazy. It is dishonest. There is every reason to expect that balanced information and opinion is readily available to a journalist, even one who has no scientific background whatever.

I do not know the heart and mind of the author and as I mentioned at the outset, he protested vehemently that my take was not his intent. Which is why I have tried to focus on the piece and what was included and written. I will suggest that if Aldhous is sincere, he will read what I have written here, follow the links and take a very hard editor's look at what he has written and the impact it has on the average reader.

__
*I don't know the solution to this problem. A piece like this one Aldhous wrote is the type of thing that hardens attitudes. Which makes it harder for the balanced story to get out. It's a vicious cycle and I have no idea how to break it until and unless science journalists stop with this sleazy and biased AR shillery on their own.

Additional Reading

Logothetis driven out of monkey research

UCLA scientists have been under attack for over a decade

Repost: Insightful Animal Behavior: A "Sufficiently Advanced Technology"

Dolphins ain't all that either

33 responses so far

Nov 10 2015

Ask DrugMonkey: JIT and Progress Reports

Two quick things:

Your NIH grant Progress Report goes to Program. Your PO. It does not go to any SRO or study section members, not even for your competing renewal application. It is for the consumption of the IC that funded your grant. It forms the non-competing application for your next interval of support that has already passed competitive review muster.

Second. The eRA commons automailbot sends out requests for your JIT (Just In Time; Other Support page, IRB/IACUC approvals) information within weeks of your grant receiving a score. The precise cutoff for this autobot request is unclear to me and it may vary by IC or by mechanism for all I know. The point is that it is incredibly generous. Meaning that when you look at your score and think "that is a no-way-it-will-ever-fund score" and still get the JIT autobot request, this doesn't mean you are wrong. It means the autobot was set to email you at a very generous threshold.

JIT information is also requested by the Grants Management Specialist when he/she is working on preparing your award, post-Council. DEFINITELY respond to this request.

The only advantage I see to the autobot request is that if you need to finalize anything with your IRB or IACUC this gives you time. By the time the GMS requests it, you are probably going to be delaying your award if you do not have IRB/IACUC approval in hand. If you submit your Other Support page with the autobot request, you are just going to have to update it anyway after Council.

15 responses so far

Oct 14 2014

The CrowdFund Science Crowd Mistakes "An Experiment" for "Doing Science"

I had a revelation that clarified some of my points of poor understanding of the science crowdfunding enthusiast position.

In skirmishing on Twitter with some guy associated with "Experiment.com" I ran across a project on brain inflammatory responses in a fetal alcohol model from the Lindquist lab. Something I can readily assess, being that it is a substance abuse, drug-toxicity investigation in rats.
Continue Reading »

33 responses so far

Oct 01 2014

Sometimes CSR just kills me

The Peer Review Notes for September 2014 contains a list of things you should never write when reviewing grants.

Some of them are what we might refer to as Stock Critique type of statements. Meaning that they don't just appear occasionally during review. They are seen constantly. A case in point:

7. “This R21 application does not have pilot data, which should be provided to ensure the success of the project.”

Which CSR answers with:

R21s are exploratory projects to collect pilot data. Preliminary data are not required, although they can be evaluated if provided.

What kind of namby-pamby response is this? They know that the problem with R21s is that reviewers insist they should have preliminary data or, at the least, only give good scores to the applications that have strong preliminary data. They bother to put this up in their monthly notes but do NOTHING that will have any effect. Here's my proposed response: "We have noticed reviewers simply cannot refrain from prioritizing preliminary data on R21s so we will be forbidding applicants from including it". Feel free to borrow that, Dr. Nakamura.

Another one:

“This is a fishing expedition.”

CSR:

It would be better if you said the research plan is exploratory in nature, which may be a great thing to do if there are compelling reasons to explore a specific area. Well-designed exploratory or discovery research can provide a wealth of knowledge.

This is another area of classic stock criticism of the type that may, depending on your viewpoint, interfere with getting the desired result. As indicated by the answer, CSR (and therefore NIH) disagrees with this anti-discovery criticism as a general position. Given how prevalent it is, again, I'd like to see something stronger here instead of an anemic little tut-tut.

One of these is really good and a key reminder.

“The human subject protection section does not spell out the specifics, but they already got the IRB approval, and therefore, it is ok.”

Response:


IRB approval is not required at this stage, and it should not be considered to replace evaluation of the protection plans.

And we can put IACUC in there too. Absolutely. There is a two tiered process here which should be independent. Grant reviewers take a whack at the proposed subject protections and then the local IACUC takes a whack at the protocol associated with any funded research activities. It should be a semi-independent process in which neither assumes that the approval from the other side of the review relieves it of responsibility.

Another one is a little odd and may need some discussion.

“This application is not in my area of expertise . . . “

I find that reviewers say this in discussion but have never seen it in a written critique (even during read phase before SRO edits eliminate such statements).

The response is not incorrect...

If you’re assigned an application you feel uncomfortable reviewing, you should tell your Scientific Review Officer as soon as possible before the meeting.

...but I think there is wiggle room here. Sometimes, reviewers are specifying that they are only addressing the application in a particular way. This is OKAY! In my experience it is rare that a given application has three reviewers who are stone cold experts in every single aspect of the proposal. The idea is that they can be primary experts in some part or another. And, interestingly given the recent statements we discussed from Dr. McKnight, it is also okay if someone is going at the application from a generalist perspective as well. So I think for the most part reviewers say this sort of thing as a preamble to boxing off their areas of expertise. Which is important for the other panel members who were not assigned to the application to understand.

4 responses so far

Jul 24 2013

On Internalizing the Ethical Standards of Scientific Research

Published by under Academics,Science Ethics

There is an entry up on the Scientific American Blog Network's Guest Blog by two of the principals of μBiome. In Crowdfunding and IRBs: The Case of uBiome Jessica Richman and Zachary Apte address prior criticism of their approach to the treatment of human subjects. In particular, the criticism over their failure to obtain approval from an Institutional Review Board (IRB) prior to enrolling subjects in their study.

In February, there were several posts about the ethics of this choice from a variety of bloggers. (See links from Boundary Layer Physiology (here, here, here) Comradde Physioprof (here, here, here), Drugmonkey (here), Janet Stemwedel (here), Peter Lipson (here).) We greatly appreciate the comments, suggestions and criticisms that were made. Some of the posts threw us off quite a bit as they seemed to be personal attacks rather than reasoned criticisms of our approach.

If you follow the linked blog posts, you will find that when Richman and/or Apte engaged with the arguments, they took a wounded tone. This is a stance they continue.

We thought it was a bit… much, shall we say, to compare us to the Nazis (yes, that happened, read the posts) or to the Tuskegee Experiment because we funded our project without first paying thousands of dollars for IRB approval for a project that had not (and might never have) happened.

I was one of the ones who brought up the Tuskegee Syphilis Experiment. Naturally, this was by way of making an illustrative example of why we have modern oversight of research experiments. I did not anticipate that any of the research planned by the uBiome folks would border on this sort of horrible mistreatment of research subjects. Not at all. And mentioning that older history does not so accuse them either.

PhysioProf made this point very well.

UPDATE 2: The need for IRB review has little to do with researchers’ intentions to behave ethically–nowadays it is rare that we are talking about genuinely evil exploitative abusive shitte–but rather that it is surprisingly complicated to actually implement processes, procedures, and protocls that thoroughly safeguard human subjects’ rights and safety, even with the best of intentions. This inquiry has absolutely nothing to so with whether the uBiome people are nice guys who just want to do cool science with the best of intentions. That is irrelevant.

IRBs are there exactly to ensure that earnest scientists with the best of intentions in their hearts are forced to think through all of the possible ramifications of their proposed human subjects research projects in a thorough and systematic manner before they embark on their research. The evidence we are in possession of as of now suggests strongly that uBiome has not done so.

This is a critical reason why scientists using human or animal subjects need to adhere to the oversight/protection mechanisms. The second critical reason is that the people doing the research are biased. Again, it is not the case that one thinks all scientists are setting out to do horrible Mengele type stuff in pursuit of their obsessions. No. It is that we all are subject to subtle influences on our thinking. And we, as humans, have a very well documented propensity to see things our own way, so to speak. Even when we think we are being totally objective and/or professional. By the very nature of this, we are unable to see for ourselves where we are going astray.

Thus, external oversight and review provides a needed check on our own inevitable bias.

We can all grumble about our battles with IRBs (and Institutional Animal Care and Use Committees for animal subject research). The process is far from perfect so a little bit of criticism is to be expected.

Nevertheless I argue that we should all embrace the oversight process unreservedly and enthusiastically. We should be proud, in fact, that we conduct our research under such professional rules. And we should not operate grudgingly, ever seeking to evade or bypass the IRB/IACUC process.

Richman and Apte of uBiome need to take this final step in understanding. They are not quite there yet:

Before we started our crowdfunding campaign, we consulted with our advisors at QB3, the startup incubator at UCSF, and the lawyers they provided us. We were informed (correctly) that IRBs are only required for federally funded projects, clinical trials, and those who seek publication in peer-reviewed journals. That’s right — projects that don’t want federal money, FDA approval, or to publish in traditional journals require no ethical review at all as far as we know.

Well, that is just plain wrong. Being a professional scientist is what "requires" us to seek oversight of our experiments. I believe I've used the example in the past of someone like me buying a few operant chambers out of University Surplus, setting them up in my garage and buying some rats from the local pet store. I could do this. I could do this without violating any laws. I could dose them* with all sorts of legally-obtainable substances, very likely. Sure, no legitimate journal would take my manuscript but heck, aren't we in an era where the open access wackaloons are advocating self-publishing everything on blogs? I could do that. Or, more perniciously, this could be my little pilot study incubator. Once I figured I was on to something, then I could put the protocols through my local IACUC and do the "real" study and nobody would be the wiser.

Nobody except me, that is. And this is why such a thing is never going to happen. Because I know it is a violation of my professional obligations as I see them.

Back to Richman and Apte's excuse making:

Although we are incubated in the UCSF QB3 Garage, we were told that we could not use UCSF’s IRB process and that we would have to pay thousands of dollars for an external IRB. We didn’t think it made sense (and in fact, we had no money) to pay thousands of dollars on the off chance that our crowdfunding campaign was a success.

and whining

We are happy to say that we have completed IRB review and that our protocol has been approved. The process was extremely time-consuming, and expensive. We went back and forth for months to finally receive approval, exchanging literally hundreds of pages of documents. We spent hundreds of hours on the project.

First, whatever the UCSF QB3 Garage is, it was screwing up if it never considered such issues. Second, crying poverty is no excuse. None whatsoever. Do we really have to examine how many evils could be covered under "we couldn't afford it"? Admittedly, this is a problem for this whole idea of crowd-funded science but..so what? Solve it. Just like they** had to solve the mechanisms for soliciting the donations in the first place. Third....yeah. Doing things ethically does require some effort. Just like conducting experiments and raising the funds to support them requires effort. Stop with the whining already!

The authors then go on in a slightly defensive tone about the fact that they had to resort to a commercial IRB. I understand this and have heard the criticisms of such Pay-for-IRB-oversight entities. From my perspective this is a much, much lesser concern. The absolute key is to obtain some oversight that is independent of the research team. That is first-principles stuff to my view. They also attempt to launch a discussion of whether novel approaches to IRB oversight and approvals need to be created to deal with citizen-science and crowd-funded projects. I congratulate them on this and totally agree that it needs to be discussed amongst that community.

What I do not appreciate is their excuse making. Admitting their error and seeking to generate new structures which satisfy the goal of independent oversight for citizen-science in the future is great. But all the prior whinging and excuse making, combined with the hairsplitting over legal requirements, severely undercuts that progress. That aspect of their argument tells their community that the traditional institutional approaches do not apply to them.

This is wrong.

UPDATE: Read uBiome is determined to be a cautionary tale for citizen science over at thebrokenspoke blog.
__
*orally. not sure civilians can get a legal syringe needle anywhere.

**(the global crowdfund 'they')

Additional Reading:

Animals in Research: The conversation begins
Animals in Research: IACUC Oversight

Animals in Research: Guide for the Care and Use of Laboratory Animals

Animals in Research: Mice and Rats and Pigeons...Oh My!
Virtual IACUC: Reduction vs. Refinement
Animals in Research: Unnecessary Duplication

34 responses so far

May 09 2013

Perspective on the use of animals in research

Published by under Animals in Research

We were just discussing the closure of the New England National Primate Research Center. One of the uncertainties about that decision of Harvard Medical School was where the approximately 2,000 animals of various species were to be placed. 2,000. Remember that number. Some of the news reporting also referred to the deaths of some of the NENPRC nonhuman primate subjects as a triggering cause for a two-year attempt to improve their procedures. These deaths occurred in ones and twos, going by the available news reporting...perhaps amounting to a dozen or a score of animals[ETA 4, see first comment]? We have no information on the timeframe over which those deaths occurred.

According to the LA Times, Malaysia has culled 97,000 cynomolgus macaques. Last year. The article claims they culled 88,000 of them in the previous year.

The cynomolgus macaque, btw, is a very commonly used species in research laboratories in the US. From Speaking of Research we learn that about 73,317 nonhuman primates were used in 2010. Of all species. This is out of an estimated 25 million vertebrate animals used in research for that year. And remember, for the longer-lived species such as dogs or nonhuman primates, the vast majority of studies use them across multi-year and even multi-decade intervals. So across time the comparison of the yearly use of, say, dogs versus mice tends to overestimate the dogs on a per-individual basis.

We are incredibly parsimonious with the approximately 1% of animals used in research that are cats, dogs or monkeys, the ones usually of greatest concern to the average person. Parsimonious with all of them, in reality. As the Speaking of Research page puts it:

Let us put the number of animals used in perspective. Scientists in the US use approximately 26 million animals in research, of which only around 1 million are not rats/mice/birds/fish. We use fewer animals in research than the number of ducks eaten per year in this country. We consume over 1800 times the number of pigs than the number used in research. We eat over 340 chickens for each animal used in a research facility, and almost 9,000 chickens for every animal used in research covered by the Animal Welfare Act. For every animal used in research, it is estimated that 14 more are killed on our roads.

Malaysia just euthanized (how we don't know but I'm pretty sure IACUC oversight wasn't involved) 185,000 monkeys in the past two years. Why?

Tourists and many Malaysians gather near jungle edges to watch the monkeys, snap photos of them and feed them peanuts and bananas. But the wildlife department, also known as Perhilitan, says the extensive culling was necessary to curb a “pest species” that breeds prolifically, adapts with ease, and ransacks homes for food.

“It is a hard decision, but in order to safeguard the well-being of people and to maintain a stable macaques population … it might be the best option in a short run,” the department said in an email to The Times.

185,000 culled as annoying pests over two years. A problem that may or may not have been increased by tourists. Even if the above stats reflected different individuals in each year, this is about equivalent to the total number of nonhuman primates used in US research laboratories. In fact, the number is likely to be much closer to the annual 73,317 count, given the longevity of the species. And we have no idea when Malaysia will stop culling.

In a related vein, the ASPCA says that about 3-4 million companion animals (dogs and cats) are euthanized in shelters in the US annually. Annually. Why? Because nobody wants them. Compare that with the 64,930 dogs and 21,578 cats used in research in 2010. That's 3,000,000 vs 86,508. Mere inconvenience and irresponsibility versus the advance of knowledge and the creation of new medical treatments for humans and animals alike.

The inadvertent deaths of a handful to perhaps a score of monkeys[ETA 4, see first comment] at the NENPRC led to a massive shutdown of research, a two-year reorganization process and ultimately the demise of the Center. A Center that made demonstrable advances in AIDS, drug abuse and Parkinson's disease, amongst other accomplishments.

At the very least this should give some perspective on how seriously the research enterprise and oversight system takes the humane treatment of research animals. It compares very favorably indeed with how the rest of the world (including the US) treats animals (yes, including companion species).

33 responses so far
