Rigor, reproducibility and the good kid

Feb 09 2018 | Published under Grant Review, NIH, NIH funding

I was the good kid.

In my nuclear family, in school and in pre-adult employment.

At one point my spouse was in a very large lab and observed how annoying it is when the PI reads everyone the riot act about the sins of a few lab-jerks.

Good citizens find it weird and off-putting when they feel criticized for the sins of others.

They find it super annoying that their own existing good behavior is not recognized.

And they are enraged when the jerko is celebrated for finally, at last, managing to act right for once.

Many of us research scientists feel this way when the NIH explains what they mean by their new initiative to enhance "rigor and reproducibility".


"What? I already do that, so does my entire subfield. Wait.....who doesn't do that?" - average good-kid scientist response to hearing the specifics of the R&R initiative.

9 responses so far

  • eeke says:

    Sometimes irreproducibility has nothing to do with naughty kids. I've failed to reproduce results from very good kids, an annoying and frustrating thing. Usually, it's a case where the good kid didn't disclose enough information about experimental conditions in the methods section, so the materials and conditions that I used were slightly different and likely caused all the problems. In another case, a not-so-good-kid didn't have a good control group, but the results were published anyway because pedigree child, he can't go wrong. So rigor about that stuff? No complaints from this kid; it's not something I'd cry about unless it was missing.

  • drugmonkey says:

    From this blog splainer https://nexus.od.nih.gov/all/2016/01/28/scientific-rigor-in-nih-grant-applications/

    A robust approach might include use of appropriate statistical methods, prospective sample size estimation, replicates,

    Me: duh

    Robust and credible results are those obtained with methods specifically designed to avoid bias, such as blinding, randomization, and prospectively defined exclusion/inclusion criteria

    Me: duh

    or how about the things identified here: https://www.nih.gov/research-training/rigor-reproducibility/principles-guidelines-reporting-preclinical-research

    Require that statistics be fully reported in the paper, including the statistical test used, exact value of N, definition of center, dispersion and precision measures

    Me: duh

    Require that investigators report how often each experiment was performed and whether the results were substantiated by repetition under a range of conditions. Sufficient information about sample collection must be provided to distinguish between independent biological data points and technical replicates.

    Me: AYFKMRN?

    Require authors to clearly state the criteria that were used for exclusion of any data or subjects. Include any similar experimental results that were omitted from the reporting for any reason,

    Me: duh

    ....and on and on it goes. Look, we may slip a little now and again and forget to include some detail. Most of the time it is caught in review, and the few times it is not, it can otherwise be deduced from my ongoing work.

    Because "Encourage the use of community-based standards" is how normal peer review works.

    The R&R push is basically the NIH saying that the "community based standards" of some subsets of science are falling chronically short of the acceptable mark. And all the good kids are being yelled at for the sins of the bad kids.

  • Morgan Price says:

    I think lots of subfields do have problems (see misidentified cell lines, “specific” antibodies, and clinical trials with shifting “primary” outcomes). It makes me wonder about some communities’ standards. And those areas could be a majority of NIH funding. In the communities without such obvious problems, well — there are still a lot of errors, but I’m not sure R&R will help.

  • qaz says:

    An interesting question is whether the bad kids are getting the rewards (NIH grants, publication in high impact GlamourMags, awards and recognition) more than the good kids. I'm not sure what the data is on that, but following your analogy, we all know that in many of those communities the bad kids often get rewards that the good kids don't.

    The only solution is for communities to create and enforce rules and regulations that support good kids over bad kids. If you are doing all these things, then it should be easy for you to meet them. There's lots of really good data on how to create a community that is robustly resistant to bad behavior coming from the community economics literature (Elinor Ostrom, David Sloan Wilson).

    The problem in my view is that a lot of these solutions are field-specific (like what defines your n, or data sharing, or preregistration of studies, or using stats in certain religious ways) and often don't apply to other fields. What's amazing to me is that in my observation, science is remarkably replicable, and when it is not, it usually indicates a very interesting and unexpected direction of research to pursue.

  • genomicrepairman says:

    DM, looking at the list on Nexus, I'm left with a feeling of, "Great, I'll file that under shit I already know and do." WTF is looking at this and experiencing a life-changing alteration in how they do research?

  • drugmonkey says:

    So you see my point, gr.

    It’s like when we get the latest restrictions on filing travel paperwork and it turns out some jackass has been charging $400 bottles of wine to NIH grants. So now it’s all “can you prove this $45 dinner check was just you?”

  • odyssey says:

    A list of do's and don'ts won't change anything.

  • girlparts says:

    Is it a response to actual wrongdoing in the scientific community, or just a response to politicians claiming there is wrongdoing in the scientific community? Although I do think that proposal length limits and an emphasis on significance have pushed descriptions of these practices out of proposals. And we all just assume we all follow these practices. It's a lot clearer in papers.

  • drugmonkey says:

    It’s pharma trying to offload yet more of its research and development costs that is behind all of this, IMO. Which is why I think we should point out how generally wrong the claims are at every chance.

    This relates to what I think is going on here: http://drugmonkey.scientopia.org/2014/07/08/the-most-replicated-finding-in-drug-abuse-science/
