The fun issue is as summarized in the recent post:
Student [commenter on original post] is exactly right that I have been a coauthor on papers using methods or reporting standards that I now publicly claim to be inappropriate. S/he is also right that my career has benefited substantially from papers published previously in high profile journals using these methods that I now claim to be inappropriate. ... I am in agreement that some of my papers in the past used methods or standards that we would now find problematic...I also appreciate Student's frustration with the fact that someone like myself can become prominent doing studies that are seemingly lacking according to today's standards, but then criticize the field for doing the same thing.
I made a few comments on the Twitts to the effect that this is starting to smell of odious ladder-pulling behavior.
One key point from the original post:
I would note that points 2-4 were basically standard practice in fMRI analysis 10 years ago (and still crop up fairly often today).
And now let us review the original critiques to which he is referring:
- There was no dyslexic control group; thus, we don't know whether any improvements over time were specific to the treatment, or would have occurred with a control treatment or even without any treatment.
- The brain imaging data were thresholded using an uncorrected threshold.
- One of the main conclusions (the "normalization" of activation following training) is not supported by the necessary interaction statistic, but rather by a visual comparison of maps.
- The correlation between changes in language scores and activation was reported for only one of the many measures, and it appeared to have been driven by outliers.
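The second and fourth of these critiques are quantitative points, so it may help to see what they amount to in practice. Below is a minimal simulation sketch (fabricated data with illustrative, assumed numbers; plain numpy/scipy, not the paper's actual analysis) showing that an uncorrected voxelwise threshold will "detect" activation in pure noise, and that a single extreme subject can manufacture a brain-behavior correlation. The third critique is the related, familiar point that the difference between a significant and a non-significant result is not itself necessarily significant, which is why an interaction test, not a side-by-side look at thresholded maps, is required.

```python
# Toy simulation of two of the critiques above. All data are fabricated;
# the voxel count, thresholds, and outlier values are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# -- Uncorrected thresholds: 50,000 voxels with NO true signal anywhere. --
# Under the null, p-values are uniform on [0, 1], so an uncorrected
# p < .001 cutoff passes roughly 0.001 * 50,000 = ~50 voxels by chance alone.
n_voxels = 50_000
p_vals = rng.uniform(0.0, 1.0, n_voxels)
print("noise voxels passing p<.001 uncorrected:", (p_vals < 0.001).sum())
print("noise voxels passing Bonferroni (.05/n):", (p_vals < 0.05 / n_voxels).sum())

# -- Outlier-driven correlation: 15 subjects with no real brain-behavior
# relationship, plus one hypothetical extreme subject at (6, 6). --
score_change = np.append(rng.normal(size=15), 6.0)  # e.g. language-score change
activation   = np.append(rng.normal(size=15), 6.0)  # e.g. ROI activation change

r, _   = stats.pearsonr(score_change, activation)   # leverage-prone
rho, _ = stats.spearmanr(score_change, activation)  # rank-based, more robust
print(f"Pearson r with one outlier: {r:+.2f}")
print(f"Spearman rho (rank-based):  {rho:+.2f}")
```

None of this settles whether such practices are flagrantly wrong or merely dispreferred, a question taken up below; it only makes concrete what the critiques are claiming.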
As I have mentioned on more than one occasion, I am one who finds value in the humblest papers and in the single reported experiment. Oftentimes it is such tiny, tiny threads of evidence that help our science, and the absence of any information whatsoever on a question that hinders us.
I find myself mostly able to determine whether the proper controls were used. More importantly, I find myself more swayed by the strength of the data and the experiment presented than I am by the claims made in the Abstract or Discussion about the meaning of the reported work. I'd rather be in a state of "huh, maybe this thing might be true (or false), pending these additional controls that need to be done" than a state of "dammit, why is there no information whatsoever on this thing I want to know about right now".
Yes, absolutely, I think that there are scientific standards that should be generally adhered to. I think the PSY105: Experimental Design (or similar) principles regarding the perfect experiment should be taken seriously...as aspirations.
But I think the notion that you "can't publish that" because of some failure to attain the Gold Plated Aspiration of experimental design is, as a hard and fast rule, stupid and harmful to science. Everything, but everything, should be reviewed intelligently and thoughtfully by the peers considering a manuscript for publication. In essence, taken on its merits. This is much as I take any published data on their own merits when deciding what I think they mean.
This is particularly the case when we start to think about the implications for career arcs and the limited resources that affect our business.
It is axiomatic that not everyone has the same interests, approaches and contingencies that affect their publication practices. This is a good thing, btw. In diversity there is strength. We've talked most recently around these parts about LPU incrementalism versus complete stories. We've talked about rapid vertical ascent versus riff-raff. Open Science Eleventy versus normal people. The GlamHounds versus small town grocers. ...and in any of these discussions we almost invariably start in on how subfields differ, etc.
Threaded through many of these conversations is the notion of gatekeeping. Of defining who gets to play in the sandbox on the basis of certain standards for how they conduct their science. What tools they use. What problems they address. What journals are likely to take their work for publication.
The gates control the entry to paper publication, job appointment and grant funding, among other things. You know, really frickin important stuff.
Which means, in my not at all humble opinion, that we should think pretty hard about our behavior when it touches on this gatekeeping.
We need to be very clear on when our jihadist "rules" for how science needs to be done separate right from wrong versus mere personal preference.
I do agree that we want to keep the flagrantly wrong out of the scientific record. Perhaps this is the issue with the triggering post on fMRI, but the admission that these practices still continue casts some doubt in my mind. It seems more like a personal preference. Or a jihad.
I do not agree that we need to put in strong controls so that all of science adheres to our personal preferences. Particularly when our personal preferences are for laziness and reflect our unwillingness to synthesize multiple papers or to think hard about the nature of the evidence behind the Abstract's claim. Even more so when our personal preferences really are coming from a desire to winnow a competitive field and make our own lives easier by keeping out the riff-raff.