Different subcultures of science may use certain phrases that send people in other traditions into paroxysms of critique.
Mostly it is because such phrasing can sound like bad science. As if the person using it doesn't understand how dangerous and horrible their thinking is.
We've gone a few rounds over storyboarding and representative images in the past.
Today's topic is "n-up", which is deployed, I surmise, after examining a few results, replicates or subjects that look promising for what the lab would prefer to be so. It raises my hackles. It smells to me like a recipe for confirmation bias and false alarms. To me.
Apparently this is normal phrasing for other people and merely indicates the pilot study is complete?
When you search for papers on PubMed, it usually gives the results in chronological order, so many new but irrelevant papers are at the top. When you search papers on Google Scholar, it usually gives results ranked by citations, so it will miss the newest exciting findings. Students in my lab recently made a very simple but useful tool, Gnosis. It ranks all the PubMed hits by (Impact Factor of the journal + Year), so you get the newest and most important papers first.
Emphasis added, as if I need to. You see, relevant and important papers are indexed by the journal impact factor. Of course.
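For the record, the scoring rule the quoted passage describes is trivially simple. Here is a minimal sketch of it; the records, field names, and example papers are hypothetical placeholders, not anything taken from the actual Gnosis tool:

```python
# A sketch of the "(Impact Factor + Year)" ranking the quote describes:
# score each hit by journal impact factor plus publication year, sort
# descending. All data below are made up for illustration.

def rank_hits(hits):
    """Sort paper records by (impact factor + year), highest score first."""
    return sorted(hits, key=lambda h: h["impact_factor"] + h["year"], reverse=True)

papers = [
    {"title": "Old classic", "impact_factor": 30.0, "year": 1995},
    {"title": "New but obscure", "impact_factor": 2.0, "year": 2012},
    {"title": "New and glamourous", "impact_factor": 30.0, "year": 2012},
]

for p in rank_hits(papers):
    print(p["title"])
```

Note what the arithmetic actually does: because year dwarfs any impact factor, this is essentially "sort by year, break ties by glamour" — which rather makes my point for me.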
One still occasionally gets whinging from some corner or other about not being able to run Analysis of Variance (ANOVA) statistical procedures because the data didn't pass a test of normality, i.e., a test of whether they appear to fit a normal distribution.
Paper reviewers, trainees, colleagues...this can come from any corner. It betrays a grad-school class level of understanding of what statistical analysis of data is supposed to do...but not a grasp of what it is doing for us at a fundamental level within the conduct of science.
Your stock response should be "the ANOVA is robust against violations of normality, move along".
I note that the company GraphPad, which makes the Prism statistical/curve fitting package beloved of behavioral pharmacologists, has a tidy FAQ answer.
The extracted version:
A population has a distribution that may be Gaussian or not. A sample of data cannot be Gaussian or not Gaussian. That term can only apply to the entire population of values from which the data were sampled...In almost all cases, we can be sure that the data were not sampled from an ideal Gaussian distribution... an ideal Gaussian distribution includes some very low negative numbers and some superhigh positive values...When collecting data, there are constraints on the possible values...Other variables can...have physical or physiological limits that don’t allow super large values... plenty of simulations have shown that these tests work well even when the population is only approximately Gaussian...It is hard to define what "close enough" means, and the normality tests were not designed with this in mind.
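The "simulations have shown that these tests work well" claim is easy to check for yourself. Below is a quick sketch of one such simulation: draw all groups from the same decidedly non-Gaussian (exponential) population, so the null hypothesis is true, and count how often a one-way ANOVA falsely declares a difference. The sample sizes and simulation count are arbitrary choices for illustration:

```python
# Simulate ANOVA's robustness to non-normality: three groups sampled from
# the SAME skewed (exponential) population, so any "significant" result is
# a false positive. The false-positive rate should stay near the nominal 5%.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
n_sims, alpha, hits = 2000, 0.05, 0

for _ in range(n_sims):
    groups = [rng.exponential(scale=1.0, size=20) for _ in range(3)]
    _, p = f_oneway(*groups)  # one-way ANOVA across the three groups
    hits += p < alpha

print(f"False positive rate: {hits / n_sims:.3f}")  # typically close to 0.05
```

Despite the population being about as non-Gaussian as real data get, the error rate hovers around the nominal level — which is what "robust against violations of normality" means in practice.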
I have a trainee running a study in which she is examining the effects of methamphetamine on Bunny Hopping using the established open field to hedgerow assay. The primary dependent variable is escape latency from stimulus onset to crossing the plane of the hedge.
She is examining the effects of a locomotor stimulant dose of methamphetamine derived from her pilot dose-response study versus vehicle in groups of Bunnies which have been trained for six weeks in our BunnyConditioning Model and age-matched sedentary Bunnies. (The conditioning training consists of various sprint, long run, horizontal hop and vertical leap modules.)
So we have four groups of Bunnies as follows:
1. Conditioned, Vehicle
2. Conditioned, Meth
3. Sedentary, Vehicle
4. Sedentary, Meth
The trainee is actually a collaborating trainee and so these data involve the analytic input of multiple PIs in addition to the trainee's opinion. We are having a slight disagreement over the proper analysis technique so I thought I would turn to the brilliant DM readers.
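To make the design concrete: those four groups form a 2×2 between-subjects factorial (Conditioning × Drug), and one obvious candidate analysis is a two-way ANOVA on escape latency with the interaction term included. Here is a sketch of that analysis; the latency values, group size, and variable names are simulated placeholders, not the trainee's actual data, and I am not claiming this is the analysis we settled on:

```python
# Sketch of a 2x2 factorial ANOVA for the four Bunny groups:
# factors are conditioning (trained vs. sedentary) and drug (meth vs. vehicle),
# dependent variable is escape latency. Data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
n = 12  # bunnies per group (hypothetical)

df = pd.DataFrame({
    "conditioning": np.repeat(["trained", "sedentary"], 2 * n),
    "drug": np.tile(np.repeat(["vehicle", "meth"], n), 2),
})
# simulated escape latencies in seconds, just to make the code runnable
df["latency"] = rng.normal(loc=10.0, scale=2.0, size=4 * n)

model = smf.ols("latency ~ conditioning * drug", data=df).fit()
print(anova_lm(model, typ=2))  # main effects plus conditioning x drug interaction
```

The interaction term is the interesting row here: it asks whether the meth effect differs between conditioned and sedentary Bunnies, which is presumably the whole point of running four groups instead of two.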
I stumbled back onto something I've been meaning to get to. It touches on the ethical use of animals in research, the oversight process for animal research and the way we think about scientific inference.
Now, as has been discussed here and there in the animal use discussions, one of the central tenets of the review process is that scientists attempt to reduce the number of animals wherever possible. Meaning without compromising the scientific outcome, the minimum number of subjects required should be used. No more.
run more subjects..
We accept as more or less bedrock that a result is real if it meets the appropriate statistical test to the standard of p < 0.05. Meaning that if you sampled the set of numbers you have sampled 100 times from the same underlying population, fewer than five times would you get the result you did by chance. From which you conclude it is likely that the populations are in fact different.
There is an unfortunate tendency in science, however, to believe that if your statistical test returns p < 0.01 that this result is better. Somehow more significant, more reliable or more...real. On the part of the experimenter, on the part of his supervising lab head, on the part of paper reviewers and on the part of readers. Particularly the journal club variety.
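One way to see why a single p < 0.01 is not "more real" than a single p < 0.05 is to watch the p-value bounce around across exact replications of the same experiment. The sketch below simulates a fixed, genuine group difference with a fixed sample size and shows how wildly the obtained p-value varies from run to run (the effect size and sample size are arbitrary choices for illustration):

```python
# The "dance of the p-values": replicate the identical experiment many times,
# with the same true effect and the same n, and watch the p-value jump around.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
pvals = []
for _ in range(1000):
    a = rng.normal(0.0, 1.0, 20)  # control group
    b = rng.normal(0.6, 1.0, 20)  # identical true effect every single time
    pvals.append(ttest_ind(a, b).pvalue)

pvals = np.array(pvals)
print(f"p < 0.05 in {np.mean(pvals < 0.05):.0%} of replications")
print(f"p < 0.01 in {np.mean(pvals < 0.01):.0%} of replications")
```

Same true effect, same design, yet some replications land under 0.01, some under 0.05, and plenty miss "significance" entirely. Which replication you happened to run tells you very little about how "real" the effect is.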
So you've just completed your last assays on physioprofitin signaling in the Namnezian complex. Lo and behold it is qaz-mediated, just like you suspected and the beccans are off the freaking chart. woot! PiT/PlS ratios are within relevant physiological ranges and still this work of art, your labor of love, came through with the experimental goods.
With a hope and a prayer you run your stats....and YES! p < 0.01!!!!!!!!!!!!!!!
What is the correct way to report your big Result?
In case you have been living under a rock (yet inexplicably reading this blog) Usain Bolt has now run both the 100 meter and 200 meter track events faster than anyone ever. The margin of improvement in the 100 meter event (which occurred earlier) was sufficient to start the sports world abuzz. Naturally, sports fans are willing to talk endlessly about the most absurd minutia and implications of such an event in terms pedestrian and embarrassingly overwrought.
YHN is no different.
The title is a question that is most frequently asked by a parent or close relative of an individual (typically male) who is in the early adult years. Said parent is clearly distressed by the career choices made/not made by their son, grandson or nephew who (they have finally acknowledged to themselves) smokes a lot of dope. Has for years and shows no signs of quitting.
The pot smoker has, of course, turned out to be a disappointment to their relatives in one way or another, typically vocationally. And they ask me, almost pleadingly, frequently with a tinge of self-flagellation, to confirm their suspicion that the pot smoking is at the root of junior's lack of gumption.
I have to tell them that nobody can satisfactorily answer this question for them. Not me, not science. Not with any degree of certainty, anyway.