Archive for the 'Data!!!!' category
We have some great stuff in the hopper.
Immediate term, the data flow is rocking; I get PI crack updates every few days that are fun and fascinating.
Middle term, the projects themselves are on track and doing what they are supposed to be doing, i.e., turning up unexpected leads for more studies. Moar! I say!
Long term, we have almost a programmatic effort going for at least two things I've been working on for a couple-three years.
The last is my *job*, of course. That's what the PI is supposed to be doing: getting the several-year plan harmonized into a program of investigation. Lining up the people, of course. And the funding. And wrangling the local institution into doing what you need it to do.
The data-crack is undeniable. It is a powerful and immediate reinforcer. So are new hypotheses and unexpected results that need to be figured out, investigated and nailed down with more data.
But the *programmatic* successes?
I like my job quite a lot today.
This is amazing. Strike that, AMAZING!
A paper published in PLoS ONE by Martin and colleagues examines the fate of R01 applications reviewed in 61 of the 172 standing study sections convened by the Center for Scientific Review of the NIH in a single round (the January 2009 Council one: submitted Jun-Jul 2008 and reviewed in Oct-Nov 2008).
It is going to take me a bit to go through all the data but let's start with Figure 1. This plots the preliminary scores (average of ~3 assigned reviewers) against the final priority score voted by the entire panel.
The first and most obvious feature is the tendency for discussion to make the best scores (lowest in the NIH scoring system) more extreme. I would suggest that this results from two factors. First, reviewers are reluctant (in my experience) to assign the best possible score prior to discussion. I don't understand this personally, but I guess I can grasp the psychology. People have the idea that perfection exists out there in some application and they want to reserve some room so that they can avoid having ever awarded a perfect score to a lesser application. Silly, but whatever. Once discussion starts and everyone is nodding along approvingly it is easier to drift to a more perfect score.
Second, there is a bit of the old "Fund that puppy NOW!" going on. Particularly, I would estimate, for applications that were near misses on a prior version and have come back in review. There can be a tendency to want to over-emphasize to Program staff that the study section found the application to be in the must-fund category.
Martin MR, Kopstein A, Janice JM, 2010 An Analysis of Preliminary and Post-Discussion Priority Scores for Grant Applications Peer Reviewed by the Center for Scientific Review at the NIH. PLoS ONE 5(11): e13526. doi:10.1371/journal.pone.0013526
I have a trainee running a study in which she is examining the effects of methamphetamine on Bunny Hopping using the established open field to hedgerow assay. The primary dependent variable is escape latency from stimulus onset to crossing the plane of the hedge.
She is examining the effects of a locomotor stimulant dose of methamphetamine derived from her pilot dose-response study versus vehicle, in groups of Bunnies which have been trained for six weeks in our BunnyConditioning Model and age-matched sedentary Bunnies. (The conditioning training consists of various sprint, long run, horizontal hop and vertical leap modules.)
So we have four groups of Bunnies as follows:
1. Conditioned, Vehicle
2. Conditioned, Meth
3. Sedentary, Vehicle
4. Sedentary, Meth
The trainee is actually a collaborating trainee and so these data involve the analytic input of multiple PIs in addition to the trainee's opinion. We are having a slight disagreement over the proper analysis technique so I thought I would turn to the brilliant DM readers.
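For concreteness, here is a minimal sketch of one obvious candidate analysis for a balanced 2×2 design like this: a two-way ANOVA with Drug (Vehicle vs. Meth) and Training (Conditioned vs. Sedentary) as factors and escape latency as the dependent variable, computed from scratch so you can see where each F ratio comes from. The latency numbers are invented for illustration; this is not a claim about what the right analysis for the actual Bunny data is.

```python
import numpy as np

def two_way_anova(cells):
    """Balanced two-way ANOVA from first principles.

    cells: nested list of shape (a, b, n) -- cells[i][j] holds the n
    escape latencies for drug level i and training level j.
    Returns the F statistics for the two main effects and the interaction.
    """
    data = np.array(cells, dtype=float)        # shape (a, b, n)
    a, b, n = data.shape
    grand = data.mean()
    mean_drug = data.mean(axis=(1, 2))         # marginal means over drug levels
    mean_train = data.mean(axis=(0, 2))        # marginal means over training levels
    cell = data.mean(axis=2)                   # per-cell means

    ss_drug = b * n * ((mean_drug - grand) ** 2).sum()
    ss_train = a * n * ((mean_train - grand) ** 2).sum()
    ss_inter = n * ((cell - mean_drug[:, None]
                     - mean_train[None, :] + grand) ** 2).sum()
    ss_error = ((data - cell[:, :, None]) ** 2).sum()

    df_drug, df_train = a - 1, b - 1
    df_inter, df_error = df_drug * df_train, a * b * (n - 1)
    ms_error = ss_error / df_error
    return {
        "drug": (ss_drug / df_drug) / ms_error,
        "training": (ss_train / df_train) / ms_error,
        "interaction": (ss_inter / df_inter) / ms_error,
    }

# Hypothetical escape latencies (seconds), purely for illustration:
# rows are Vehicle / Meth, columns are Sedentary / Conditioned.
latencies = [
    [[4.1, 3.8, 4.5], [2.9, 3.2, 3.0]],   # Vehicle: Sedentary, Conditioned
    [[2.2, 2.6, 2.4], [1.5, 1.8, 1.6]],   # Meth:    Sedentary, Conditioned
]
f_stats = two_way_anova(latencies)
```

The interaction term is the interesting one here: if conditioning changes the *size* of the meth effect on latency (rather than the two factors simply adding), that shows up as a large interaction F, which is presumably the crux of the disagreement among the PIs.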
You know those weeks where every day the data keep getting more and more exciting? Yeah, I'm having one of those times three...w00t!!!!
Odyssey observed recently that the most disposable resource in the laboratory ought to be... the hypothesis.
Well, I'm getting some pretty cool results from one of my projects right now. It required the application of a couple of technologies in combination, so it took us a while to get it running. I probably came up with the hypothesis three years ago, maybe two.
And now, I'm applying the approach we've developed to a slightly different question than originally intended, but close enough for BWAAHAHAHA! purposes. The question is fascinating and more novel anyway so we have a three-fer instead of a two-fer (or something like that).
Trouble is, these fascinating results are questioning the original hypothesis that I've been working toward testing. I have grant proposals written on this stuff!
But you know what? Being possibly wrong on my original hypothesis is no big deal, we'll just follow on from what our current data are telling us. It will still end up someplace that is interesting.
That's the beauty of not being obsessed with your theories and hypotheses. In a lot of ways you are much freer this way. You may not waste as much time driving your pet hypothesis straight through the dust and into the bedrock before you realize you were wrong.