Story boarding

(by drugmonkey) Jun 23 2015

When you "storyboard" the way a figure or figures for a scientific manuscript should look, or need to look, to make your point, you are on a very slippery slope.

It sets up a situation where you need the data to come out a particular way to fit the story you want to tell.

This leads to all kinds of bad shenanigans. From outright fakery to re-running experiments until you get it to look the way you want. 

Story boarding is for telling fictional stories. 

Science is for telling non-fiction stories. 

These are created after the fact. After the data are collected. With no need for storyboarding the narrative in advance.

32 responses so far

Placeholder figures

(by drugmonkey) Jun 23 2015

Honest scientists do not use "placeholder" images when creating manuscript figures. Period. 

See this nonsense from Cell

22 responses so far

Thought of the Day

(by drugmonkey) Jun 22 2015

I cannot tell you how comforting it is to know that no matter the depths and pedantry of my grant geekery, there is always a certain person to be found digging away furiously below me.

9 responses so far

(in)visible at scientific meetings

(by drugmonkey) Jun 22 2015

Some scientists prefer to occupy scientific meeting space as the proverbial fly on the wall.

They rarely, if ever, comment at the microphone. They are not to be found gesticulating wildly to a small group of peers around the coffee table.

Others loom large. Constantly at the microphone for comment. Glad-handing their buddies in every room before and after the session. Buttonholing POs at the slightest opportunity.

Someone just pointed this out to me, so I've been thinking about it.

Obviously nobody wants to end up being seen as a narcissistic blowhard who can't shut up and never has anything useful to say. 

But it is good to be known in your field*. And meeting visibility is part of that.   
__

*Cause and effect may not be simple here, I will acknowledge.

28 responses so far

Gender smog in grant review

(by drugmonkey) Jun 19 2015

I noticed something really weird and totally unnecessary.

When you are asked to review grants for the NIH you are frequently sent a Word document review template that has the Five Criteria nicely outlined and a box for you to start writing your bullet points. At the header to each section it sometimes includes some of the wording about how you are supposed to approach each criterion.

A recent template I received says under Investigator that one is to describe how the

...investigator’s experience and qualifications make him particularly well-suited for his roles in the project?

Grrr.

12 responses so far

Good problems

(by drugmonkey) Jun 19 2015

5 responses so far

The Germain nonsense on fixing the NIH

(by drugmonkey) Jun 19 2015

I know you guys want to talk about this ridiculous commentary because the blog e-phone has been ringing off the hook. Unfortunately I really don't have the time for a proper post.

Discuss 

UPDATE: One thing I noticed about the proposal that merits a little more....specific discussion.

I believe the NIH should transition to a system that links getting a first job (faculty appointment) with sufficient funding to support a reasonably sized laboratory (three to five people, including the PI) in terms of staff salaries and supplies

Obviously there is a big range in terms of types of staff and the amounts that they are paid. However, I think we can start with the salaries of a postdoc with zero experience on the NRSA scale ($42,840) and a fourth-year postdoc ($50,112). I am going to use $100,000 as the PI salary.

Benefits can range from 25% to 50% (again, as a rough approximation based on my limited experience with such numbers), which brings the $192,952 salary total to $241,190 or $289,428 per year for a three-person laboratory. That is salary cost only. Obviously types of research vary tremendously, but I have heard numbers in the range of 60% to 80% of research grant costs going to support staff salaries. Before we get into that, let's raise the estimates to Germain's upper bound of a lab of five individuals: the PI as above plus two postdocs at each experience level ($357,380 and $428,856 in salary-plus-benefits, depending on benefit rate).

With this estimate, if staff costs are 80% of research grant expenditure, the total grant comes to the $446,725-$536,070 per year range. If staff costs are 60%, we are in the $595,633-$714,760 range.
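(If you want to check my arithmetic, here is a minimal sketch in Python. The salary figures are the NRSA-scale and ballpark numbers quoted above; the benefit rates and staff-cost fractions are my rough bounds, not official NIH figures.)

```python
# Minimal sketch of the lab-cost arithmetic above.
# Salary figures are the NRSA-scale / ballpark numbers quoted in the post;
# benefit rates (25-50%) and staff fractions (60-80%) are rough bounds, not official values.

POSTDOC_Y0 = 42_840    # postdoc with zero experience, NRSA scale
POSTDOC_Y4 = 50_112    # fourth-year postdoc, NRSA scale
PI_SALARY  = 100_000   # ballpark PI salary

def staff_cost(salaries, benefit_rate):
    """Total salary cost including fringe benefits."""
    return sum(salaries) * (1 + benefit_rate)

three_person = [PI_SALARY, POSTDOC_Y0, POSTDOC_Y4]
five_person  = [PI_SALARY] + 2 * [POSTDOC_Y0] + 2 * [POSTDOC_Y4]

for rate in (0.25, 0.50):
    print(f"3-person lab @ {rate:.0%} benefits: ${staff_cost(three_person, rate):,.0f}")
    print(f"5-person lab @ {rate:.0%} benefits: ${staff_cost(five_person, rate):,.0f}")

# If staff salaries are a given fraction of total grant expenditure,
# the implied total grant cost is the staff cost divided by that fraction.
for fraction in (0.80, 0.60):
    low  = staff_cost(five_person, 0.25) / fraction
    high = staff_cost(five_person, 0.50) / fraction
    print(f"staff = {fraction:.0%} of grant: ${low:,.0f} - ${high:,.0f} per year")
```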

I invite you to compare these numbers, which Germain is recommending for 5-7 years starting presumably from Day 1, with the funding trajectories of yourself and your peers. At the upper bound, that is nearly three modular R01s' worth of funding (direct costs are capped at $250,000 per year per modular award) for the entire duration of the pre-tenure interval.

This call is for a LOT fewer noob Assistant Professors being allowed to get in the game, by my calculation. Either that, or a huge Congressional increase in the NIH budget, or a massive retirement of those who are already in the game.

Note that I too would love to see that be possible. It would be fantastic if everyone could get three grants worth of funding to do whatever the heck they wanted, right from the start.

But in the real actual non-fantasy world, that would come with some serious constraints on who can be a scientist.

And I do not like the ideas that people like Germain have about who those people should be.

59 responses so far

Scientists around the campfire sing kumbaya

(by drugmonkey) Jun 14 2015

One of the fantasy vibes I like at scientific meetings is the sense we're all pulling together towards the same ends. In harmony. As a team.

We have the same mountain to climb, the same dragons to slay and we're all just happy to play our part. 

The science is the thing.
We're not in competition and we aren't seething mad about the grant or paper review this peer right in front of us is suspected of writing.

We even pretend to think all of our peers' models, questions, theories and findings are highly valuable.

It's a good feeling to pretend, if only for a little while. 

Kumbaya, My Lord. Kumbaya.

13 responses so far

Re-Repost: The funding is the science II, "Why do they always drop the females?"

(by drugmonkey) Jun 11 2015

The NIH has recently issued the first round of guidance on inclusion of Sex as a Biological Variable in future NIH research grants. I am completely behind the spirit of the initiative but I have concerns about how well this is going to work in practice. I wrote a post in 2008 that detailed some of the reasons that have brought us to the situation where the Director of the NIH felt he had to coauthor an OpEd on this topic. I believe these issues are still present, will not be magically removed with new instructions to reviewers and need to be faced head-on if the NIH is to make any actual progress on ensuring SABV is considered appropriately going forward.

The post originally appeared December 2, 2008.


The title quote came from one of my early, and highly formative, experiences on study section. In the course of discussing a revised application it emerged that the prior version of the application had included a sex comparison. The PI had chosen to delete that part of the design in the revised application, prompting one of the experienced members of the panel to ask, quite rhetorically, "Why do they always drop the females?"

I was reminded of this when reading over Dr. Isis' excellent post [Update: Original Sb post lost, I think the repost can be found here] on the, shall we say, less pernicious ways that the course of science is slanted toward doing male-based research. Really, go read that post before you continue here, it is a fantastic description.

What really motivated me, however, was a comment from the always insightful Stephanie Z:

Thank you. That's the first time I've seen someone address the reasons behind ongoing gender disparities in health research. I still can't say as it thrills me (or you, obviously), but I understand a bit better now.

Did somebody ring?

As I pointed out explicitly at least once ([Update: Original 2007 post]), research funding has a huge role in what science actually gets conducted. Huge. In my book this means that if one feels that an area of science is being systematically overlooked or minimized, one might want to take a close look at the manner by which science is funded and the way by which science careers are sustained as potential avenues for systematic remedy.

Funding

There are a couple of ways in which the generalized problems with NIH grant review lead to the rhetorical comment with which I opened the post. One very common StockCritique of NIH grant review is that of an "over ambitious" research plan. As nicely detailed in Isis' post, the inclusion of a sex comparison doubles the groups right off the bat but even more to the point, it requires the inclusion of various hormonal cycling considerations. This can be as simple as requiring female subjects to be assessed at multiple points of an estrous cycle. It can be considerably more complicated, often requiring gonadectomy (at various developmental timepoints) and hormonal replacement (with dose-response designs, please) including all of the appropriate control groups / observations. Novel hormonal antagonists? Whoops, the model is not "well established" and needs to be "compared to the standard gonadectomy models", LOL >sigh<.

[Image: manWomanControlPanel.jpg, captioned "Grant reviewers prefer simplicity"]
Keep in mind, if you will, that there is always a more fundamental comparison or question at the root of the project, such as "does this drug compound ameliorate cocaine addiction?" So all the gender comparisons, designs and groups need to be multiplied against the cocaine addiction/treatment conditions. Suppose it is one of those cocaine models that requires a month or more of training per group? Who is going to run all those animals? How many operant boxes / hours are available? And at what cost? Trust me, the grant proposal is going to take fire for "scope of the project".
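To make the scope problem concrete, here is a hedged sketch of how the groups multiply. The factor names and levels are illustrative assumptions, not a real design.

```python
# Illustrative only: how adding a sex comparison (with estrous-cycle
# tracking for females) multiplies the groups in a hypothetical
# cocaine-treatment study. Factor levels are made up for this example.
from itertools import product

sexes   = ["male", "female"]
phases  = ["n/a", "proestrus", "estrus", "diestrus"]  # cycle phase; "n/a" for males
doses   = ["vehicle", "low", "high"]                  # hypothetical drug compound
cocaine = ["cocaine", "saline control"]

groups = [
    (sex, phase, dose, cond)
    for sex, phase, dose, cond in product(sexes, phases, doses, cocaine)
    if (sex == "male") == (phase == "n/a")  # males untracked, females cycle-tracked
]

male_only = [g for g in groups if g[0] == "male"]
n = 10  # per-group size; purely an assumption for illustration

print(f"male-only design: {len(male_only)} groups (~{n * len(male_only)} animals)")
print(f"with sex comparison: {len(groups)} groups (~{n * len(groups)} animals)")
```

Even before adding the gonadectomy and hormone-replacement arms, the sex comparison quadruples the group count in this toy example (24 groups versus 6).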

Another StockCritique to blame is "feasibility". Two points here, really. First is the question of Preliminary Data: of course, if you have to run more experimental conditions to establish that you might have a meritorious hypothesis, you are less likely to do it with a fixed amount of pilot/startup/leftover money. Better to work on preliminary data for two or three distinct applications than just one if you have the funds. The second aspect has to do with a given PI's experience with the models in question. More opportunity to say "The PI has no idea what s/he is doing methodologically" if s/he has no prior background with the experimental conditions, which are almost always the female-related ones. As we all know, it matters little that the hormonal assays or gonadectomy or whatever procedures have been published endlessly if you don't have direct evidence that you can do it. Of course, more latitude is extended to the more-experienced investigator....but then s/he is less likely to jump into gender-comparisons in a sustained way, in contrast to a newly minted PI.

Then there are the various things under grantspersonship. You have limited space in a given type of grant application. The more groups and comparisons, the more you have to squeeze in with respect to basic designs, methods and the interpretation/alternative approaches part. So of course you leave big windows for critiques of "hasn't fully considered...." and "it is not entirely clear how the PI will do..." and "how the hypothesis will be evaluated has not been sufficiently detailed...".

Career

Although research funding plays a huge role in career success, it is only part of the puzzle. Another critical factor is what we consider to be "great" or "exciting" science in our respective fields.

The little people can fill in the details. This is basically the approach of GlamourMagz science. (This is a paraphrase of something the most successful GlamourMagz PI I know actually says.) Cool, fast and hot is not compatible with the metastasizing of experimental conditions that is an inevitable feature of gender-comparison science. Trouble is, this approach tends to trickle down in various guises. Lower (than GlamourMag) impact factor journals sometimes try to upgrade by becoming more NS-like (Hi, J Neuro!). Meticulous science and exacting experimental designs are only respected (if at all) after the fact. Late(r) in someone's career they start getting props on their grant reviews for this. Early? Well, the person hasn't yet shown the necessity and profit for the exhaustive designs and instead they just look...unproductive. Like they haven't really shown anything yet.

As we all know splashy CNS pubs on the CV trump a sustained area of contribution in lower journals six ways to Sunday. This is not to say that nobody will appreciate the meticulous approach, they will. Just to say that high IF journal pubs will trump. Always.

So the smart young PI is going to stay away from those messy sex-differences studies. Everything tells her she should. If she does dip a toe, she's more likely to pay a nasty career price.

This is why NIH efforts to promote sex-comparison studies are necessary. Special funding opportunities are the only way to tip the equation even slightly in favor of the sex-differences side. The lure of the RFA is enough to persuade the experienced PI to write in the female groups. To convince the new PI that she might just risk it this one time.

My suspicion is that it is not enough. Beyond the simple need to take a stepwise approach to the science as detailed by Isis, the career and funding pressures are irresistible forces.

9 responses so far

Professional Differences

(by drugmonkey) Jun 10 2015

In real science, i.e., science that includes variability around a central tendency, we deal with uncertainty.

We believe, however, that there IS a central tendency, an approximate truth, a phenomenon or effect. But we understand that any single viewpoint, datum or even whole study may only reflect some part of a larger distribution. That part may or may not always give an accurate viewpoint on the central tendency.

So we have professional standards in place that attempt to honestly reflect this variable reality.

Most simply, we present the central tendency of effects (e.g., mean, median or mode) and some indication of variability around that central tendency (standard error, interquartile range, etc).
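(As a concrete illustration of that convention, a small Python sketch with made-up numbers: report the mean with its standard error, or the median with its interquartile range.)

```python
# Made-up measurements, purely to illustrate reporting a central
# tendency alongside a measure of variability around it.
import statistics

data = [4.1, 3.8, 5.0, 4.4, 4.7, 3.9, 4.3]

mean = statistics.mean(data)
sem  = statistics.stdev(data) / len(data) ** 0.5   # standard error of the mean

median = statistics.median(data)
q1, _, q3 = statistics.quantiles(data, n=4)        # quartile cut points (Python 3.8+)

print(f"mean +/- SEM: {mean:.2f} +/- {sem:.2f}")
print(f"median [IQR]: {median:.2f} [{q1:.2f}, {q3:.2f}]")
```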

Even when we present a single observation (such as a pretty picture of a kidney or brain slice all highlighted up with immunohistochemical tags) we assert that the image is representative. This statement means that this individual image has been judged to be close to the central tendency of the images that were used to generate the distributional estimates that contribute to the numerical central tendency and variability graphs / tables presented.

Now look, I understand that it is a bit of a joke. There are abundant cracks and redefinitions that point out that the "most representative image" really means "the image that best makes our desired point".

There is a critically important point here. Our profession does not validate the least representative image as an acceptable standard. Our professional standards say that it really should be representative if we ever present N=1 observations as data.

The alleged profession of journalism does not concern itself with truth and representativeness at all.

Their professional ethical standards, to the extent they exist, focus on whether the N=1 actually occurred AT ALL. In addition, they focus on whether that datum was collected fairly by their rules, i.e., was the quote on the record. Accuracy, again for the alleged profession, focuses only on episodic truth. Did this interviewee literally string these words together in this order at some point in time during the interview? If so, then the quote is accurate. And can be used in a published work to support the notion that this is what that interviewee saw, experienced or believes.

It is entirely irrelevant to the profession of journalism if that accident of strung-together words communicates the best possible representation of the truth of what that person saw, experienced or believes. Truth, in this sense, is not the primary professional ethical concern of journalism.

If the journalist pulls a quote out of an hour of conversation that best fits their pre-existing agenda with respect to the story they are planning to tell, it literally does not matter if every other sentence spoken by that person tells a different tale. It's totally okay because that interviewee literally said those words in that order on the record (and it is on tape!).

If a scientist processes twenty brains in the experiment, grabs the one outlier that tells the story they want to tell, trashes the 19 that say the opposite and calls it a representative image (even by inference if not directly)....this is fraud and data fakery. Not okay. Clearly outside the professional bounds.

That, my friends, is the difference.

And this is why you should only agree to talk to journalists* that will send you a nearly final draft of their piece to ensure that you have been represented accurately.

If every single one of us scientists insisted on this, it would go a long way to snapping the alleged profession into line. And greatly improve the accurate communication of scientific findings and understandings to nonspecialist** audiences.

__
Representative image from here.

*They exist! I have interacted with more than one of these myself.

**Reminder, we ourselves are nonspecialist consumers of much of the science-media. We have two interests here.

21 responses so far
